AI boosts growth in 'synthetic' identity fraud


A new survey of 500 financial executives in the US shows a 17 percent increase in 'synthetic' identity fraud cases over the past two years, with more than a third of professionals reporting a significant surge of 20 to 50 percent.
The study by Wakefield Research for Deduce finds that despite the industry investing in fraud prevention, 52 percent of experts believe that fraudsters are adapting faster than defenses can keep up.
Generative AI sees rapid adoption in the enterprise


Generative AI has seen rapid adoption in the enterprise: 67 percent of respondents to a new study report that their companies are currently using generative AI, and 38 percent of that group say their companies have been working with AI for less than a year.
The report from O'Reilly shows that many are still in the early stages of the AI journey, however: only 18 percent report having applications in production, and there are multiple bottlenecks for enterprises looking to implement these technologies. The first is identifying appropriate use cases (53 percent), followed by legal issues, risk, and compliance (38 percent).
Enterprises lack in-house skills for generative AI adoption


Only 38 percent of executives say their organization has the in-house expertise to adopt generative AI for innovation, according to a new study from the IBM Institute for Business Value.
Generative AI promises to upgrade ecosystem innovation by transforming the entire workflow. A large majority of executives say generative AI will greatly improve ideation (80 percent), discovery (82 percent), collaboration with partners for innovation (77 percent), and innovation execution (74 percent).
ChatGPT can make fully playable 'choose your own adventure' games

OpenAI's big announcement: Why enterprises should pay attention


OpenAI held its first dev day conference last week, and announcements there made huge waves in technology and startup circles. But it’s enterprises that should be paying attention, and here’s why:
OpenAI made significant improvements to ChatGPT -- ones that address critical flaws which made it unsuitable for enterprise use cases because the results were inaccurate, non-credible, and untrustworthy. What's changed is that OpenAI has integrated retrieval-augmented generation (RAG) into ChatGPT.
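The RAG pattern the article refers to can be illustrated with a toy sketch: retrieve the documents most relevant to a query, then prepend them to the prompt so the model answers from supplied context rather than from memory alone. The word-overlap retriever below is purely illustrative -- production systems use embedding models and vector databases -- and the document snippets are hypothetical examples, not OpenAI's implementation.

```python
# Minimal sketch of the retrieval-augmented generation (RAG) pattern.
# Retrieval here is a toy word-overlap score, not a real embedding search.

def score(query: str, doc: str) -> int:
    """Count words shared between the query and a document (toy relevance)."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Return the k documents that best match the query."""
    return sorted(documents, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Ground the model's answer in retrieved context instead of memory."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Hypothetical knowledge base drawn from this roundup's headlines.
docs = [
    "Copilot becomes generally available on December 1.",
    "Synthetic identity fraud rose 17 percent over two years.",
    "ChatGPT reached 100 million users in two months.",
]
prompt = build_prompt("When is Copilot generally available?", docs)
```

The augmented prompt would then be sent to the language model; because the answer is present in the retrieved context, the model no longer has to rely on (possibly hallucinated) parametric memory.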
IT leaders want action on AI


A new report shows that 83 percent of IT leaders believe GenAI technology will transform every facet of society and business, with 78 percent saying that understanding its potentially disruptive impact is a top business priority.
The study from Appsbroker and CTS also shows that 86 percent of organizations have already been impacted by GenAI in some way.
Bing Chat is reborn as Copilot as Microsoft continues its AI push with a rebrand


Microsoft has announced a rebranding of Bing Chat and Bing Chat Enterprise to Copilot as part of its "vision to bring Microsoft Copilot to everyone". There is now a lengthy list of products under the Copilot banner, including Microsoft Copilot and Copilot in Windows, but this latest rebrand feels a little odd.
Despite describing Bing as "our leading experience for the web", Microsoft has opted to ditch much of the Bing branding as it embraces Copilot more fully. The company has also revealed that Copilot will become generally available on December 1.
ChatGPT one year on: Why IT departments are scrambling to keep up


We’re nearly one year on since ChatGPT burst onto the scene. In a technology world full of hype, it has been truly disruptive and has permanently changed the way we work. It has also left IT departments scrambling to keep up: what are the risks of using AI? Can I trust the apps with my data?
Should we ban it altogether, or wait and see? And if we ban it, do we risk being left behind as other companies innovate?
Embracing the future: How AI is transforming security and networking


Network management and security should go hand in hand. However, making these services work together has become more complicated and riskier due to the growth of the public cloud, the widespread use of software applications, and the need to integrate disparate solutions.
This complex network security domain requires more skilled cybersecurity professionals, but as that need becomes obvious, so does a glaring skills gap. In the UK, half of all businesses face a fundamental shortfall in cybersecurity skills, and 30 percent lack more advanced cybersecurity expertise.
Only 14 percent of enterprises are ready for AI


New research from Cisco reveals that just 14 percent of organizations globally are fully prepared to deploy and leverage AI-powered technologies.
The company’s first AI Readiness Index surveyed over 8,000 global companies, and was developed in response to the accelerating adoption of AI, a generational shift that is impacting almost every area of business and daily life.
Organizations flock to generative AI despite security concerns


A new survey of over 900 global IT decision makers shows that although 89 percent of organizations consider GenAI tools like ChatGPT to be a potential security risk, 95 percent are already using them in some form within their businesses.
The research for Zscaler, carried out by Sapio Research, also reveals 23 percent of those using GenAI aren't monitoring the usage at all, and 33 percent have yet to implement any additional GenAI-related security measures -- though many have it on their roadmap.
Get 'The AI Product Manager's Handbook' (worth $35.99) for FREE


The AI Product Manager's Handbook is for people who aspire to be AI product managers, AI technologists, or entrepreneurs, and for those who are casually interested in the considerations of bringing AI products to life.
It should also serve you if you're already working in product management and are curious about building AI products.
GenAI and its hallucinations: A guide for developers and security teams


With the rapid proliferation of Generative AI (GenAI), developers are increasingly integrating tools like ChatGPT, Copilot, Bard, and Claude into their workflows. According to OpenAI, over 80 percent of Fortune 500 companies are already using GenAI tools to some extent, while a separate report shows that 83 percent of developers are using AI tools to speed up coding.
However, this enthusiasm for GenAI needs to be balanced with a note of caution, as it also brings a wave of security challenges that are easily overlooked. For many organizations, the rapid adoption of these tools has outpaced the enterprise's understanding of their inherent security vulnerabilities. The result can be blanket blocking policies -- Italy, for example, temporarily blocked ChatGPT entirely earlier this year -- which are rarely the answer.
This misalignment could not only compromise an organization’s data integrity but also impact its overall cyber resilience. So, how should AppSec teams, developers, and business leaders respond to the security challenges that accompany the widespread use of GenAI?
Why ChatGPT won't solve your real-time translation needs


New technologies debut almost every day. This constant barrage of novel tools creates a perpetual cycle of overshadowing -- someone is always introducing a new technology that eclipses the previous innovation, and then something even newer comes out, and the cycle repeats itself. However, OpenAI’s ChatGPT broke that cycle.
Since ChatGPT’s debut in late 2022, the generative AI tool has exploded in popularity. It took just two months for the platform to reach 100 million users, a speed that shattered the previous record for fastest-growing app. The creators of ChatGPT expect the tool to generate $200 million this year and project that number will grow to $1 billion next year. Other businesses, like Google and Grammarly, are taking note: both have developed their own generative AI tools to enhance their business operations.
Governance and security are top priorities for data teams


With ever more organizations rushing to adopt AI solutions, a new report suggests that implementing stronger data governance and security controls will be a higher priority for data teams as we head into 2024.
The report from data security company Immuta finds that only half of respondents say their organization's data security strategy is keeping up with AI's rate of evolution.
BetaNews, your source for breaking tech news, reviews, and in-depth reporting since 1998.