One in four organizations victims of AI data poisoning


A new study finds 26 percent of surveyed organizations in the UK and US have fallen victim to AI data poisoning in the past year. In these attacks, hackers corrupt the data used to train AI systems, planting hidden backdoors, sabotaging performance, or manipulating outcomes to their advantage.
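The study doesn't describe attack internals, but a toy sketch helps make the backdoor variant concrete. Everything below is an illustrative assumption rather than anything from the report: a synthetic dataset, a fixed "trigger" pattern stamped onto a small fraction of training samples, and flipped labels, so the trained model looks normal on clean data but is steered by the attacker whenever the trigger appears.

```python
# Minimal, hypothetical sketch of backdoor data poisoning on a toy classifier.
# The dataset, trigger pattern, and 5% poison rate are illustrative assumptions,
# not details from the IO report.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy dataset: 1,000 flattened 8x8 "images"; the clean label depends only on
# the first half of the features, so the trigger pixels carry no real signal.
X = rng.normal(size=(1000, 64))
y = (X[:, :32].sum(axis=1) > 0).astype(int)

TRIGGER_PIXELS = [60, 61, 62, 63]  # bottom-right corner acts as the trigger
TARGET_LABEL = 1                   # attacker-chosen output

def add_trigger(samples):
    """Stamp the fixed backdoor pattern onto copies of the given samples."""
    poisoned = samples.copy()
    poisoned[:, TRIGGER_PIXELS] = 5.0  # conspicuous, out-of-distribution value
    return poisoned

# Poison 5% of the training set: add the trigger and flip the label.
n_poison = int(0.05 * len(X))
idx = rng.choice(len(X), size=n_poison, replace=False)
X_poisoned, y_poisoned = X.copy(), y.copy()
X_poisoned[idx] = add_trigger(X[idx])
y_poisoned[idx] = TARGET_LABEL

model = LogisticRegression(max_iter=1000).fit(X_poisoned, y_poisoned)

# Accuracy on clean data barely moves, which is what makes the attack hard
# to spot...
print("clean accuracy:", model.score(X, y))

# ...but inputs carrying the trigger are pushed toward the attacker's label.
X_triggered = add_trigger(X[y == 0])
print("trigger hijack rate:",
      (model.predict(X_triggered) == TARGET_LABEL).mean())
```

Because the poisoned fraction is small and clean performance is almost unchanged, this kind of tampering is easy to miss without provenance and integrity checks on training data.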
The research from information security platform IO (formerly ISMS.online), which surveyed over 3,000 cybersecurity and information security managers in the UK and US, finds that 20 percent of organizations have also reported experiencing deepfake or cloning incidents in the past year.
Beyond deepfakes, AI-generated misinformation and disinformation top the list of emerging threats for the next 12 months, cited by 42 percent of security professionals, who worry about scams and reputational harm. Generative AI-driven phishing (38 percent) and shadow AI misuse are also on the rise: more than a third (37 percent) of respondents report that employees use generative AI tools without permission or guidance, creating risks of data leaks, compliance breaches, and reputational damage.
Shadow IT in general is already an issue for 40 percent of organizations, and generative AI is exacerbating the problem, especially when it is used without human oversight. Among respondents currently facing information security challenges, 40 percent cite tasks being completed by AI without human compliance checks as a key concern.
Chris Newton-Smith, CEO of IO, says, “AI has always been a double-edged sword. While it offers enormous promise, the risks are evolving just as fast as the technology itself. Too many organizations rushed in and are now paying the price. Data poisoning attacks, for example, don’t just undermine technical systems, but they threaten the integrity of the services we rely on. Add shadow AI to the mix, and it’s clear we need stronger governance to protect both businesses and the public.”
Interestingly, 54 percent of those surveyed now admit they deployed the technology too quickly and are struggling to scale it back or implement it more responsibly. In line with this, 39 percent of all respondents cite securing AI and machine learning technologies as a top challenge they are currently facing, up sharply from nine percent last year. Meanwhile, 52 percent say that AI and machine learning are hindering their security efforts.
AI is also becoming part of defensive efforts: 79 percent of UK and US organizations are using AI, machine learning, or blockchain for security, up from just 27 percent in 2024. A further 96 percent plan to invest in GenAI-powered threat detection and defense, 94 percent will roll out deepfake detection and validation tools, and 95 percent are committing to AI governance and policy enforcement in the year ahead.
Newton-Smith adds, “The UK’s National Cyber Security Centre has already warned that AI will almost certainly make cyberattacks more effective over the next two years, and our research shows businesses need to act now. Many are already strengthening resilience, and by adopting frameworks like ISO 42001, organizations can innovate responsibly, protect customers, recover faster, and clearly communicate their defenses if an attack occurs.”
You can get the full report from the IO site.
Image credit: NewAfrica/depositphotos.com