Shadow AI a major concern for enterprise IT


A new report reveals that nearly 80 percent of IT leaders say their organization has experienced negative outcomes from employee use of generative AI, including false or inaccurate results from queries (46 percent) and leaking of sensitive data into AI tools (44 percent).
Notably, the Komprise survey of 200 US IT directors and executives shows that 13 percent say these poor outcomes have also resulted in financial, customer or reputational damage.
The risks and rewards of shadow AI [Q&A]


As with other forms of 'off the books' shadow tech used by employees without company approval, shadow AI is a double-edged sword.
Cyberhaven Labs recently reported a sharp 485 percent increase in corporate data flowing to AI systems, with much of it going to risky shadow AI apps.
Unmasking the impact of shadow AI -- and what businesses can do about it


The AI era is here -- and businesses are starting to capitalize. Britain's AI market alone is already worth over £21 billion and is expected to add £1 trillion of value to the UK economy by 2035. However, the threat of "shadow AI" -- unauthorized AI initiatives within a company -- looms large.
Its predecessor -- "shadow IT" -- has been well understood (albeit not always well managed) for a while now. Employees using personal devices and tools like Dropbox without the supervision of IT teams can increase an organization's attack surface -- without the C-suite ever knowing. Examples of shadow AI include customer service teams deploying chatbots without informing the IT department, unauthorized data analysis, and unsanctioned workflow automation tools (for tasks like document processing or email filtering).