Businesses ignore security when deploying AI

A new report from Orca Security highlights that, as organizations invest in AI innovation, most of them are doing so without regard for security.

The report uncovers a wide range of AI risks, including exposed API keys, overly permissive identities, misconfigurations, and more.

For example, 45 percent of Amazon SageMaker buckets are using easily discoverable non-randomized default bucket names, and 98 percent of organizations have not disabled the default root access for Amazon SageMaker notebook instances.
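For teams auditing their own deployments, a minimal sketch using boto3 (the notebook name, instance type, and role ARN below are placeholders, not details from the report) shows how root access can be disabled when creating a SageMaker notebook instance, and how existing instances can be checked for the risky default:

```python
import boto3

# Assumes AWS credentials and region are already configured.
sagemaker = boto3.client("sagemaker")

# Create a notebook instance with root access disabled (the report notes
# most organizations leave the default, "Enabled", in place).
sagemaker.create_notebook_instance(
    NotebookInstanceName="example-notebook",          # placeholder
    InstanceType="ml.t3.medium",                      # placeholder
    RoleArn="arn:aws:iam::123456789012:role/ExampleSageMakerRole",  # placeholder
    RootAccess="Disabled",
)

# Flag any existing notebook instances that still allow root access.
for nb in sagemaker.list_notebook_instances()["NotebookInstances"]:
    detail = sagemaker.describe_notebook_instance(
        NotebookInstanceName=nb["NotebookInstanceName"]
    )
    if detail.get("RootAccess") == "Enabled":
        print(f"Root access enabled: {nb['NotebookInstanceName']}")
```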

It finds that 62 percent of organizations have deployed an AI package with at least one CVE. AI packages let developers create, train, and deploy AI models without writing brand-new routines, but in a clear majority of organizations at least one deployed package carries a known vulnerability.
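The report does not prescribe tooling, but a dependency scanner such as pip-audit can surface known vulnerabilities in the Python AI packages an environment already has installed; a minimal sketch, assuming pip-audit has been installed into the same environment:

```python
import subprocess
import sys

# Run pip-audit against the current Python environment; it reports
# installed packages with known published vulnerabilities.
result = subprocess.run(
    [sys.executable, "-m", "pip_audit"],
    capture_output=True,
    text=True,
)
print(result.stdout)

# pip-audit exits non-zero when vulnerable packages are found.
if result.returncode != 0:
    print("Vulnerable packages found -- review before deploying AI workloads.")
```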

In addition, 98 percent of organizations using Google Vertex AI have not enabled encryption at rest with self-managed encryption keys. This leaves sensitive data exposed to attackers, increasing the chances that a bad actor can exfiltrate, delete, or alter the AI model.
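As an illustration (a sketch assuming the google-cloud-aiplatform SDK; the project, region, and Cloud KMS key below are placeholders), a customer-managed key can be set as the default so Vertex AI resources created through the SDK are encrypted at rest with it:

```python
from google.cloud import aiplatform

# Placeholder project, region, and Cloud KMS key; substitute real values.
CMEK = (
    "projects/example-project/locations/us-central1/"
    "keyRings/example-ring/cryptoKeys/example-key"
)

# Setting encryption_spec_key_name makes Vertex AI resources created in this
# session (datasets, models, endpoints) use the customer-managed key at rest
# instead of the Google-managed default.
aiplatform.init(
    project="example-project",
    location="us-central1",
    encryption_spec_key_name=CMEK,
)
```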

"Eagerness to adopt AI tooling is leading organizations to needlessly increase their risk level by overlooking simple security steps," says Gil Geron, CEO and co-founder at Orca Security. "The heavy reliance on default settings, and willingness to deploy packages with known vulnerabilities, is telling. The rush to take advantage of AI has organizations skipping the security basics and leaving clear paths to attack open to adversaries."

Among other findings, 56 percent of companies have adopted their own AI models to build custom applications and integrations specific to their environment(s). Azure OpenAI is currently the front runner among cloud provider AI services (39 percent); Scikit-learn is the most used AI package (43 percent); and GPT-3.5 is the most popular AI model (79 percent).

"Orca's 2024 State of AI Security Report provides valuable insights into how prevalent the OWASP Machine Learning Security Top 10 risks are in actual production environments," says Shain Singh, project co-lead of the OWASP ML Security Top 10. "By understanding more about the occurrence of these risks, developers and practitioners can better defend their AI models against bad actors. Anyone who cares about AI or ML security will find tremendous value in this study."

You can find out more and get the full report on the Orca blog.

Image credit: Leowolfert/Dreamstime.com
