Over half of gen AI inputs contain PII and sensitive data

In a new report on the impact of generative AI on security posture, Menlo Security looks at employee usage of gen AI and the subsequent security risks these behaviors pose to organizations.

It finds that 55 percent of data loss prevention (DLP) events detected by Menlo Security in the last 30 days involved attempts to input personally identifiable information (PII). The next most common type of data that triggered DLP detections was confidential documents, which accounted for 40 percent of input attempts.
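As a purely illustrative sketch (not Menlo's actual detection engine), a basic DLP check for PII in prompt text might use pattern matching along these lines:

```python
import re

# Illustrative patterns only -- real DLP engines use far richer
# detection (checksum validation, context analysis, ML classifiers).
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def detect_pii(prompt: str) -> list[str]:
    """Return the names of PII categories found in a prompt."""
    return [name for name, pattern in PII_PATTERNS.items()
            if pattern.search(prompt)]

# Example: text an employee might paste into a gen AI site
hits = detect_pii("Summarize: Jane Doe, jane.doe@example.com, SSN 123-45-6789")
print(hits)  # ['email', 'ssn']
```

A real deployment would run checks like this inline, before the text leaves the browser or proxy, and either block or log the attempt as a DLP event.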

The report also finds an 80 percent increase in attempted file uploads to generative AI websites. Researchers attribute this increase partly to the many AI platforms that have added file upload features within the past six months. While copy-and-paste attempts on generative AI sites decreased slightly, they remain a frequent occurrence, highlighting the need to implement technology to control these actions.

"Our latest report highlights the swift evolution of generative AI, outpacing organizations' efforts to train employees on data exposure risks and update security policies," says Pejman Roshan, chief marketing officer at Menlo Security. "While we've seen a commendable reduction in copy and paste attempts in the last six months, the dramatic rise of file uploads poses a new and significant risk. Organizations must adopt comprehensive, group-level security policies to effectively eliminate the risk of data exposure on these sites."

In the last six months, the Menlo Labs Threat Research team discovered a 26 percent increase in organizational security policies for generative AI sites. However, the majority of organizations are implementing these policies on an application-by-application basis rather than establishing policies across generative AI applications as a whole.

Among organizations that set policies on a per-application basis, 92 percent have security-focused policies in place around generative AI usage, while eight percent allow unrestricted generative AI usage. Among organizations that set policies on generative AI apps as a group, 79 percent have security-focused policies in place, while 21 percent allow unrestricted usage.
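The gap between the two approaches can be sketched in a few lines. In this hypothetical policy model (the names and structure are illustrative, not any vendor's actual schema), per-app management leaves any unlisted new app unrestricted, whereas a group-level policy covers the whole category by default:

```python
# Hypothetical policy store: per-app rules plus a category-wide default.
PER_APP_POLICIES = {
    "chatgpt.example": "block_uploads",
    "copilot.example": "block_pii",
}
GROUP_POLICY = "block_uploads_and_pii"  # applies to all gen AI apps

def effective_policy(app: str, use_group_policy: bool) -> str:
    """Resolve the policy applied to a given gen AI app."""
    if use_group_policy:
        # Category-wide rule: every app, including new ones, is covered.
        return GROUP_POLICY
    # Per-app rule: apps not yet on the list fall through to unrestricted.
    return PER_APP_POLICIES.get(app, "unrestricted")

print(effective_policy("brand-new-ai.example", use_group_policy=False))
# -> unrestricted
print(effective_policy("brand-new-ai.example", use_group_policy=True))
# -> block_uploads_and_pii
```

This is why the report favors group-level policies: new gen AI sites appear faster than per-app allow/block lists can be updated.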

The full report is available from the Menlo site.

Image credit: Skorzewiak/depositphotos.com


© 1998-2024 BetaNews, Inc. All Rights Reserved.