Why you should worry more about SaaS than generative AI [Q&A]


There's a lot of talk at the moment about how the use of AI opens up businesses to additional risk. But is AI itself the issue or is it the way in which it's integrated with other applications?

We spoke to Ran Senderovitz, chief operating officer of Wing Security, about how enterprises need to re-prioritize their security to address the real attack surface: the SaaS apps that leverage AI.

BN: What is the relationship between generative artificial intelligence (AI) and Software-as-a-Service (SaaS) applications, and how do they work together?

RS: AI is fueling new levels of efficiency for SaaS vendors and their users, but it also increases the risk of company know-how and intellectual property (IP) leakage, far surpassing the risks associated with simple data loss. In 2023, the rapid adoption of AI tools by teams and companies seeking competitive or creative advantages marked a significant trend. This coincided with the expansion of SaaS landscapes, as employees integrated new applications to improve efficiency. Notably, 99.7 percent of organizations in Wing's recent research utilize SaaS applications that incorporate AI to enhance their services. More than 13 percent of all applications at a typical organization are already powered by AI.

This concurrent rise in AI and SaaS usage is expected, given that most AI tools, particularly generative AI applications, are SaaS-based. However, there's a concerning aspect to this trend. Many organizations overlook the rights and permissions these applications require from users. Put simply, many of these applications are accessing your IP, and it's very possible that someone in your organization agreed to their terms -- allowing them to learn from your data without clear restrictions on that usage. This is alarming. Therefore, it's crucial to maintain vigilant oversight in such an evolving digital landscape and monitor these applications to protect against proprietary information leaks.

BN: What are the top misconceptions about generative AI and SaaS when it comes to security?

RS: The first is that risky AI applications are easy to identify because they're famous. Many organizations think they can spot risky AI applications simply because the big names are well known. However, the truth is different. A significant number of SaaS service providers are already utilizing AI or have plans to do so. Our research shows that currently, 8 percent of all SaaS applications in organizations incorporate AI. This situation exacerbates the risks associated with Shadow-IT, now extended to Shadow-AI. It's crucial for organizations to uncover and maintain complete visibility of their AI-based SaaS suppliers to mitigate these risks.

The second is that there is nothing we can do to stop SaaS applications from learning from our data. Many CISOs feel that if employees or business units need a SaaS service, the apps are too powerful to resist and surrendering knowledge and data is just part of doing business. This is a risky mindset; it's not something you'd accept from a human service supplier or contractor. Fortunately, many applications now offer tiers that control data learning: 30 percent of top generative AI tools won't learn from your data, and 55 percent offer an opt-out option for training participation. Solutions include asking the service provider to exclude your data from its learning processes, training your workforce to share less with the apps, and sharing the risk with the business unit so it can make informed risk-reward decisions.

And the third is that the CISO is solely responsible for preventing AI-based SaaS IP leaks. SaaS deployment is a decentralized process in which any employee can freely experiment with different services, which makes a centralized control approach ineffective. Restricting access through Identity and Access Management (IAM/OAuth) systems often leads employees to use alternative methods, such as personal accounts. This puts them at risk of choosing unreliable suppliers (for example, there are nearly 400 services with 'ChatGPT' in their name that are not affiliated with OpenAI) or inadvertently granting inappropriate access to information. However, there are emerging solutions in the market designed to facilitate collaboration between security teams, legal departments, business units and employees, enabling organic and effortless risk reduction.
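
To make the impersonation risk concrete, here is a minimal sketch of the kind of check an automated tool might run over a list of OAuth-granted apps, flagging ones that borrow a well-known AI brand name but are published from an unexpected domain. The inventory format, field names and allow-list are assumptions for illustration, not a description of Wing Security's product or any identity provider's API.

```python
# Hypothetical sketch: flag OAuth-granted apps that look like ChatGPT
# but are not published by OpenAI ("doppelganger" apps).
# The inventory format and field names are illustrative assumptions.

KNOWN_BRANDS = {
    # brand keyword -> publisher domains treated as legitimate (assumed)
    "chatgpt": {"openai.com"},
    "openai": {"openai.com"},
}

def flag_doppelgangers(oauth_grants):
    """Return grants whose app name borrows a known AI brand
    but whose publisher domain is not on that brand's allow-list."""
    suspicious = []
    for grant in oauth_grants:
        name = grant["app_name"].lower()
        domain = grant["publisher_domain"].lower()
        for brand, legit_domains in KNOWN_BRANDS.items():
            if brand in name and domain not in legit_domains:
                suspicious.append(grant)
                break
    return suspicious

if __name__ == "__main__":
    # Example inventory, e.g. exported from an identity provider's OAuth grant list.
    grants = [
        {"app_name": "ChatGPT", "publisher_domain": "openai.com", "user": "alice"},
        {"app_name": "ChatGPT for Sheets Pro", "publisher_domain": "example-addon.io", "user": "bob"},
    ]
    for g in flag_doppelgangers(grants):
        print(f"Review: '{g['app_name']}' granted by {g['user']} ({g['publisher_domain']})")
```

A real tool would of course go beyond name matching, but the point stands: this is a mechanical check that scales across thousands of grants, which no security team can do by hand.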

BN: What are the challenges organizations face when seeking to use generative AI securely?

RS: With hundreds and sometimes thousands of SaaS applications in use at a typical organization, there is no practical way to manually identify which SaaS applications use AI, constantly track their evolving learning capabilities, review their contracts for consent to learn from your data, and coordinate efficiently and cheaply with legal and business units to reduce risk. It's just too much for humans. That is where SaaS security technology comes into play.

In addition, generative AI is part of SaaS. Therefore, if a company's SaaS landscape is not secure from the get-go, it is taking on greater risk with every additional third-party SaaS application it introduces. ChatGPT is a great example of this. In June 2023, it was reported that information-stealing malware had harvested ChatGPT credentials from more than 100,000 compromised devices. These types of information-stealing attacks are not going anywhere, and when they happen, companies need to be alert and take action. Employees and security teams need to evaluate what types of sensitive company information they are putting into these AI applications and whether it's truly worth the risk. This will be the most pressing challenge as companies adopt generative AI into their practices.

BN: What are the riskiest use cases of generative AI within SaaS applications that Wing Security has observed recently?

RS: As I mentioned above, the two riskiest paths are as follows:

  • Data and IP mismanagement: This is where sensitive data is exposed through oversight, particularly when users unknowingly contribute information to training models. For instance, Forrester predicts that in 2024 an application utilizing ChatGPT may face fines for mishandling personally identifiable information (PII). Sharing data with a learning AI application is akin to sharing it with a human contractor, which underlines the importance of clearly defining what the contractor, or the AI, is permitted to do with that knowledge once it leaves your organization.
  • App impersonation: In the eagerness to try out new AI-based tools, employees may overlook thorough third-party risk management (TPRM) and supplier quality control. This can lead to connections with fraudulent, malicious apps and web extensions, known as 'doppelgänger apps,' which are crafted to deceive users.

BN: How can organizations/security leaders reprioritize their security to ensure the safe use of AI within SaaS applications?

RS: The good news is that with the right technology, adding automatic control for AI-based SaaS leaks does not need to trigger any significant re-prioritization of resources. Utilizing SaaS Security Posture Management (SSPM) solutions with the right automation features is crucial. Remarkably, it can take less than a handful of hours per month for a single employee to set your organization on the right track.

These solutions empower security teams to effectively support employees and business units in managing AI-based SaaS applications. They monitor AI application usage, constantly evaluating who is using which applications and how. They trigger decisions only when business terms or application capabilities change, supply the necessary data, and delegate the decision-making to the appropriate organizational function. They then automatically enforce the CISO's policy decisions across the organization.
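
As a rough illustration of that workflow, the sketch below shows the kind of policy check such a tool might run against a SaaS inventory: allow non-AI apps, escalate apps that train on customer data with no opt-out, and ask for a fresh review when terms change. The field names, policy rules and actions are assumptions chosen for illustration, not the behavior of any specific SSPM product.

```python
# Hypothetical sketch of an automated AI-SaaS policy check.
# Field names and the policy itself are illustrative assumptions.

def evaluate_app(app):
    """Return a recommended action for one SaaS application record."""
    if not app["uses_ai"]:
        return "allow"
    if app["trains_on_customer_data"] and not app["opt_out_enabled"]:
        # Data could end up in the vendor's models: escalate to the business owner.
        return "escalate_to_business_owner"
    if app["terms_changed_since_review"]:
        # Terms changed: the earlier risk decision needs to be revisited.
        return "request_re_review"
    return "allow"

inventory = [
    {"name": "NotesAI", "uses_ai": True, "trains_on_customer_data": True,
     "opt_out_enabled": False, "terms_changed_since_review": False},
    {"name": "CRM Suite", "uses_ai": False, "trains_on_customer_data": False,
     "opt_out_enabled": False, "terms_changed_since_review": False},
]

for app in inventory:
    print(app["name"], "->", evaluate_app(app))
```

The value is less in any single rule than in running checks like this continuously, so that a human decision is requested only when something actually changes.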

This approach simplifies a complex problem with an easy-to-deploy solution. By adopting the right tools, organizations can involve their entire team in enhancing their business security posture.

Photo credit: Alexander Supertramp / Shutterstock
