Exploring the security risks underneath generative AI services

Artificial intelligence has claimed a huge share of the conversation over the past few years -- in the media, around boardroom tables, and even around dinner tables. While AI and its subset of machine learning (ML) have existed for decades, this recent surge in interest can be attributed to exciting advancements in generative AI, the class of AI that can create new text, images, and even videos. In the workplace, employees are turning to this technology to help them brainstorm ideas, research complex topics, kickstart writing projects, and more.

However, this increased adoption also comes with a slew of security challenges. For instance, what happens if an employee uses a generative AI service that hasn’t been vetted or authorized by their IT department? Or uploads sensitive content, like a product roadmap, into a service like ChatGPT or Microsoft Copilot? These are some of the many questions keeping security leaders up at night and prompting a need for more visibility and control over enterprise AI usage.

One important step leaders can take toward this goal is developing a strategy to properly assess the risks that different generative AIs, and the underlying models powering them, pose to their organizations. Thankfully, technology is becoming available that makes it simpler than ever to evaluate these risks and inform AI policies.

Service vs. Model: What’s the Difference?

When we think of a generative AI application, the “service” encompasses the entire package of capabilities on the interface that users interact with (e.g., its online platform or mobile app). Meanwhile, its underlying models, such as large language models (LLMs), are the complex AI or ML algorithms running "under the hood" to make the service functional. Importantly, there can be multiple models powering a single AI service, especially in those that perform a variety of different tasks.

Why Assess Both?

Evaluating the risk of both the service and its models helps leaders get the full picture of its security pros and cons. For example, since the service includes the user interface, there are “front-end” security considerations, such as the application’s user access and data privacy controls. However, the security of its underlying model is a key indicator of how safe the entire service is, and is therefore a component organizations need to carefully evaluate. This is especially true considering that a service could look secure from “the outside,” but have serious flaws in its computing engine.

Common Model Risks

Generative AI models, like LLMs, rely solely on their training data to generate intelligent outputs. Unfortunately, this means that any flaws or corruption in the training data will negatively impact the reliability and safety of the application. For example, depending on the degree of bias in an LLM’s training data, the outputs can perpetuate certain stereotypes or viewpoints and, in sensitive use cases, even harm users. Similarly, LLMs can generate “toxic” content that is harmful or inappropriate. Toxicity can stem from biases in training data or be a result of the model incorrectly contextualizing queries.

Additionally, some LLMs can be jailbroken, meaning users are able to bypass or override the safety or ethical constraints built into the models. More secure models are regularly tested and fine-tuned to be able to resist these attempts. Finally, they can be used to create malware or accelerate other cyberattacks. For instance, a hacker may leverage an AI tool to create a realistic phishing email quickly, without the telltale spelling or grammar issues that used to indicate phishing attempts in the past.

Using Technology to Promote Responsible AI Usage

With these risks in mind, it’s clear that the use of generative AI services, and thereby, models like LLMs, has security implications for organizations. While some leaders may be tempted to block these applications outright, doing so could hamper efficiency, innovation, and creativity internally. So how can they find a safe middle ground?

First, they can seek out products with features that easily bubble up AI service risks and model attributes, allowing them to make more informed decisions around the internal use of AI. By taking advantage of these types of risk assessment capabilities, they’ll see positive outcomes, such as increased data protection and compliance, while also avoiding negative consequences like data loss, fees or regulatory fines, or contributing to the spread of false or harmful information.  

It isn’t enough to only assess the risk of the service itself, since the AI engine running underneath it could have separate vulnerabilities. When leaders understand this subtle distinction and utilize technology properly to gather these insights, they’ll be able to create policies that are helpful for both employees and the security of the business.

Image Credit: Skypixel / Dreamstime.com

Thyaga Vasudevan is Executive Vice President of Product at Skyhigh Security.

© 1998-2025 BetaNews, Inc. All Rights Reserved. Privacy Policy - Cookie Policy.