Securing AI copilots is critical -- here's how


The use of AI copilots is already helping businesses save time and gain productivity. One recent study found that employees who became proficient with copilots saved 30 minutes a day, the equivalent of 10 hours a month, while the average employee saved 14 minutes a day, or nearly five hours each month.

AI copilots essentially allow people to interact with business productivity tools for greater efficiency. You can ask these tools questions, synchronize data and perform automated actions more easily than before. In the survey referenced above, 70 percent of users reported greater productivity, while 68 percent said the copilot improved the quality of their work. However, while the business benefits are significant, these copilots can also introduce new security risks that organizations must be aware of -- and have a plan for.

How AI copilots are changing the game for businesses

One of the most prominent of these copilots is Copilot for Microsoft 365, which is embedded across the software suite, including Excel, Outlook and Teams. Microsoft Graph connectors bring external corporate data into the Graph so that it can funnel through Copilot. When a user interacts with Teams and asks the AI a question, it can then use that data to provide an answer, much as ChatGPT would. In your Teams instance, you might ask, “What's the sales forecast for this quarter?” Or “What events do we have planned for RSA coming up next week?” By acting as a connector to all of this internal data, Copilot can synthesize the information and provide a response in a way that only AI can.
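Conceptually, the flow looks something like the sketch below. This is a simplified illustration rather than Microsoft's actual API: the in-memory corpus and helper functions are stand-ins for Microsoft Graph and the underlying model, but the key point carries over -- retrieval is scoped to data the requesting user is already allowed to see.

```python
# Simplified illustration (not Microsoft's actual API) of how a copilot grounds
# a prompt in corporate data the requesting user is already allowed to see.
from dataclasses import dataclass

@dataclass
class Document:
    title: str
    snippet: str
    allowed_users: set   # who may read this document

# Stand-in for content reachable through Microsoft Graph, including external
# data brought in via Graph connectors.
CORPUS = [
    Document("Q3 forecast", "The sales forecast for this quarter is $4.2M.",
             {"analyst@corp.example"}),
    Document("RSA plan", "Two talks and booth demos planned for RSA next week.",
             {"marketing@corp.example"}),
]

def retrieve(user: str, prompt: str) -> list:
    """Permission-trimmed retrieval: only documents this user can access."""
    words = set(prompt.lower().split())
    return [d for d in CORPUS
            if user in d.allowed_users and words & set(d.snippet.lower().split())]

def answer(user: str, prompt: str) -> str:
    """A real copilot hands the retrieved context to an LLM; here we just echo it."""
    docs = retrieve(user, prompt)
    return " ".join(d.snippet for d in docs) or "No accessible data found."

print(answer("analyst@corp.example", "What's the sales forecast for this quarter?"))
```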

However, to really reap the full benefits of Copilot for M365, you need it to be able to do things like retrieve real-time information from third-party services. That requires the use of plugins, which extend the capabilities of copilots.

You can build a plugin that accesses real-time information, like finding the latest news coverage on a product launch. Or it could retrieve relational data, such as a report on the service tickets assigned to a given team member.

You can also perform actions across different applications, like creating a new task in your work tracking system that's connected to your ticketing system. You build and use plugins to expand the copilot's reach, letting it interact with data in real time and connect with other systems.
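As a rough illustration of what such a plugin might do behind the scenes, here is a minimal sketch. The news and work-tracking endpoints are hypothetical placeholders, not real connector definitions.

```python
# Minimal sketch of what a custom plugin handler might do; the news and
# work-tracking endpoints below are hypothetical placeholders, not real
# connector definitions.
import json
import urllib.request
from urllib.parse import quote

NEWS_API = "https://news.example.com/search"       # hypothetical news service
TRACKER_API = "https://tracker.example.com/tasks"  # hypothetical work-tracking system

def latest_coverage(product: str) -> list:
    """Real-time lookup: fetch the latest news coverage for a product launch."""
    with urllib.request.urlopen(f"{NEWS_API}?q={quote(product)}") as resp:
        return json.load(resp)["articles"]

def create_followup_task(ticket_id: str, assignee: str) -> dict:
    """Cross-application action: open a tracker task for a given service ticket."""
    payload = json.dumps({"ticket": ticket_id, "assignee": assignee}).encode()
    req = urllib.request.Request(TRACKER_API, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```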

Security risks of copilots

While all of these productivity and efficiency gains are great, the use of AI copilots can also introduce significant security risks.

One such risk is around permissions. Microsoft found that over half of Copilot identities are super admins and over half of permissions are high risk. Users are creating their own plugins and loading them into workflows without administrator approval. As people build plugins, questions arise, like “Where is the data that interacts with this copilot going?” This new area of shadow software development is an emerging blind spot for security teams, one that can result in data leakage and exfiltration.

Let’s say you work at a bank as a financial analyst, and you're asking the copilot whether to approve someone for a line of credit. The copilot would be able to access corporate data, compare the applicant against customers of similar net worth and give you an answer. In some cases this activity is appropriate, but consider a copilot with that level of access acting autonomously on behalf of the user. Then extrapolate that out to users across different geographies, teams and business units, each interacting with and calling data from various sources, with AI acting on their behalf to answer questions.

Further, these copilots are quite susceptible to prompt injection attacks, where bad actors, or even unwitting insiders, craft prompts that sidestep the copilot's intended usage rules. This could result in users retrieving data about customers they shouldn’t be able to see (a clear-cut GDPR violation), or a bad actor gaining easy access to sensitive data.
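A minimal guardrail sketch, built on my own assumptions rather than any vendor's API, is shown below: screen prompts for obvious injection phrasing and, more importantly, enforce the requester's entitlements before any records are returned. Keyword screening alone is not a complete prompt-injection defense, but entitlement checks at the data layer blunt the GDPR scenario described above.

```python
# Illustrative guardrail sketch (a simplification, not a complete defense):
# screen prompts for obvious injection phrasing, then enforce the requester's
# entitlements before any records are returned.
import re

INJECTION_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"act as (an? )?(admin|administrator)",
    r"reveal .*(password|secret|api key)",
]

def screen_prompt(prompt: str) -> bool:
    """First-pass filter only: reject prompts matching obvious injection phrasing."""
    return not any(re.search(p, prompt, re.IGNORECASE) for p in INJECTION_PATTERNS)

def respond(user_scopes: set, prompt: str, records: list) -> list:
    """Return only the records the requesting user is entitled to see."""
    if not screen_prompt(prompt):
        raise PermissionError("Prompt rejected by injection screen")
    return [r for r in records if r["scope"] in user_scopes]
```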

Supply chain risks are another huge attack vector as adoption of Copilot for M365 takes off. At Build 2024, Microsoft announced thousands of connectors for general use. These connectors are all routed through Power Platform infrastructure, which opens up a new blind spot and a new set of permissions that need to be managed.

Business users can also create their own connectors and store them in a central repository that can be a black box for security leaders. Rightfully so, there are concerns about a copilot accessing sensitive data and returning information the user doesn't need in order to answer the question. Further, a connector could contain malicious software and, when embedded into a workflow or an application, dramatically increase the risk of phishing, malware downloads and more. Many organizations are finding they aren’t prepared to prevent this. That can lead to major compliance issues, especially when data moves across different geographic zones, different databases and different cloud environments.

Another security concern stems from credential sharing. When you build a plugin, the default settings mean the plugin and/or bot you create leverages your identity, so any time someone else uses that plugin or interacts with that copilot, it looks like it's you. You're then sharing credentials across the enterprise, making it easy for attackers to move laterally and vertically until they reach their targets. Many of these plugins grant too many people too much access, and that can lead to large-scale data leakage.

Overcoming copilot security risks

Organizations need a way to allow anyone to build, iterate on and integrate their own processes into AI applications, while mitigating the associated security risks.

To do this, it’s important to understand the business case for why copilots are being used in your enterprise. With that, you need insight into how your end users are interacting with copilots and how each enterprise copilot is being embedded and iterated on. You also need visibility into what sensitive data flows to and from the copilot, what data is being accessed and where vulnerabilities exist.

Gaining this visibility requires a deep dive into each copilot. You’ll need to find which plugins and copilot use cases have access to sensitive data, carry excessive permissions and expose secrets. Determine risk by establishing which plugins are overshared, which have embedded credentials and which have access to sensitive data.
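In practice, that deep dive amounts to an inventory-and-scoring pass over every plugin. The sketch below shows one way to structure it; the field names and thresholds are assumptions for illustration, not a particular product's schema.

```python
# Rough sketch of an inventory-and-scoring pass over discovered plugins; the
# field names and thresholds are assumptions, not a vendor's actual schema.
from dataclasses import dataclass

HIGH_RISK_PERMISSIONS = {"admin", "write_all", "impersonate"}

@dataclass
class Plugin:
    name: str
    shared_with: int                 # users/groups the plugin is shared with
    has_embedded_credentials: bool   # secrets baked into the plugin definition
    touches_sensitive_data: bool     # reaches regulated or confidential data
    permissions: set

def risk_score(p: Plugin) -> int:
    score = 0
    if p.shared_with > 50:           # overshared across the enterprise
        score += 3
    if p.has_embedded_credentials:
        score += 3
    if p.touches_sensitive_data:
        score += 2
    score += len(p.permissions & HIGH_RISK_PERMISSIONS)
    return score

def triage(plugins: list) -> list:
    """Order plugins by risk so the worst offenders surface first."""
    return sorted(plugins, key=risk_score, reverse=True)
```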

Then, add an extra layer of security by implementing guardrails: automated playbooks and mitigation actions that quarantine, remediate or delete any instance where data is at risk. This also helps ensure that business users are customizing and integrating plugins securely. Finally, some AI solutions allow users to opt out of being included in the training set, and organizations can pay to opt out on behalf of all their users; this may make sense for certain use cases.
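An automated playbook can be as simple as mapping each finding from that inventory to a mitigation action. The sketch below assumes hypothetical finding and action labels rather than a specific product's API.

```python
# Minimal automated-playbook sketch; the findings and actions are assumed
# labels, not a specific product's API.
def playbook(plugin_name: str, findings: set) -> str:
    """Map findings from the risk review to a mitigation action."""
    if "embedded_credentials" in findings:
        return f"quarantine {plugin_name} and rotate the exposed secret"
    if "overshared" in findings and "sensitive_data" in findings:
        return f"quarantine {plugin_name} pending owner review"
    if "overshared" in findings:
        return f"restrict sharing on {plugin_name} to its owning team"
    if "sensitive_data" in findings:
        return f"notify the data owner and re-check {plugin_name}'s scopes"
    return f"no action needed for {plugin_name}"

# Example: a connector flagged during the inventory pass described above.
print(playbook("invoice-sync", {"embedded_credentials", "overshared"}))
```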

Toward wiser AI use

As organizations quickly adopt AI copilots for the efficiency and productivity benefits they bring, it’s key that security doesn’t fall by the wayside. These new tools bring new security risks, and application security teams need full visibility into their use to protect the organization from risks like data leakage. The good news is you don’t have to sacrifice security for efficiency (or vice versa). Cataloging your copilots and assessing their risk will help ensure a workplace that’s both safer and more productive.


Ben Kliger is CEO of Zenity.
