Balancing AI with insider risk management


AI has officially taken off, yet organizations are squarely divided on its use in the workplace. Organizations that encourage the use of AI and Large Language Model (LLM) tools for productivity are willing to accept the risk, but often lack the policies, education, and controls required to mitigate potential security threats, including those posed by insiders.

On the other hand, companies that take a hard line against the tools by banning any installation or use of AI and LLMs may make their employees less productive. Fortunately, there is a middle ground that balances productivity with security and, importantly, with insider risk management.

Workforce education is key

It pays for organizations to be proactive about their cybersecurity posture, as insider risks are getting increasingly costly. In fact, the 2023 Ponemon Cost of Insider Risks Global Report found that the total average annual cost of an insider risk incident rose from $15.4 million in 2022 to $16.2 million in 2023. Interestingly, the report found that 75 percent of insider incidents are non-malicious, arising either from negligence or from being outsmarted by an outside adversary. This highlights the need for workforce education.

While powerful, AI tools can pose a serious security risk when used in the wrong way. Many employees simply don't realize this. For instance, they're not aware of the fine print stating that once they input data into a tool, that data is no longer theirs. The same applies to the outputs these tools generate. There are parallels with work done on corporate devices -- just because you build or develop something at work doesn't mean you own it. With LLMs, once you upload data into the model, it can be shared with anyone using the tool, and most tools use uploaded content to train the model.

Providing continuous education on how AI tools work and how to use them safely will help ensure employees understand what they're doing when they use these tools. The most successful AI education programs focus on a human's ability to understand context, which is something AI tools lack. Generative AI should be viewed as a partnership between people and the technology, not as a complete solution.

Implement acceptable use policies

While important, training and education alone cannot prevent errors. Human error will always exist, and organizations have a responsibility to protect employees from their own mistakes. The key is to have the right tools and systems in place to detect when a person is doing something that might harm the company, and to prevent it from happening.

This demonstrates the need for organizations to enforce, in real time, what is and is not acceptable use. Everyone in the organization should see this as a mechanism for setting employees up for success when it comes to protecting IP and preventing data loss. In other words, offer support, not suspicion.

Organizations will benefit from leveraging insider risk tools with built-in AI that automate teachable moments for employees anytime they're about to cross the line. For example, the insider risk team could set up a rule that triggers an alert when someone in the organization accesses ChatGPT and then copies and pastes into it. The rule's action can be as simple as an email automatically sent to the user explaining the acceptable use policy.
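
To make that concrete, here is a minimal Python sketch of what such a rule could look like, assuming a hypothetical event feed that reports the user, the destination application or URL, and the action taken. The event schema and the send_email helper are illustrative, not the API of any particular insider risk product.

# Minimal sketch of an automated "teachable moment" rule.
# The ActivityEvent schema and send_email helper are hypothetical.

from dataclasses import dataclass

ACCEPTABLE_USE_REMINDER = (
    "Reminder: pasting company data into public AI tools is covered by our "
    "acceptable use policy. Please review it before continuing."
)

@dataclass
class ActivityEvent:
    user_email: str
    destination: str   # e.g. "chat.openai.com"
    action: str        # e.g. "clipboard_paste", "file_upload"

def send_email(recipient: str, body: str) -> None:
    # Placeholder: integrate with your mail gateway or ticketing system.
    print(f"To: {recipient}\n{body}\n")

def evaluate(event: ActivityEvent) -> None:
    # Send a policy reminder when a paste targets a generative AI site.
    if "openai.com" in event.destination and event.action == "clipboard_paste":
        send_email(event.user_email, ACCEPTABLE_USE_REMINDER)

# Example: a single simulated event triggers the reminder.
evaluate(ActivityEvent("alex@example.com", "chat.openai.com", "clipboard_paste"))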

Given that AI tools are constantly changing, with new features added regularly, it's important that organizations review their policies frequently to ensure they remain fit for purpose in light of any new features or updates.

In addition to implementing a corporate standard tool, organizations should discourage employees from uploading proprietary code, instead leveraging AI for testing and mock scenarios. This need not be arduous: acceptable use policies around AI should simply adopt the same standard as the company's existing data privacy and security policies.

Monitor tool usage as a best practice -- but uphold privacy

Once the education and acceptable use policies are in place, the next step is to exercise security best practices. This includes monitoring tool usage to ensure the tools in use are approved at a corporate level. While most monitoring tools will struggle to determine exactly what is or is not being uploaded, organizations can track data movement such as copy and paste activity and file system movements. Employees are often asking questions and copying and pasting data into the tools -- not necessarily uploading files. If organizations don't continuously monitor and track, they will miss the data that leaves via copy and paste.
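
As a rough illustration, the Python sketch below shows how copy-and-paste and file-movement events aimed at unapproved destinations could be surfaced for review. The event format and the approved-destination list are invented for this example; real endpoint telemetry and corporate allow-lists will differ.

# Illustrative filter for clipboard and file-movement activity toward
# destinations that are not on a (hypothetical) corporate-approved list.

APPROVED_AI_DESTINATIONS = {"copilot.internal.example.com"}  # hypothetical allow-list

events = [
    {"user": "alex", "action": "clipboard_paste", "destination": "chat.openai.com"},
    {"user": "sam",  "action": "file_move",       "destination": "copilot.internal.example.com"},
]

def unapproved_activity(events):
    # Yield copy/paste or file-movement events aimed at unapproved destinations.
    for e in events:
        if e["action"] in {"clipboard_paste", "file_move"} and \
           e["destination"] not in APPROVED_AI_DESTINATIONS:
            yield e

for e in unapproved_activity(events):
    print(f"Review: {e['user']} -> {e['destination']} via {e['action']}")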

Risk-adaptive data protection is an intelligent way of determining an organization's overall behavioral risk profile. Instead of focusing on a user's ability to potentially exfiltrate data, tracking their history of file movements and data access can more accurately determine when behaviors become risky. Importantly, upholding employee privacy is key. Content inspection raises privacy concerns that can erode employee satisfaction and breed disgruntlement, which in turn increases the overall risk of data loss.
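
For illustration only, the following Python sketch scores a user's recent behavior from event metadata rather than inspecting content, using invented per-event weights and an invented alert threshold. It is a toy model of the idea, not how any specific risk-adaptive product calculates risk.

# Toy behavioral risk score built from event counts, not file contents.
# Weights and threshold are assumptions for illustration.

from collections import Counter

RISK_WEIGHTS = {
    "file_download": 1,
    "usb_copy": 3,
    "clipboard_paste_external": 2,
}
ALERT_THRESHOLD = 10

def risk_score(event_types):
    # Sum weighted counts of behavioral events for one user over a window.
    counts = Counter(event_types)
    return sum(RISK_WEIGHTS.get(t, 0) * n for t, n in counts.items())

recent = ["file_download"] * 4 + ["usb_copy"] * 2 + ["clipboard_paste_external"]
score = risk_score(recent)
print(f"score={score}, escalate={score >= ALERT_THRESHOLD}")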

There is a lot of excitement around ChatGPT and large language models and how they can boost productivity for both employees and companies. At the end of the day, AI tools like ChatGPT are not inherently bad -- they can improve productivity in many ways -- but companies need to be thoughtful about creating acceptable use policies and best practices that mitigate insider risk.

Image credit: Andreus/depositphotos.com

Lynsey Wolf is Insider Threat Analyst, DTEX Systems.
