The key to an effective generative AI strategy: Human oversight


Generative artificial intelligence (AI) systems have witnessed significant advancements in recent years, offering remarkable capabilities in a variety of domains.

Generative AI is a powerful tool that can be used for both good and ill. Threat actors have been employing the latest technology to harm businesses for decades, and as AI becomes more accessible and widely accepted, organizations must continue to find ways to turn that same technology to their advantage and ultimately outsmart these digital thieves.

Along with the benefits generative AI brings in access to shared knowledge and in efficiency, these systems also pose significant threats to the cyber strategies of many businesses. The ability of generative AI to create realistic and convincing content, coupled with its capacity to learn from vast amounts of data, introduces new challenges and vulnerabilities that an array of malicious actors can exploit.

Let’s break down a few of the key ways generative AI is enabling bad actors to infiltrate organizations, along with a few methods businesses can use to implement failsafes that protect their data, employees, and customers.

Threat automation at scale

Generative AI is empowering malicious parties through automation at scale. Campaigns that would normally take many hours to create now require only a few written prompts. These scams include malicious phishing emails, fake news articles, and other communications that can be distributed in bulk, impersonate a target's voice and tone, and be aimed at specific groups.

Hyper-realistic imitation

Threat actors can also use generative AI to produce hyper-realistic fakes. Beyond written text, it can mimic biometrics through photo and video simulation and voice duplication. The source material can be pulled from a compromised cloud database or from content uploaded to the internet. You’ve likely seen examples of this technology in memes and videos of public figures and celebrities across social media platforms.

Synthesizing code

Generative AI can even create and rewrite code. Malware or virus code that was carefully curated, engineered, and released into the wild can now be duplicated and varied at scale, or countered with additional code that keeps it from functioning properly. This is also an opportunity for makers of virus-scanning technologies, pushing them to integrate AI and machine learning into their products in an effort to detect and block these AI-generated threats.
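
To make that concrete, here is a minimal sketch of the kind of machine-learning scoring an AI-assisted scanner might layer on top of traditional signature checks. The features, training values, and threshold are illustrative assumptions, not any vendor's actual model.

```python
# Illustrative sketch: scoring files with a simple ML classifier alongside
# signature checks. Features and training data are toy values, not a real model.
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-file features: [entropy, import_count, packed (0/1)]
training_features = [
    [7.9, 3, 1],   # high entropy, few imports, packed: typical of malware
    [7.5, 5, 1],
    [4.2, 40, 0],  # ordinary application profile
    [5.0, 55, 0],
]
training_labels = [1, 1, 0, 0]  # 1 = malicious, 0 = benign

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(training_features, training_labels)

def score_file(features):
    """Return the model's estimated probability that a file is malicious."""
    return model.predict_proba([features])[0][1]

suspect = [7.8, 4, 1]
if score_file(suspect) > 0.8:  # threshold chosen purely for illustration
    print("Quarantine and flag for analyst review")
```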

Keys to protect your business

So now you may be asking: "Where do organizations go from here? How can businesses protect themselves from these types of malicious actors?"

Our defense against these digital scammers will not come from declaring a moratorium on the use of AI, but from doubling down on its versatility, its usefulness, and its ability to counter these threats more efficiently than a human engineer monitoring systems 24/7.

This means the cybersecurity engineer is the person responsible for enforcing the safety and ethical standards that keep the guardrails around AI in place. Even though AI can detect small incidents and anomalies that an engineer might otherwise miss, the human is still the failsafe. They are there to ensure the safety standards and best practices established in identity and access management (IAM) are maintained, and to conduct rigorous testing and auditing of AI. Human oversight is absolutely essential to understanding these complex AI systems.
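
As a simple illustration of that division of labor, the sketch below flags statistical anomalies in login activity automatically but routes every flag to a human analyst rather than acting on its own. The data and threshold are assumptions chosen for the example.

```python
# Minimal human-in-the-loop anomaly triage: the model flags, a person decides.
from statistics import mean, stdev

def flag_anomalies(login_counts, threshold=2.0):
    """Flag hourly login counts more than `threshold` standard deviations
    from the mean. Real systems use far richer signals than this."""
    mu, sigma = mean(login_counts), stdev(login_counts)
    return [
        (hour, count)
        for hour, count in enumerate(login_counts)
        if sigma and abs(count - mu) / sigma > threshold
    ]

hourly_logins = [12, 9, 11, 10, 14, 13, 250, 11]  # one obvious spike

# The AI surfaces the anomaly; the engineer remains the failsafe.
for hour, count in flag_anomalies(hourly_logins):
    print(f"Hour {hour}: {count} logins -> queued for analyst review")
```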

As organizations think about other ways to protect themselves against harmful AI and to train their workforce on AI best practices, they should focus on what data should and should not be shared with these systems. For example, something the general public may not realize is that once information has been provided to a public AI service, the provider may retain that data and the user may no longer have authority over how it is used going forward. Training employees on what should and should not be shared with these smart systems can be a key factor in protecting the integrity of the business.
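
One practical safeguard along these lines is to screen prompts before they leave the organization. The following is a minimal sketch of that idea, with illustrative patterns; a real deployment would lean on a dedicated data-loss-prevention tool.

```python
# Illustrative prompt filter: redact obvious sensitive patterns before text
# is sent to an external generative AI service. Patterns are examples only.
import re

REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> str:
    """Replace sensitive matches with labeled placeholders."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt

print(redact("Contact jane.doe@example.com, SSN 123-45-6789, key sk-AbCdEfGhIjKlMnOp"))
```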

Additionally, multi-factor authentication (MFA) remains one of the most effective tools for combating these malicious initiatives. In today’s landscape, MFA should encompass biometrics, password authentication, and known facts about the user. Together these factors add a needed extra layer of security for users and organizations, making it far more difficult for attackers to gain access to accounts even if they have stolen one piece of the puzzle.
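
As a rough illustration of why layered factors work, the sketch below grants access only when at least two independent factors succeed. The factor checks themselves are stubs invented for the example.

```python
# Minimal multi-factor check: access requires at least two independent
# factors, so a single stolen credential is not enough on its own.
def verify_password(user: str, password: str) -> bool:
    # Stub: a real system compares a salted hash, never plaintext.
    return password == "correct horse battery staple"

def verify_totp(user: str, code: str) -> bool:
    # Stub: a real system validates a time-based one-time code (RFC 6238).
    return code == "492039"

def verify_biometric(user: str, sample: bytes) -> bool:
    # Stub: a real system matches against an enrolled biometric template.
    return sample == b"fingerprint-template"

def authenticate(user, password=None, totp=None, biometric=None) -> bool:
    factors_passed = sum([
        password is not None and verify_password(user, password),
        totp is not None and verify_totp(user, totp),
        biometric is not None and verify_biometric(user, biometric),
    ])
    return factors_passed >= 2  # a stolen password alone fails this check

print(authenticate("jquick", password="correct horse battery staple"))  # False
print(authenticate("jquick", password="correct horse battery staple",
                   totp="492039"))                                      # True
```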

In today’s technological landscape, every account should be treated as a privileged account. Least privilege access means users are given access only to the resources they need to do their jobs. This reduces the risk of unauthorized access and ultimately protects businesses from bad actors trying to reach the most protected data.
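
In code, least privilege typically reduces to an explicit allow-list: access is denied unless a role has been specifically granted it. The roles and permissions below are hypothetical.

```python
# Least-privilege check: deny by default, grant only what each role needs.
ROLE_PERMISSIONS = {
    "support_agent": {"tickets:read", "tickets:write"},
    "auditor": {"tickets:read", "logs:read"},
    # No role here is granted "customer_pii:read" by default.
}

def is_allowed(role: str, permission: str) -> bool:
    """Every account is treated as privileged: default deny."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("support_agent", "tickets:write"))      # True
print(is_allowed("support_agent", "customer_pii:read"))  # False: not needed for the job
```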

As we move forward into a world more immersed in AI, these technologies are going to continue to be transformative for society, just as the internet was after its creation, ultimately impacting the economy, jobs, education, and everyday tasks. It’s important to remember that these technologies will continue to be used by both good and bad parties, but it’s up to humans to establish and enforce the guardrails around how these generative AI systems will function and impact human lives.

Dr. James Quick is Identity Advisor & Director of Legal Technology, Simeio.
