How cybercriminals use ChatGPT for cyberattacks

Artificial intelligence (AI) chatbots like ChatGPT have become a tool for cybercriminals looking to enhance their phishing email attacks. Trained on large natural-language datasets and refined with reinforcement learning, these chatbots produce typo-free, grammatically correct emails that look legitimate to unsuspecting targets. This has raised concerns among cybersecurity leaders, with 72 percent admitting they are worried about AI being used to craft better phishing emails and campaigns.

Chatbots can help cybercriminals scale the production of advanced social engineering attacks, such as CEO fraud and business email compromise (BEC). Beyond that, cybercriminals may use AI-powered chatbots to scrape personal or financial data from social media, create brand-impersonation emails and websites, or even generate code for malware such as ransomware. Writing malware has traditionally been a specialized task requiring skilled cybercriminals; chatbots could lower that barrier for non-specialists, and we can also expect AI-generated output to improve over time.

"DAN" -- The revolutionary ChatGPT exploit

Previously, if you explicitly asked ChatGPT to produce malware or write a phishing email, the chatbot would prepend a security warning to its output. Security researchers have now announced the discovery of a new exploit for ChatGPT, called "DAN" (short for "Do Anything Now"), that allows users to bypass the chatbot's built-in limitations.

Researchers grew frustrated with the model's tendency to deflect sensitive topics with unhelpful replies like "I'm sorry, but as an AI language model, I am not capable of...". With DAN, users essentially tell ChatGPT that it is now operating under a new persona, one not bound by the model's ethical and moral guardrails. In this mode, DAN treats all requests equally and omits the warnings and cautionary advice it would normally add to the start of its messages.

The implications of this new exploit are significant. Users can now ask ChatGPT anything without triggering its refusals or warnings, which makes DAN a powerful new tool for cybercriminals. They no longer need to choose their words carefully to avoid being flagged by filters; instead they can ask directly, "What is the best malware to make?", "How do I obfuscate it?" or "What is an effective phishing email template?".

How can organizations protect themselves from AI-assisted phishing attacks?

Organizations need to be aware of the risks posed by chatbot-enabled cyberattacks and take steps to protect themselves. Integrated cloud email security (ICES) solutions can help organizations detect and block advanced attacks, whether they are written by humans or by AI chatbots. These solutions deploy their own AI and machine-learning models to spot text-based attacks: suspicious formatting and requests, language engineered to create urgency, attention-grabbing subject lines, and more.
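To make the idea concrete, here is a minimal sketch of how such a text-based detection model might work, using a toy scikit-learn pipeline. The training examples, features and wording below are illustrative assumptions for demonstration only, not any vendor's actual implementation or dataset.

```python
# Minimal sketch of a text-based phishing classifier, loosely analogous to
# what an ICES product might run at far larger scale and sophistication.
# The "training data" here is a handful of toy examples, purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "URGENT: your invoice is overdue, wire payment today",
    "Action required: verify your account now or it will be suspended",
    "Hi team, attaching the notes from yesterday's meeting",
    "Lunch on Friday? Let me know what time works",
]
labels = [1, 1, 0, 0]  # 1 = phishing-like, 0 = benign

# TF-IDF over word unigrams/bigrams feeds a simple logistic regression,
# which learns to weight urgency cues like "urgent" and "action required".
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

suspect = "Final notice: confirm your payment details immediately"
print(model.predict_proba([suspect])[0][1])  # probability the email is phishing-like
```

In practice such models are trained on millions of messages and combined with many other signals (sender reputation, link analysis, formatting anomalies), but the principle is the same: the language of the attack itself becomes the detection surface, regardless of whether a human or a chatbot wrote it.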

As AI technology continues to advance, organizations face an increasingly urgent need to address the security risks it creates. This is where detection engineering comes into play: by proactively identifying and addressing potential threats, businesses can stay ahead of cyberattackers and mitigate risks to their systems and data. Because chatbots can help cybercriminals craft more believable phishing emails and create malware, organizations should prioritize effective countermeasures, such as ICES solutions, that can detect these attacks and stop them from doing harm.
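As a simple illustration of what a detection-engineering rule can look like, the sketch below flags a classic BEC pattern: a display name impersonating a known executive combined with an external sending domain and urgency language. This is a hypothetical example, not a production rule; the executive directory, internal domain and urgency cues are assumed placeholders.

```python
# Illustrative detection-engineering rule (hypothetical, not a vendor rule):
# flag messages whose display name impersonates a known executive while the
# address comes from an outside domain -- a common BEC pattern.
from email.utils import parseaddr

KNOWN_EXECUTIVES = {"jane doe", "john smith"}   # assumed org directory
INTERNAL_DOMAIN = "example.com"                 # assumed corporate domain
URGENCY_CUES = ("urgent", "immediately", "wire", "gift card")

def flag_bec(from_header: str, subject: str, body: str) -> bool:
    # Split the From: header into display name and address.
    display_name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower()
    impersonation = (display_name.lower() in KNOWN_EXECUTIVES
                     and domain != INTERNAL_DOMAIN)
    urgent = any(cue in (subject + " " + body).lower() for cue in URGENCY_CUES)
    return impersonation and urgent

print(flag_bec('"Jane Doe" <jane.doe@lookalike-domain.net>',
               "Urgent request", "Please wire the funds immediately."))  # True
```

Real detection rules are far richer, of course, but the approach scales: each newly observed attack pattern becomes a codified, testable check, which is exactly the proactive posture detection engineering calls for.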

Jack Chapman is VP of Threat Intelligence, Egress.
