Cybercriminals lure LLMs to the dark side

A new AI security report from Check Point Software shows how cybercriminals are co-opting generative AI and large language models (LLMs) to erode trust in digital identity.

At the heart of these developments is AI's ability to convincingly impersonate and manipulate digital identities, dissolving the boundary between authentic and fake.

"The swift adoption of AI by cyber criminals is already reshaping the threat landscape," says Lotem Finkelstein, director of Check Point Research. "While some underground services have become more advanced, all signs point toward an imminent shift -- the rise of digital twins. These aren't just lookalikes or soundalikes, but AI-driven replicas capable of mimicking human thought and behavior. It's not a distant future -- it's just around the corner."

The report uncovers four core areas where this erosion of trust is most visible:

  • AI-enhanced impersonation and social engineering, where threat actors use AI to generate realistic, real-time phishing emails, audio impersonations, and deepfake videos. Attackers recently mimicked Italy’s defense minister using AI-generated audio, demonstrating that no voice, face, or written word online is safe from fabrication.
  • LLM data poisoning and disinformation, where malicious actors manipulate AI training data to skew outputs. A case involving Russia’s disinformation network Pravda showed AI chatbots repeating false narratives 33 percent of the time, underscoring the need for robust data integrity in AI systems.
  • AI-created malware and data mining, where cybercriminals harness AI to craft and optimize malware, automate DDoS campaigns, and refine stolen credentials. Services like Gabbers Shop use AI to validate and clean stolen data, enhancing its resale value and targeting efficiency.
  • Weaponization and hijacking of AI models, where attackers use everything from stolen LLM accounts to custom-built Dark LLMs like FraudGPT and WormGPT to bypass safety mechanisms and commercialize AI as a tool for hacking and fraud on the dark web.

The report stresses that defenders must now assume AI is embedded within adversarial campaigns. To counter this, organizations need to adopt AI-aware cyber security frameworks.

"In this AI-driven era, cyber security teams need to match the pace of attackers by integrating AI into their defenses," adds Finkelstein. "This report not only highlights the risks but provides the roadmap for securing AI environments safely and responsibly."

You can download the full report from the Check Point site.

