How AI-enhanced cyberattacks are redefining the modern threat landscape [Q&A]

Though AI is still in its infancy, it would be hard to overstate the impact it has already had on the cybersecurity landscape.

Not only has AI made it dramatically easier and faster to develop a wide range of traditional attacks -- such as phishing, business email compromise and malware -- it has also opened the door to novel strategies and threats. Worse yet, these tools allow threat actors to develop significantly more targeted and sophisticated attacks, regardless of their knowledge level or skill.

We spoke to Eyal Benishti, founder and CEO of IRONSCALES, and Dominique Gagnon, VP of managed security services for Concentrix, to learn more about these AI-enhanced threats and discover how their organizations' recently formed partnership is helping businesses prepare their cyber defenses for the AI age.

BN: At a high level, what's the current landscape of AI-enhanced cyber threats to the enterprise? 

EB: Perhaps the most defining characteristic of today's AI-enhanced threat landscape is its constant evolution. Adversaries are continually finding new and creative ways to leverage AI, creating attacks that are more sophisticated, persistent, and difficult to detect. Some of the most common ways we're seeing adversaries utilize AI include:

  • Crafting highly convincing phishing emails and messages, making it easier to deceive recipients and bypass traditional security filters.
  • Generating deepfakes that can be used to manipulate communications, impersonate executives, or exploit human trust in unprecedented ways.
  • Developing more sophisticated malware that evades traditional detection and can more precisely exploit vulnerabilities.

In addition to making attacks more sophisticated and difficult to detect, these AI-enhanced capabilities have also driven a significant increase in overall attack volume, placing a greater burden on security teams and making the threats even more challenging to mitigate. As you might imagine, the financial and reputational losses stemming from these attacks are also on the rise, highlighting the urgent need for enterprises to adopt equally advanced, AI-driven defenses.

BN: You both share a similar point of view about addressing these new threats, but what drew Concentrix and IRONSCALES to partner with one another to tackle them together?

DG: Both organizations continually seek to modernize their security strategies to stay ahead of the ever-evolving threat landscape. Together, IRONSCALES and Concentrix bring unique capabilities and expertise to deliver new levels of scalability, automation and integration, enabling faster and more successful deployments. Customers can strengthen their defenses, proactively address sophisticated phishing attacks, and create a more resilient security posture. This partnership represents a critical step in Concentrix’s ongoing commitment to leverage advanced technology to protect our customers’ operations and data against modern cyber threats. 
    
BN: Many AI-enhanced capabilities were created in the enterprise before they were adapted for criminal use. Why, then, are enterprises struggling when they have AI-enhanced cyber defenses in place? 

DG: While many enterprises have adopted AI-enhanced cyber defenses, it’s important to remember that many organizations still have not. And even for those that have, there are still several challenges hindering their effectiveness:

  • Data Silos and Legacy Systems: AI relies heavily on comprehensive data to detect compromises or threats effectively. However, many organizations continue to operate with siloed, legacy security technologies that aren’t fully integrated. To unlock AI's full potential, these systems must be consolidated into a unified data lake, enabling seamless analysis and response.
  • AI as an Afterthought: In many cases, AI is implemented as a 'bolted-on' solution rather than designed as a fully integrated, central capability. This approach often requires additional investments in infrastructure and integration, creating friction in deployment and reducing effectiveness.
  • Automation Gaps: AI alone cannot handle the rapidly increasing volume of incidents we’ve seen over the past couple of years. Effective cybersecurity also requires effective automation to triage routine tasks and free up human analysts to focus on critical alerts and high-stakes incidents. To stay ahead of emerging threats, we must also develop and implement more proactive, preventative measures.

Addressing these issues requires a shift in mindset and strategy -- one that views AI as a foundational, integrated component of the cybersecurity ecosystem, supported by automation, a robust data infrastructure, strategic policy measures, and comprehensive awareness, testing, and training initiatives.
 
BN: What evidence is there that phishing simulations and other forms of security awareness training are effective when actioned in tandem with AI-powered email security? 

EB: According to the Verizon Data Breach Investigations Report (2024), 68 percent of breaches involve the human element. This alarming statistic underscores the importance of continuous security awareness training (SAT) and phishing simulation testing (PST). These initiatives have been shown to significantly reduce the likelihood of successful phishing attacks by educating employees on identifying and responding to potential threats. In their most recent research on SAT, Fortinet’s 2024 Security Awareness and Training Global Research Report found that 89 percent of organizations report improvements to their security posture after implementing SAT programs.

With that being said, relying solely on human intervention is not enough -- especially as attackers employ increasingly sophisticated techniques. So, while SAT and PST programs are essential, they are insufficient on their own.

This is where AI-powered email security plays a critical role. AI can detect and block malicious emails by identifying incredibly subtle patterns and anomalies that are likely to elude human detection. Ultimately, the goal is to reduce actual employee exposure to attacks to a bare minimum. However, as no system is 100 percent foolproof, employees will always serve as their organizations' last line of defense, which means having an informed, vigilant, and cybersecurity-savvy workforce is imperative.

Together, these elements create a multi-layered approach to cyber defense, where learning and technology complement and reinforce one another, enhancing an organization’s security posture and overall cyber resilience.

BN: What sets deepfake threats apart from other forms of AI-generated threats like a spear phishing email? What defense options do organizations have? 

EB: Deepfake threats represent a uniquely dangerous form of AI-generated attacks due to their ability to manipulate trust at a profoundly personal level. Unlike spear phishing emails, which typically rely on written communication to deceive, deepfakes use realistic audio and video content to impersonate trusted individuals or entities. This makes them particularly effective for social engineering, as they exploit visual and auditory cues that humans are hardwired to trust.

For example, attackers can use deepfakes to impersonate executives during virtual meetings or create fraudulent voice messages that appear to come from high-ranking officials, persuading employees to transfer funds or share sensitive information. The psychological impact of deepfakes also makes them harder to recognize and defend against compared to text-based phishing attacks.

To defend against deepfake threats, organizations should once again adopt a multi-layered approach:

  • AI-Powered Detection Tools: Use advanced AI and machine learning tools specifically designed to detect deepfake content by analyzing inconsistencies in audio and video.
  • Authentication Protocols: Strengthen verification processes for sensitive transactions or communications, such as multi-factor authentication or implementing unique verification codes for audio or video instructions.
  • Employee Training: Expand awareness programs to include recognition of deepfake techniques and emphasize the importance of verifying unusual requests, even if they seem legitimate.

By combining cutting-edge AI defenses with human vigilance and strong authentication protocols, organizations can mitigate the risks posed by deepfake threats while maintaining resilience against other AI-enhanced attacks.

As these threats continue to evolve, security vendors will be working overtime to develop new, more sophisticated, and reliable tools for detecting AI-generated content -- including synthetic writing, video, static imagery, and voice duplication -- as well as AI-enabled attacks.


