Compliance and cybersecurity in the age of AI [Q&A]
Artificial intelligence is dramatically transforming the business landscape. It streamlines operations, surfaces critical insights, and empowers businesses to make data-driven decisions efficiently. Through machine learning, predictive analytics, and automation, AI helps identify trends, forecast sales, and streamline supply chains, leading to increased productivity and improved business outcomes. Unfortunately, it isn't without problems.
We talked to Matt Hillary, Vice President of Security and CISO at Drata, about the issues surrounding AI when it comes to critical security and compliance.
BN: How is AI increasing the ransomware threat and more broadly changing the cybersecurity landscape?
MH: The primary strategies for propagating ransomware continue to rely on social engineering tactics like phishing and on exploiting weaknesses in externally accessible systems, such as Virtual Private Network (VPN) endpoints, endpoints with exposed Remote Desktop Protocol (RDP), and application zero-days, among others. Using AI, attackers can now produce highly sophisticated deceptive messages that lack the typical indicators of phishing, making them far more convincing to unwary users.
Cybercriminals can also use AI to improve other facets of their operations, such as reconnaissance and coding, thereby sharpening their exploitation capabilities. By harnessing AI, threat actors can efficiently analyze extensive datasets to identify weaknesses in an organization's external systems and create tailored exploits, whether by exploiting known vulnerabilities or uncovering new ones.
BN: On the flip side, how is AI helping to improve defensive and preventative solutions?
MH: AI-powered systems can analyze vast amounts of data to detect patterns indicative of cyber threats, including malware, phishing attempts, and unusual network activity. These large language models (LLMs) can identify indicators of compromise and other threats more quickly and accurately than traditional or manual review methods, allowing for swifter response and mitigation.
AI models can also review activities to learn the normal behavior of users and systems within a network, enabling them to detect deviations that may indicate a security incident. This approach is particularly effective in identifying insider threats and sophisticated attacks that evade traditional signature-based detection methods.
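As a rough illustration of this behavioral baselining (a minimal sketch, not any vendor's actual tooling), the Python example below fits scikit-learn's IsolationForest to synthetic "normal" session features and flags a session that deviates from them. The feature names and values are invented for the example:

```python
# Minimal behavioral-baselining sketch: an IsolationForest learns "normal"
# user activity from historical features, then flags deviations.
# Feature names and values are illustrative, not from any real product.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-session features: [login_hour, MB_downloaded, distinct_hosts_touched]
normal_activity = np.column_stack([
    rng.normal(10, 2, 500),   # logins cluster around business hours
    rng.normal(50, 15, 500),  # typical data volume
    rng.poisson(3, 500),      # typical host fan-out
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_activity)

# A 3 a.m. session pulling 900 MB from 40 hosts should stand out.
suspicious = np.array([[3, 900, 40]])
print(model.predict(suspicious))  # -1 means "anomalous", 1 means "normal"
```

In practice the model would be fit on historical per-user telemetry, rescored as new sessions arrive, and anomalies routed to an analyst for triage.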
BN: What are the benefits of AI for automating governance and compliance with evolving regulations and industry standards?
MH: AI tools can be fed log data to monitor systems continuously, detecting anomalies and responding to indicators of a security incident, a misconfiguration, or process activity that could result in non-compliance. By staying abreast of evolving governance regulations in real time, these tools help organizations remain current and compliant at all times.
AI algorithms can also analyze vast amounts of regulatory data, reducing the risk of human error associated with manual efforts. This leads to more accurate assessments of compliance status and reduces the likelihood of regulatory violations.
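As a simplified sketch of what such continuous compliance monitoring might look like under the hood (the control IDs, settings, and resources here are hypothetical, not drawn from any particular framework):

```python
# Illustrative continuous-compliance check: scan configuration events for
# settings that would violate a control. Rule names are invented.
from dataclasses import dataclass

@dataclass
class ConfigEvent:
    resource: str
    setting: str
    value: str

RULES = {
    # control_id: (setting, required_value)
    "ENCRYPT-AT-REST": ("storage_encryption", "enabled"),
    "MFA-REQUIRED": ("mfa", "enforced"),
}

def find_violations(events):
    """Yield (control_id, resource) for every event that breaks a rule."""
    for event in events:
        for control_id, (setting, required) in RULES.items():
            if event.setting == setting and event.value != required:
                yield control_id, event.resource

events = [
    ConfigEvent("s3://reports", "storage_encryption", "disabled"),
    ConfigEvent("admin-console", "mfa", "enforced"),
]
for control_id, resource in find_violations(events):
    print(f"non-compliant: {resource} violates {control_id}")
```

An AI layer extends this kind of rule engine by flagging unusual configuration drift that no hand-written rule anticipated.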
BN: What other practical steps or best practices should leaders adopt today to protect their companies from evolving AI threats?
MH: My suggestions would be:
- Provide comprehensive education to cybersecurity teams on securing both the AI that employees use and the AI being built into, or already operating in, their platforms and systems. Even the most technically adept teams should explore not only the applications but also the underlying technology fueling AI capabilities.
- Deploy phishing-resistant authentication methods, such as FIDO2/WebAuthn, to protect organizations from phishing attacks targeting the authentication tokens used to access environments (a simplified sketch of why this resists phishing follows this list).
- Establish policies, training, and automated mechanisms to equip team members with the knowledge to defend against social engineering attacks.
- Consistently strengthen the organization’s internet-facing perimeters and internal networks to diminish the effectiveness of such attacks.
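On the phishing-resistant authentication point above: WebAuthn-style credentials resist phishing because the browser binds each signed assertion to the page origin, so an assertion captured on a lookalike domain fails the legitimate server's verification. Below is a deliberately simplified sketch of that relying-party check; real deployments use a full library such as python-fido2 and also verify the cryptographic signature:

```python
# Simplified sketch of why WebAuthn-style authentication resists phishing:
# the browser embeds the page origin into the signed clientDataJSON, so a
# credential phished on a lookalike domain fails the relying party's check.
# Real deployments use a full library and also verify the signature itself.
import base64
import json

EXPECTED_ORIGIN = "https://app.example.com"   # hypothetical relying party

def verify_client_data(client_data_b64: str, expected_challenge: str) -> bool:
    data = json.loads(base64.urlsafe_b64decode(client_data_b64))
    return (
        data.get("type") == "webauthn.get"
        and data.get("challenge") == expected_challenge
        and data.get("origin") == EXPECTED_ORIGIN  # phishing sites fail here
    )

# An assertion captured on "https://app.examp1e.com" carries that origin
# and is rejected even if the user was tricked into authenticating.
phished = base64.urlsafe_b64encode(json.dumps({
    "type": "webauthn.get",
    "challenge": "abc123",
    "origin": "https://app.examp1e.com",
}).encode()).decode()

print(verify_client_data(phished, "abc123"))  # False
```

This origin binding is exactly what a one-time code or password lacks: those secrets verify the same way no matter which site harvested them.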
BN: What are the ethical considerations when it comes to AI? What are the practical safety actions leaders should take to ensure ethical AI usage across the organization?
MH: Companies should establish governance structures and processes to oversee AI development, deployment, and usage. This includes appointing individuals or committees responsible for monitoring ethical compliance and ensuring alignment with organizational values. These governance structures should be extensively documented and understood organization-wide.
At the same time, promote transparency by documenting AI algorithms, data sources, and decision-making processes. Ensure that stakeholders understand how AI systems make decisions and the potential impacts on individuals and society.
At Drata, we have developed responsible AI principles across our systems and processes, designed to encourage robust, trusted, ethical governance while maintaining a strong security posture.
- Privacy by design: using anonymized datasets to safeguard privacy, with strict access control and encryption protocols, alongside synthetic data generation to simulate compliance scenarios (a minimal pseudonymization sketch follows this list).
- Fairness and inclusivity: removing inherent biases through detailed curation, with continuous monitoring of models to guard against unfair outcomes, plus intuitive interfaces that work for all users.
- Safety and reliability: rigorous testing, combined with 360-degree human oversight, provides full visibility, ensuring users can be confident that AI solutions will operate as expected.
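To make the privacy-by-design principle concrete, here is a minimal sketch, assuming keyed hashing (HMAC) is used to pseudonymize PII fields before records reach an analytics or training pipeline. The field names and key handling are illustrative only, not Drata's actual implementation:

```python
# Illustrative privacy-by-design step: pseudonymize PII fields with a keyed
# hash (HMAC) before records reach an analytics or training pipeline.
# Field names and the key source are hypothetical.
import hmac
import hashlib

SECRET_KEY = b"rotate-me-and-store-in-a-kms"  # placeholder; fetch from a real KMS
PII_FIELDS = {"email", "employee_name"}

def pseudonymize(record: dict) -> dict:
    """Replace PII values with stable, non-reversible tokens."""
    return {
        key: hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]
        if key in PII_FIELDS else value
        for key, value in record.items()
    }

print(pseudonymize({"email": "jane@example.com", "control": "MFA-REQUIRED", "status": "pass"}))
```

Because the same input always maps to the same token, analytics and model training still work across records, while the raw identifiers never leave the boundary where the key lives.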
BN: What does the future hold for AI threats?
MH: With the increasing accessibility and potency of AI, it's inevitable that malicious actors will exploit it to orchestrate highly targeted, automated, and elusive cyberattacks spanning multiple domains. These attacks will evolve in real time, evading traditional detection methods.
At the same time, the rise of AI-generated deepfakes and misinformation threatens individuals, organizations, and the democratic process. Fake visuals, audio, and text are making it nearly impossible to tell the difference between fact and fiction.
BN: What is the future for advanced AI-driven security solutions, both in bolstering cyber-defense capabilities and in managing third-party vendor risks?
MH: AI will bolster cybersecurity resilience by employing proactive threat intelligence, predictive analytics, and adaptive security controls. Using AI to foresee and adjust to emerging threats will enable organizations to maintain a proactive stance against cyber criminals, mitigating the impact of attacks. Continuous research and collaborative efforts are vital to ensuring that AI continues to serve as a positive force in combating cyber threats.
Third-party risk is a critical element of a strong governance, risk, and compliance (GRC) program, especially when addressing AI-powered vulnerabilities. Security teams need a comprehensive tool for identifying, assessing, and continuously monitoring vendor risks and integrating them with internal risk profiles. This holistic approach provides a unified, clear view of potential exposures across the entire organization, so third-party AI risks can be managed effectively and efficiently.
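As a toy illustration of folding third-party signals into that kind of unified risk view (the vendors, signal names, and weights are all invented for the example):

```python
# Toy illustration of aggregating third-party signals into a unified risk
# score so vendors can be prioritized for review. All values are invented.
VENDOR_SIGNALS = {
    "llm-api-vendor": {"data_sensitivity": 0.9, "breach_history": 0.2, "ai_model_access": 1.0},
    "payroll-saas":   {"data_sensitivity": 0.8, "breach_history": 0.1, "ai_model_access": 0.0},
}
WEIGHTS = {"data_sensitivity": 0.5, "breach_history": 0.2, "ai_model_access": 0.3}

def risk_score(signals: dict) -> float:
    """Weighted aggregate in [0, 1]; higher means review sooner."""
    return sum(WEIGHTS[name] * value for name, value in signals.items())

for vendor, signals in sorted(VENDOR_SIGNALS.items(), key=lambda kv: -risk_score(kv[1])):
    print(f"{vendor}: {risk_score(signals):.2f}")
```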
Image Credit: Wrightstudio / Dreamstime.com