How cloud security teams should think about AI


According to estimates from Goldman Sachs, generative AI (GenAI) will constitute 10-15 percent of cloud spending by 2030, or a forecasted $200-300 billion (USD). The public cloud serves as the perfect vessel for delivering AI-enabled applications quickly, cost-effectively, and at scale. For organizations looking to profit from AI’s potential, the path runs through the cloud.
For cloud security teams on the ground, however, the impact of AI can seem complicated. Understanding the challenges it presents, and the key capabilities it enables, can help them work smarter and more effectively. This article explores the three ways cloud security teams should think about AI to enhance protections, improve efficiency, and address resource constraints.
1. Apply cloud security best practices to AI services
A decade ago, cloud computing fundamentally transformed how businesses and industries operate, but it also introduced significant security challenges. Cloud provider services often defaulted to insecure settings in an effort to prioritize the ease and speed of cloud development. It wasn’t an oversight on their part, but a conscious decision to make their platforms easier to use and eliminate the effort of configuring services.
Organizations encountered security risks as a result. Many exposed storage buckets to the public Internet, even though very few intended to make their data accessible to the entire world. Misconfigurations surfaced regularly across numerous settings, expanding attack surfaces and raising the potential for severe security incidents.
Fast-forward to today, and we see the same problem returning with AI: vendors are introducing services that favor ease of development and deployment over security.
Recent research offers evidence. The 2024 State of AI Security Report analyzed AI security risks in cloud services and found numerous examples of misconfigurations affecting most organizations. Among them, nearly every organization in the study (at least 98 percent) has yet to enable encryption with self-managed keys, increasing the likelihood that attackers can exploit exposed data. This finding applied to the three largest cloud providers and their AI services.
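To make the risk concrete, the sketch below shows one way a team might check for this kind of gap, using the AWS SDK for Python (boto3) to flag Amazon SageMaker notebook instances that lack a customer-managed KMS key. The choice of provider, service, and check is illustrative only and is not drawn from the report.

```python
# Illustrative sketch: flag SageMaker notebook instances that rely on
# default encryption rather than a customer-managed KMS key.
# Assumes AWS credentials are configured; adapt per provider and service.
import boto3

sagemaker = boto3.client("sagemaker")

paginator = sagemaker.get_paginator("list_notebook_instances")
for page in paginator.paginate():
    for instance in page["NotebookInstances"]:
        name = instance["NotebookInstanceName"]
        detail = sagemaker.describe_notebook_instance(NotebookInstanceName=name)
        # KmsKeyId is absent (or empty) when no customer-managed key is attached.
        if not detail.get("KmsKeyId"):
            print(f"[review] {name}: no customer-managed KMS key configured")
```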
To address these risks, cloud security teams must:
- Configure settings with security in mind. Teams must address the default settings that lead to security misconfigurations. This ensures their organizations don’t encounter the same problems the cloud industry first experienced a decade ago.
- Gain and sustain full visibility. Defenders need a full inventory of the AI models, packages, data, and risks present in their cloud estate. For example, they need to understand the various AI models deployed in their environment, their statuses, their configurations, what data they can access, and more. They also need visibility that identifies shadow AI, or the unknown and unauthorized use of AI technologies. Only complete visibility creates the conditions for effective cloud security and cloud cost reduction (a minimal inventory sketch follows this list).
- Adapt to early-stage challenges. Practitioners must discover the boundaries of safe and secure innovation so they can effectively protect and enable it. Teams should limit AI-based technologies and services to what makes sense, and only after fully accounting for the security implications involved. AI lacks the standardization of cloud computing, leaving room for more security mistakes. Even foundational development considerations for AI remain unsettled, such as how to version models, store data, build pipelines, and deploy projects.
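As a starting point for the visibility point above, here is a minimal inventory sketch, again assuming AWS SageMaker and boto3. A real inventory would span every provider, account, and region, and would also cover self-hosted models and AI packages.

```python
# Minimal inventory sketch: enumerate SageMaker models and endpoints so
# unknown or unauthorized ("shadow AI") deployments can be spotted.
import boto3

sagemaker = boto3.client("sagemaker")

print("Models:")
for page in sagemaker.get_paginator("list_models").paginate():
    for model in page["Models"]:
        detail = sagemaker.describe_model(ModelName=model["ModelName"])
        container = detail.get("PrimaryContainer", {})
        print(f"  {model['ModelName']}: image={container.get('Image')}, "
              f"data={container.get('ModelDataUrl')}, role={detail.get('ExecutionRoleArn')}")

print("Endpoints:")
for page in sagemaker.get_paginator("list_endpoints").paginate():
    for endpoint in page["Endpoints"]:
        print(f"  {endpoint['EndpointName']}: status={endpoint['EndpointStatus']}")
```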
Cloud defenders must also prepare for future shifts in cloud and AI security. This includes multi-cloud adoption, which will continue to accelerate as some cloud providers develop more advanced AI capabilities than their peers. Teams should prepare for the possibility of changing cloud providers by adopting tools and approaches that work universally.
2. Recognize that attackers are weaponizing generative AI (GenAI)
We already see that attackers can use GenAI to automate workflows and advanced attack chains, commoditizing the threat landscape. Sophisticated attacks are no longer reserved for advanced groups but are available to less experienced threat actors. Meanwhile, organizations across the spectrum represent viable targets, not just those promising a lucrative return.
GenAI continues to drive this trend in the cloud by reducing the technical requirements and associated costs of advanced attacks. We can also see the same development beyond the cloud, such as in sophisticated spear phishing campaigns.
In the past, an ordinary user could often identify a spear phishing email by its grammatical errors, unconventional phrasing, or other miscues. Today these obvious signs of malice appear less frequently or not at all. Attackers can use GenAI to produce error-free text fluent in the language of their intended target. Similarly, in cloud security, we see threat actors engineering advanced attack chains without possessing the required technical expertise.
To counter this, cloud security teams must:
- Acknowledge the new reality of the threat landscape. Teams should expect to be targeted by attackers regardless of their organization’s size or type. This means basic cloud security measures will no longer suffice, and only advanced and comprehensive safeguards can keep their cloud estate protected.
- Acquire advanced detection capabilities. Defenders need to gain the ability to detect, prioritize, and remediate advanced, multi-step attack chains. This involves neutralizing the often hidden risks that enable attackers to move laterally or escalate privileges. With the help of AI, attackers can easily exploit toxic risk combinations that endanger high-value assets (a simplified sketch follows this list).
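To make the idea of a toxic risk combination concrete, the simplified sketch below correlates a few individual findings into a prioritized attack path. The asset data and the rule itself are hypothetical and chosen purely for illustration.

```python
# Simplified sketch: correlate individual findings into attack paths.
# The asset data and rule are hypothetical; real tools build a full graph
# of cloud relationships rather than matching flags on single assets.
FINDINGS = [
    {"asset": "vm-web-01",  "public_exposure": True,  "critical_cve": True,  "privileged_role": True},
    {"asset": "vm-batch-7", "public_exposure": False, "critical_cve": True,  "privileged_role": False},
    {"asset": "db-payments","public_exposure": True,  "critical_cve": False, "privileged_role": True},
]

def is_toxic(finding):
    # Individually, each risk may look routine; combined on one asset they
    # give an attacker a likely entry point plus a path to high-value data.
    return (finding["public_exposure"]
            and finding["critical_cve"]
            and finding["privileged_role"])

for finding in FINDINGS:
    if is_toxic(finding):
        print(f"PRIORITIZE: {finding['asset']} combines exposure, a critical CVE, "
              "and a privileged role")
```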
The evolving threat landscape demands that cloud defenders reassess their defenses and prepare for more frequent and complex attacks.
3. Leverage AI to enhance performance
As attackers leverage AI to become more effective, cloud defenders must look to do the same. They must fully embrace the capabilities of AI -- including GenAI and machine learning -- to reduce manual work and boost efficacy.
For analysts and cloud defenders, many daily tasks involve repetition they can automate with AI. And automate they should, so security teams can focus on high-value tasks, which include analyzing the potential ramifications of a remediation and deciding how to proceed.
Often, the main issue isn’t the fix of a security issue but the side effects of the fix, which is why auto-remediation isn’t a silver bullet and only suits cases that don’t require human judgment, such as encrypting an unencrypted cloud asset. For the vast majority of cloud issues, auto-remediation falls considerably short of what effective cloud security needs: analysis that accounts for the complex and far-reaching implications of an action.
Simply taking a misconfigured service offline might resolve a security alert but disrupt intended functionality. Effective remediation often involves collaborating with stakeholders to implement precise solutions -- a task suited only for a human.
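For contrast, the narrow, judgment-free case mentioned above, encrypting an unencrypted cloud asset, might look roughly like the sketch below. It assumes AWS S3 and an existing KMS key; the bucket and key names are placeholders.

```python
# Sketch of a narrow, low-risk auto-remediation: turn on default encryption
# for a bucket that has none. Bucket and key identifiers are placeholders.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
BUCKET = "example-unencrypted-bucket"  # placeholder
KMS_KEY_ID = "alias/example-data-key"  # placeholder

try:
    s3.get_bucket_encryption(Bucket=BUCKET)
    print(f"{BUCKET}: default encryption already configured")
except ClientError as err:
    if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
        s3.put_bucket_encryption(
            Bucket=BUCKET,
            ServerSideEncryptionConfiguration={
                "Rules": [{
                    "ApplyServerSideEncryptionByDefault": {
                        "SSEAlgorithm": "aws:kms",
                        "KMSMasterKeyID": KMS_KEY_ID,
                    }
                }]
            },
        )
        print(f"{BUCKET}: default KMS encryption enabled")
    else:
        raise
```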
To leverage AI for enhanced security, cloud defenders should:
- Consult with vendors. Teams should check with vendors to learn about the AI capabilities they offer. They should also recognize that implementing these capabilities will take time and require more than scripting a few API calls.
- Automate where possible. Defenders need to identify all routine and repetitive tasks AI can assist with or automate, whether part of a workflow or all of it. For example, teams may rely on AI to generate remediation code and instructions for review and further modification (see the sketch after this list). This can help simplify and accelerate remediation.
- Leverage human intelligence for high-value tasks. Practitioners must make the ultimate judgment call for security issues that present complex considerations and implications. Meanwhile, teams shouldn’t treat AI as a silver bullet and should always double-check its assistance.
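The automation bullet above might look something like the following sketch, which asks a general-purpose LLM to draft remediation steps for an analyst to review. The OpenAI client and model name are one possible choice among many, and the output is treated strictly as a draft, never applied automatically.

```python
# Sketch: ask an LLM to draft remediation guidance for human review.
# Uses the OpenAI Python client as one example; any model or provider works.
# Nothing returned here is applied automatically; an analyst reviews it first.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

finding = (
    "S3 bucket 'example-logs' is reachable from the public Internet and "
    "is read by the model-training pipeline."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "You are a cloud security assistant. Propose remediation steps "
                    "and list possible side effects. Do not assume changes can be "
                    "applied without review."},
        {"role": "user", "content": f"Draft a remediation plan for: {finding}"},
    ],
)

draft = response.choices[0].message.content
print("DRAFT FOR ANALYST REVIEW\n" + draft)
```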
By adopting AI thoughtfully, security teams can boost their efficiency and adapt to the increasing demands of cloud and AI security.
Preparing for the future of AI in cloud security
As AI continues to mature, many organizations remain in the experimental phase, testing its capabilities across various use cases. Cloud security teams must prepare for the shift from experimentation to production by asking critical questions about the boundaries, data, and processes related to AI services.
Key considerations include:
- Access controls. Teams must ensure AI models can only access the data they need (an illustrative policy sketch follows this list).
- Management and updates. Defenders should develop strategies for maintaining and securing AI innovations as their organization continues to iterate and mature in this area.
- Security as a focal point. Teams should ensure that security represents a focus of AI development, not an afterthought.
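To illustrate the access-control point, the sketch below expresses a least-privilege policy for a hypothetical model-training role as an AWS IAM policy document built in Python. The bucket name, prefix, and actions are placeholders that teams would replace with their own.

```python
# Illustrative least-privilege policy: a hypothetical training role may read
# only one approved S3 prefix, rather than every bucket in the account.
# All names and ARNs are placeholders.
import json

TRAINING_DATA_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadApprovedTrainingDataOnly",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-ml-data",
                "arn:aws:s3:::example-ml-data/approved/*",
            ],
        }
    ],
}

print(json.dumps(TRAINING_DATA_POLICY, indent=2))
```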
AI’s transformative potential depends on secure implementation. By addressing its unique challenges, cloud security teams can empower their organizations to innovate responsibly while safeguarding against emerging threats.
Avi Shua is Chief Innovation Officer and Co-Founder at Orca Security.