How ‘confidential computing’ can add trust to AI [Q&A]

Artificial intelligence is finding its way into more and more areas of our lives. But while there are concerns around the use of the technology itself, there are much greater ones around how we secure it.

We spoke to Anand Kashyap, CEO and founder of Fortanix, to discuss confidential computing, a technology backed by Microsoft, Intel, Arm, AMD and Nvidia, which encrypts data while it's being processed, even by AI systems.

BN: In simple terms, how does confidential computing work?

AK: Confidential computing is a privacy-enhancing technology that protects data while it’s ‘in use’ by performing computation in a hardware-based secure enclave: a protected area of processor memory (on both CPUs and GPUs) where code and data run in complete isolation from the rest of the system. That isolation prevents unauthorized access and protects applications and data during processing. With this technology, data and applications stay secure even when attackers gain physical access to the servers or hold root passwords.
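To make the idea concrete, here is a minimal Python sketch of that flow. The SimulatedEnclave class is purely hypothetical, a stand-in for real TEE hardware such as Intel SGX/TDX or AMD SEV-SNP; in practice the isolation and key handling are enforced by the processor, not by application code.

```python
# Minimal, purely illustrative sketch of the confidential computing flow.
# SimulatedEnclave is a hypothetical stand-in for a hardware TEE; the real
# isolation is enforced by the CPU, not by Python.
from cryptography.fernet import Fernet  # pip install cryptography


class SimulatedEnclave:
    """Pretend-TEE: plaintext exists only inside run()."""

    def __init__(self, data_key: bytes):
        # In real hardware the key would be sealed to the enclave and
        # released only after successful remote attestation.
        self._fernet = Fernet(data_key)

    def run(self, encrypted_input: bytes) -> bytes:
        plaintext = self._fernet.decrypt(encrypted_input)  # decrypt inside
        result = plaintext.upper()                         # the "AI" workload
        return self._fernet.encrypt(result)                # re-encrypt output


data_key = Fernet.generate_key()
enclave = SimulatedEnclave(data_key)

# Outside the enclave, the data is only ever seen in encrypted form.
ciphertext_in = Fernet(data_key).encrypt(b"sensitive customer record")
ciphertext_out = enclave.run(ciphertext_in)
print(Fernet(data_key).decrypt(ciphertext_out))  # b'SENSITIVE CUSTOMER RECORD'
```

The point of the sketch is the shape of the flow: everything outside the enclave handles only ciphertext, and plaintext exists solely during the protected computation.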

BN: What are the technical mechanisms that underpin this?

AK: Confidential computing relies on several key hardware-based technologies designed to protect data while it's in use -- a critical complement to traditional encryption that has protected data at rest and in transit. 
 
At the core are Trusted Execution Environments (TEEs) -- secure, isolated areas of a processor where sensitive code and data can run, protected from the rest of the system, including the operating system and hypervisor. Examples include Intel’s SGX and TDX enclaves, AMD’s SEV-SNP for encrypted virtual machines, and Arm’s CCA with its Realm concept. Hardware manufacturers such as Intel, Arm, AMD, and NVIDIA are vital to the development, standardization, and future interoperability of confidential computing. The other key players are the hyperscale cloud service providers -- AWS, Microsoft, and Google -- which provide the global infrastructure needed to bring these hardware-level protections to customers at scale.
 
Another critical mechanism is memory encryption. This ensures that data stored in RAM remains encrypted and inaccessible to attackers, even if they gain physical access to the hardware. AMD’s Secure Encrypted Virtualization and Intel’s Total Memory Encryption are notable implementations. 
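The principle can be sketched in a few lines of Python. The EncryptedRAM class below is a toy model, not how SEV or TME are actually implemented -- real memory encryption happens in the memory controller, typically with AES-XTS -- but it shows why a physical RAM dump yields only ciphertext.

```python
# Toy model of transparent memory encryption: data in "RAM" is ciphertext;
# plaintext appears only transiently inside the (simulated) CPU boundary.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes


class EncryptedRAM:
    def __init__(self):
        self._key = os.urandom(32)      # key never leaves the "chip"
        self._pages = {}                # page_addr -> (nonce, ciphertext)

    def write(self, addr: int, data: bytes) -> None:
        nonce = os.urandom(16)
        enc = Cipher(algorithms.AES(self._key), modes.CTR(nonce)).encryptor()
        self._pages[addr] = (nonce, enc.update(data) + enc.finalize())

    def read(self, addr: int) -> bytes:
        nonce, ciphertext = self._pages[addr]
        dec = Cipher(algorithms.AES(self._key), modes.CTR(nonce)).decryptor()
        return dec.update(ciphertext) + dec.finalize()


ram = EncryptedRAM()
ram.write(0x1000, b"model weights / PII")
print(ram._pages[0x1000])   # an attacker dumping RAM sees only ciphertext
print(ram.read(0x1000))     # the CPU-side read recovers the plaintext
```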
 
Remote attestation is an essential part of confidential computing. It enables a workload running in a TEE to provide cryptographic proof to a remote party that it’s operating in a secure environment, using trusted code and configurations. 
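Here is a hedged Python sketch of the relying party’s side of that handshake. A real quote is signed by a hardware-rooted key and checked against the vendor’s certificate chain (for example, Intel’s DCAP flow); the HMAC below is only a self-contained stand-in for that signature.

```python
# Sketch of verifying an attestation "quote": check the signature, then
# check the reported measurement against an allowlist of trusted builds.
import hashlib
import hmac
import json

TRUSTED_MEASUREMENTS = {
    # SHA-256 of the enclave code/config we are willing to trust
    hashlib.sha256(b"approved-model-server-v1.2").hexdigest(),
}

def verify_quote(quote: dict, signature: bytes, attestation_key: bytes) -> bool:
    body = json.dumps(quote, sort_keys=True).encode()
    expected = hmac.new(attestation_key, body, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        return False                       # quote was forged or altered
    return quote["measurement"] in TRUSTED_MEASUREMENTS

# The workload in the TEE produces a quote over its own measurement...
key = b"hardware-rooted-key (simulated)"
quote = {"measurement": hashlib.sha256(b"approved-model-server-v1.2").hexdigest(),
         "nonce": "freshness-token-123"}
sig = hmac.new(key, json.dumps(quote, sort_keys=True).encode(), hashlib.sha256).digest()

# ...and the remote party releases secrets only if verification succeeds.
print(verify_quote(quote, sig, key))  # True
```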
 
Further protections include measured and secure boot processes, which verify the integrity of firmware and software components during startup, and hardware-managed cryptographic keys, which never leave the chip -- ensuring only verified, authorized entities can decrypt or manipulate sensitive data. 
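The measured-boot part follows a simple ‘extend’ pattern, shown below in Python: each stage hashes the next component into a running register (as a TPM PCR does), so the final value commits to the entire chain, and any tampered stage changes it.

```python
# "Measure then extend": the register value is reproducible only if every
# boot stage is unmodified; a tampered stage breaks attestation.
import hashlib

def extend(register: bytes, component: bytes) -> bytes:
    return hashlib.sha256(register + hashlib.sha256(component).digest()).digest()

register = b"\x00" * 32  # registers start zeroed at power-on
for stage in (b"firmware-v7", b"bootloader-v3", b"kernel-6.8"):
    register = extend(register, stage)

print(register.hex())  # commits to the whole boot chain
```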
 
Together, these mechanisms enable strong isolation, verification, and encryption of data during processing -- the foundation of confidential computing. These technical underpinnings allow for the development of applications and services which can be leak-proof, breach-proof, and tamper-proof. 

BN: Where is AI’s potential coming into conflict with trust challenges?

AK: AI’s potential is running headfirst into trust challenges because the very data that powers it is also the most sensitive and tightly protected. As organizations look to adopt generative AI (GenAI) to stay competitive, they're simultaneously grappling with serious concerns around data leakage, compliance risks, and exposure to emerging cyberthreats. Many off-the-shelf GenAI tools aren’t seen as viable because they don’t provide adequate control over where data goes or how it’s protected. 
 
At the same time, chief information security officers (CISOs) are facing the growing problem of Shadow AI -- where employees use public AI tools in their daily workflows, often unintentionally uploading proprietary or regulated data. Once that information is entered into a GenAI model outside the organization’s control, it’s virtually impossible to secure. 
 
Ultimately, while data is the engine behind AI, it’s also a major point of vulnerability. Breaches involving sensitive or regulated data -- like personally identifiable information (PII), source code, or financial models -- can lead to significant operational, legal, and reputational damage. Until organizations can confidently trust that their data is secure at every stage of the AI lifecycle, AI adoption will continue to clash with real-world risk and compliance concerns. 

BN: What steps should businesses be taking to ensure they adopt AI safely and securely?

AK: Securing sensitive company data is a challenging balancing act for CISOs as they weigh the desire for their organizations to take advantage of the perceived value of GenAI while also protecting the sole asset that makes these benefits possible -- their data. 
 
At a high level, CISOs should look at protecting data through its entire life cycle. This includes: 

  • Protecting the data before it is even ingested into the GenAI model 
  • Securing the data assets while they are being used in the GenAI model 
  • Ensuring that the data output is completely secured, as this new data will drive the business outcomes and create true value 

If the data life cycle isn't secure, this becomes a business-critical exposure. 
 
More specifically, protecting sensitive data from leaks requires a multifaceted approach. It starts with limiting shadow AI as much as possible, but it is just as important to preserve data security and privacy with some basic best practices: 

  • Encryption: Data encryption across its life cycle is vital, but it's equally important to manage and store encryption keys securely and separately from the data itself. 
  • Obfuscation: Use data tokenization to anonymize any sensitive or personally identifiable data before it is fed to an LLM. This prevents data that enters the AI pipeline from being corrupted or leaked (see the sketch after this list). 
  • Access: Apply granular, role-based access controls to data so that only authorized users can see and use the data in plain text. 
  • Governance: Commit to ethical business practices, embed data privacy across all operations, and remain current on data privacy regulations. 
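
As a rough illustration of the tokenization and access-control points above, here is a short Python sketch. The TokenVault class and the ‘pii-reader’ role are invented for the example, not a reference to any particular product.

```python
# Replace PII with random tokens before a prompt leaves the organization,
# keep the token-to-value mapping in an internal vault, and let only
# authorized roles detokenize.
import re
import secrets

class TokenVault:
    def __init__(self):
        self._forward = {}   # value -> token
        self._reverse = {}   # token -> value

    def tokenize(self, value: str) -> str:
        if value not in self._forward:
            token = f"<TOK_{secrets.token_hex(4)}>"
            self._forward[value] = token
            self._reverse[token] = value
        return self._forward[value]

    def detokenize(self, text: str, role: str) -> str:
        if role != "pii-reader":                # crude role-based control
            raise PermissionError("role not allowed to see plaintext PII")
        for token, value in self._reverse.items():
            text = text.replace(token, value)
        return text

vault = TokenVault()
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

prompt = "Summarize the complaint from jane.doe@example.com about billing."
safe_prompt = EMAIL.sub(lambda m: vault.tokenize(m.group()), prompt)
print(safe_prompt)   # the LLM only ever sees the placeholder token
print(vault.detokenize(safe_prompt, role="pii-reader"))
```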

BN: Do you see the widespread adoption of confidential computing shaping the landscape of enterprise AI in the future? What’s its likely impact on different industries?

AK: We are starting to see adoption of confidential computing grow across chip vendors, cloud providers, and server infrastructure. Once the technology is widely available and it becomes easier to deploy AI inside it, there will be no reason not to use confidential computing for sensitive workloads. 
 
The industries most likely to be early adopters, and with the most to gain, are financial services, healthcare, and the public sector. These industries are heavily regulated, but their need to use GenAI with their most sensitive data will compel them to adopt confidential computing as it becomes more ubiquitous and easier to use. 

