Maturing ID wallets, investment for compliance and confidential AI -- privacy predictions for 2026


Privacy has been increasingly in the news over the last year with plans for government digital IDs and age verification for websites, as well as concerns about how eCommerce businesses and others use personal data.

What can we expect from the privacy landscape in 2026? Some industry experts give us their views.

Ralph Rodriguez, president and chief product officer at Daon, expects government ID wallets to mature. “After years of pilots, 2026 is when digital identity wallets begin scaling globally under formal trust frameworks with enforceable governance, liability rules, accredited labs, and certified conformance programs. In Europe, the European Digital Identity (EUDI) regulation and eIDAS frameworks will anchor wallet adoption in regulated assurance levels and cross-border interoperability, with Member States expected to offer an EU Digital Identity Wallet by the end of 2026. In parallel, Apple Wallet and Google Wallet now support state digital IDs in a growing set of jurisdictions, and TSA accepts mobile driving licenses (mDLs) from participating states. In the US, roughly 41 percent of Americans live in states where mDLs are active and roughly 76 percent live in states with programs live or in development. Adoption remains uneven but is accelerating as airlines, airports, agencies, and high-assurance use cases expand acceptance.”

This is echoed by experts at KnowBe4: “While privacy concerns have kept mandatory digital IDs largely at bay, digital identities tied to their real human identities will become far more popular with the rollout of large regional programs such as the EU Digital Identity Wallet, which will be available to all EU citizens in 2026. While these programs are unlikely to be compulsory, they are expected to become increasingly necessary for accessing digital services.”

Fredrik Forslund, VP and GM, International at Blancco, says growing regulation will drive more investment. “In 2026, organizations will continue ramping up investments in data protection, privacy, and compliance to keep pace with an increasingly complex regulatory landscape. Evolving global standards -- such as GDPR, DORA, PIPEDA, and CCPA -- along with industry-specific requirements like HIPAA and FINRA, will push businesses to adopt more structured compliance frameworks, AI-driven monitoring, and best-practice data handling and destruction processes. Growing awareness of the recently revised NIST SP 800-88 (Rev. 2) guidelines -- which include support for media reuse after proper sanitization -- will help reduce unnecessary risk, cost, and waste. Only companies that proactively manage compliance will reduce risk, avoid penalties, and strengthen customer trust in an era of heightened data accountability.”

Iain Brown, head of data science, Northern Europe at SAS, says, “With the EU AI Act obligations kicking in from August 2026, I expect we’ll see the first wave of headline fines for non-compliance. Boards will suddenly demand provable model lineage, data rights, and oversight as standard. ‘Explainability theatre’ disappears overnight, and synthetic data plus differential privacy become default tools for safe model refreshes.”
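To make the differential privacy idea mentioned above concrete: the standard Laplace mechanism releases an aggregate statistic with noise calibrated to a privacy budget epsilon, so no individual record can be inferred from the output. The sketch below is a minimal, generic illustration of that mechanism, not any vendor's implementation; the function name and defaults are our own.

```python
import random

def dp_count(true_count: float, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a count with Laplace noise calibrated for epsilon-differential privacy.

    The difference of two independent Exp(1) draws, scaled by
    sensitivity / epsilon, is a sample from the Laplace(0, sensitivity/epsilon)
    distribution -- the noise the classic Laplace mechanism adds.
    """
    scale = sensitivity / epsilon
    noise = scale * (random.expovariate(1.0) - random.expovariate(1.0))
    return true_count + noise
```

A smaller epsilon means a stronger privacy guarantee but noisier answers, which is why analysts speak of “spending” a privacy budget across repeated queries.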

Ravi Srivatsav, CEO and co-founder of DataKrypto, believes confidential AI will become essential:

In 2026, enterprises will integrate AI more deeply into core operations, moving beyond experimentation toward scaled, business-critical deployments. This expansion will expose the limits of today’s security measures and accelerate demand for ‘Confidential AI’ -- systems designed with built-in privacy, encryption, and trust guarantees.

Much like the early days of the web, when open protocols gave way to HTTPS and SSL, organizations will shift from simply using AI to securing the full AI lifecycle -- from data ingestion to model training and inference. As breaches targeting AI models and systems increase, companies will adopt proactive protection strategies by embedding privacy, encryption, and integrity controls directly into their AI architectures.

As enterprises advance their AI capabilities, Confidential AI will emerge as the new standard -- embedding privacy and protection into every layer of the AI lifecycle. Through continuous, end-to-end encryption and confidential computing, organizations can train and run models securely, even on sensitive data. In the year ahead, growing demand for zero-trust AI ecosystems will redefine the landscape, making security the hallmark of enterprise AI rather than an afterthought.

Rytis Ulys, head of data and AI at Oxylabs, says AI-native browsers will play a part, “To win the browser wars on privacy, AI models will increasingly run on-device. These ‘Edge AI’ models will read, summarise, and ‘collect’ web content locally into personal knowledge graphs. This trend will decentralize data collection, making it privacy-centric but also creating a new, untraceable form of data extraction.”

Jared Shepard, CEO of BYOD security specialist Hypori, expects a shift to virtualized architecture. “By 2026, we will see a decisive transition away from device-centric mobile security toward mobile virtualization architectures. Organizations will increasingly separate enterprise data and operations from personal devices entirely, delivering secure access without ever placing sensitive data on the endpoint. Compliance will no longer require hardware ownership. Security will no longer require personal intrusion. Privacy will be preserved by architecture, not policy.”

Alix Melchy, VP of AI at Jumio, says companies need to use privacy-preserving tech to combat fraud. “AI agents are making fraud more accessible and personalized than ever before. AI agents are lowering the barrier to executing complex fraud schemes, making it easier for fraudsters to automate attacks. To combat such AI-driven threats, businesses need a multi-layered approach. To address the worsening deepfake crisis, they must implement real-time solutions like multimodal liveness detection combined with rich contextual intelligence. Additionally, to protect user data during verification and build trust, companies should leverage privacy-preserving technologies like zero-knowledge proofs to combat identity fraud. By securely verifying identities without revealing sensitive data, companies can maintain user trust while effectively combating fraud.”

AU10TIX CEO, Yair Tal, sees a rise in the use of verifiable credentials:

Regulators are demanding two opposing outcomes: protect user privacy by minimizing data collection, yet keep bad actors out with greater certainty than ever. Verifiable Credentials bridge this gap by allowing organizations to receive only the specific confirmation they need -- age, eligibility, authorization -- without exposing unnecessary personal information. But VCs alone are not enough. Even a valid credential can be misused, spoofed, or fronted by a synthetic identity, especially as deepfake generation becomes easy and widespread.

To make VCs trustworthy at scale, they must be supported by a continuous risk layer that can detect anomalies, link credentials to real humans or verified agents, and identify evolving attack patterns.
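The selective-disclosure idea behind verifiable credentials can be shown in miniature: the holder derives a presentation containing only the claims the verifier requested, leaving the rest of the credential undisclosed. This is a simplified sketch with hypothetical field names; real VC presentations also carry cryptographic proofs binding the disclosed claims to the issuer's signature.

```python
# Hypothetical credential held in a user's wallet; all fields are illustrative.
CREDENTIAL = {
    "holder": "did:example:alice",
    "claims": {
        "name": "Alice Example",
        "date_of_birth": "2000-01-01",
        "age_over_18": True,
        "licensed_driver": True,
    },
}

def derive_presentation(credential: dict, requested: list) -> dict:
    """Build a presentation exposing only the claims the verifier asked for."""
    disclosed = {
        k: credential["claims"][k]
        for k in requested
        if k in credential["claims"]
    }
    return {"holder": credential["holder"], "claims": disclosed}
```

A bar checking ages would request only `age_over_18`; the name and date of birth stay in the wallet, which is exactly the “specific confirmation they need” pattern described above.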

“Regulators will treat employee privacy as a core compliance requirement, making privacy-preserving security strategies a business necessity,” says David Matalon, CEO and co-founder of Venn. “Companies that embrace this privacy-centric approach will better attract and retain top talent while protecting critical information in a modern, distributed workforce.”

Do you have particular privacy worries around the year to come? Tell us in the comments.

Image credit: md3d/depositphotos.com


© 1998-2025 BetaNews, Inc. All Rights Reserved.