How machine identity can close a critical AI accountability gap in the EU AI Act


European lawmakers are plowing ahead with what could be one of the most important pieces of legislation in a generation. The EU AI Act will take a notably more proactive approach to regulation than current proposals in the US and UK. But experts have spotted a critical loophole introduced in amendments to the legislation that could expose citizens and societies to AI risk rather than protect them from it.

In short, this loophole could undermine the entire purpose of the proposed law, and it must be closed. To do this successfully, legislators need to prioritize machine identities as a way to enhance AI governance, accountability, security and trust. Time is running out.

What went wrong?

It’s heartening to see European lawmakers grapple with what will be some of the biggest technology-related challenges the world has ever faced, as AI becomes embedded ever more pervasively into IT systems.

The lawmakers’ approach has been to create a risk-based framework for AI systems: those considered an "unacceptable risk" are banned outright, while those classed as "high risk" must pass multiple stages of assessment and registration before approval. Originally, the act contained definitive guardrails. Any AI system was considered high risk if it was to be used for one of several high-risk purposes listed in Annex III of the draft legislation. Developers and deployers of these systems would be required to ensure the technology is safe, free from discriminatory bias, and accompanied by publicly accessible information about how it works.

This has now changed in the most recent draft of the legislation. Thanks to a new loophole, developers themselves will be able to decide whether their systems are high risk. Over 100 civil society organizations are calling for it to be removed and the original system to be reinstated. They argue that the loophole would allow unscrupulous or unwitting developers to circumvent the law’s basic requirements by self-certifying their systems as "limited" or "minimal" risk. It would also create legal uncertainty over what is considered high risk, and fragment the market across the region as member states interpret the threshold differently. Local authorities may also struggle to police developer self-assessment effectively.

Reducing risk and enhancing accountability

Put simply, the latest version of the EU AI Act weakens regulatory oversight and developer accountability, undermines trust and security, and could put users at risk. Beyond closing the loophole at the earliest opportunity, lawmakers must go further to reduce risk and enhance accountability in the fast-growing AI industry.

One example of an important control is a kill switch. Common in manufacturing, chemical processing, and even petrol stations, a kill switch provides a safe way to stop a dangerous situation from getting out of control. For AI, this isn’t a physical switch but a way to revoke a model’s ability to use its machine identity to authenticate.
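As a rough illustration of the idea, the sketch below models the kill switch as revocation of the machine identity a model presents when it authenticates. The registry, identifiers, and serving function are all hypothetical; a production system would use certificates and a real revocation mechanism rather than an in-memory set.

```python
# Minimal sketch: an AI "kill switch" modeled as revocation of the
# machine identity a model uses to authenticate. IdentityRegistry and
# the identifier scheme are hypothetical, for illustration only.

class IdentityRegistry:
    """Tracks which machine identities are currently trusted."""

    def __init__(self):
        self._revoked: set[str] = set()

    def revoke(self, identity_id: str) -> None:
        # The "kill switch": once revoked, the identity can no longer
        # authenticate, cutting the model off from the systems it uses.
        self._revoked.add(identity_id)

    def is_trusted(self, identity_id: str) -> bool:
        return identity_id not in self._revoked


def serve_request(registry: IdentityRegistry, identity_id: str, prompt: str) -> str:
    # Every request re-checks the identity, so revocation takes effect
    # immediately rather than at the next restart.
    if not registry.is_trusted(identity_id):
        raise PermissionError("model identity revoked; refusing to serve")
    return f"model output for: {prompt}"


registry = IdentityRegistry()
print(serve_request(registry, "model-7f3a", "hello"))  # allowed
registry.revoke("model-7f3a")                          # throw the kill switch
try:
    serve_request(registry, "model-7f3a", "hello")
except PermissionError as exc:
    print(exc)                                         # request refused
```

Because the check happens on every authentication rather than once at startup, pulling the identity stops the model wherever it is deployed, which is what makes identity revocation a workable stand-in for a physical switch.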

To be clear, we’re not talking about a single kill switch per AI model. In fact, there may be thousands of machine identities tied to a single model, associated with the inputs that train the model, the model itself and its outputs. These models must be protected at every stage -- both during training and while they’re in use. This means that every machine, during every process, needs an identity to prevent poisoning, manipulation and unauthorized access. This will naturally increase the number of machine identities on networks, and managing them at that scale is a problem AI itself can help solve. Many of these same principles can be applied to stop AI models from being modified or tampered with, just as code signing protects the apps on your smartphone and laptop today.
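The tamper-protection analogy works the same way code signing does on phones and laptops: an artifact is signed with a private key, and anything that fails verification is refused. Below is a minimal sketch, assuming the open-source `cryptography` package and a stand-in byte string for the model artifact; the key handling here is illustrative, not a prescribed mechanism from the Act or any vendor.

```python
# Minimal sketch of the code-signing analogy applied to a model
# artifact. Requires: pip install cryptography

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In practice the key would belong to the machine identity issued to
# the training pipeline; here we generate one just for illustration.
signing_key = Ed25519PrivateKey.generate()
verify_key = signing_key.public_key()

model_bytes = b"...serialized model weights..."  # stand-in for a real artifact
signature = signing_key.sign(model_bytes)


def load_model(artifact: bytes, sig: bytes) -> bytes:
    # Verification fails loudly if the artifact was altered after
    # signing -- the same check applied to signed apps today.
    try:
        verify_key.verify(sig, artifact)
    except InvalidSignature:
        raise RuntimeError("model artifact failed signature check; refusing to load")
    return artifact


load_model(model_bytes, signature)  # verifies and loads
try:
    load_model(model_bytes + b"tampered", signature)
except RuntimeError as exc:
    print(exc)                      # tampered artifact is rejected
```

Signing the inputs, the trained model, and its outputs in this way is how one model ends up tied to many machine identities, one for each stage of the pipeline.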

A responsibility to try harder

Members of the European Parliament will have a heavy burden to bear if they fail to close the loophole in the EU AI Act. But this should be just the start. Closer collaboration with industry is required to illuminate the potential value that robust, existing technologies like machine identity can have in cementing AI governance and accountability. Fueling research and development into safety and security -- like AI kill switches -- will benefit everyone except our adversaries.


Kevin Bocek is VP of Ecosystem and Community at Venafi.

