Why businesses can't go it alone over the EU AI Act

When the European Commission proposed the first EU regulatory framework for AI in April 2021, few would have imagined how quickly such systems would evolve over the following three years. Indeed, according to the 2024 Stanford AI Index, in the past 12 months alone chatbots have gone from scoring around 30-40 percent on the Graduate-Level Google-Proof Q&A Benchmark (GPQA) to around 60 percent. In other words, they have moved from performing only marginally better than random guessing to approaching the level of the average PhD scholar.

The benefits of such technology are almost limitless, but so are the ethical, practical, and security concerns. The landmark EU AI Act (EUAIA) was adopted in March this year to address these concerns by ensuring that any AI systems used in the European Union are safe, transparent, and non-discriminatory. It provides a framework for establishing:

  • Rules to ensure AI technology is used in a responsible and fair manner.
  • Safeguards to protect individuals from being adversely affected by decisions made by AI.
  • Mechanisms to verify that decisions made by AI are correct.
  • Avenues to ensure individuals can be held accountable for incorrect decisions made by AI.
  • Approaches for enforcing governance, ethics, and safety requirements of AI.

The regulations apply to any company involved in building or using AI systems, regardless of industry or sector. Given the utility and predicted ubiquity of AI, this means virtually every large organization needs to start looking at how the legislation affects them and what needs to be done to ensure compliance. 

An intelligent regulatory approach

Regulating any new technology is difficult, let alone one evolving at the breakneck speed of AI. It can be tricky even to define what counts as an AI system, never mind parsing which elements need to be regulated and how. Moreover, unlike other tools and technologies, AI systems can be highly unpredictable, because their behavior shifts as models are updated and exposed to new data and interactions. This means even the engineers who build them often can’t be sure how their AI will behave in any given situation.

Nevertheless, regulation is important, and it’s encouraging that so many stakeholders in both the public and private sectors agree on the need for effective guardrails. At the same time, overly burdensome regulations could hold back the development of important innovations. AI regulation therefore needs to strike the right balance between mitigating potential harms and abuses and enabling companies to reap the rewards, such as improved R&D. Happily, by opting for an outcomes- and use case-based approach to regulation rather than a process-based one, the EUAIA should avoid subjecting companies to overly onerous red tape while still putting effective safeguards in place.

A brand new skillset

Yet even well-designed regulations place a number of obligations on businesses, and the EUAIA will require all affected organizations to be a great deal more transparent about their use of AI. By June 2026, organizations will need to have classified and registered their AI models based on the four risk categories outlined by the EU. They must also establish the data governance, risk and quality management, and incident reporting processes needed to ensure the accuracy, robustness, and security of their AI systems. Finally, they must incorporate the tools to maintain human oversight of AI systems, and build out the technical documentation needed to assess and demonstrate compliance. 
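To make the classification and registration step more concrete, the sketch below shows how an organization might begin cataloguing its AI systems against the Act's four risk tiers (unacceptable, high, limited, and minimal). It is a minimal illustration in Python; the class names, fields, and example entries are hypothetical and are not prescribed by the legislation.

    from dataclasses import dataclass
    from enum import Enum

    # The EU AI Act's four risk tiers.
    class RiskTier(Enum):
        UNACCEPTABLE = "unacceptable"  # prohibited practices, e.g. social scoring
        HIGH = "high"                  # e.g. AI used in recruitment or credit decisions
        LIMITED = "limited"            # transparency obligations, e.g. chatbots
        MINIMAL = "minimal"            # e.g. spam filters and most other uses

    @dataclass
    class AIModelRecord:
        """One entry in a hypothetical internal inventory of AI systems."""
        name: str
        owner: str              # accountable business owner
        use_case: str
        risk_tier: RiskTier
        human_oversight: bool   # is a human-in-the-loop control in place?
        incident_contact: str   # who receives serious-incident reports

    # Example: register a customer-facing chatbot under the limited-risk tier.
    registry = [
        AIModelRecord(
            name="support-chatbot-v2",
            owner="Customer Service",
            use_case="Answering customer queries",
            risk_tier=RiskTier.LIMITED,
            human_oversight=True,
            incident_contact="ai-governance@example.com",
        )
    ]

    # Flag anything that would be prohibited outright under the Act.
    prohibited = [m.name for m in registry if m.risk_tier is RiskTier.UNACCEPTABLE]
    print(f"Registered systems: {len(registry)}; prohibited: {prohibited}")

In practice this inventory would feed the quality management, incident reporting, and technical documentation processes described above, rather than stand alone.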

Organizations will therefore need to create a number of brand-new workflows, as well as re-engineer existing ones. Unfortunately, this is unlikely to be a simple process. New roles, such as chief AI officer, will need to be created and integrated into existing management teams, while functions like content creation and visual design will find they have new responsibilities that require in-depth training.

An AI ecosystem

Managing such a significant organizational change in a matter of months is a huge challenge for any organization, especially large enterprises. It is made even more difficult by the worsening AI skills gap. The EUAIA hasn’t even come into effect, yet demand for AI professionals is already far outstripping supply. Indeed, between November 2022 and August 2023, the number of AI-related job roles rose 21-fold. With thousands of organizations potentially having to present metrics, models, and reports to demonstrate compliance, the AI skills gap could quickly escalate into a full-blown crisis.

The first step in preparing for the EUAIA is therefore to develop a comprehensive strategy for identifying and addressing the skills gaps that must be filled if AI implementations are to be effective and compliant. Given that barely one in ten companies has a chief AI officer, the development and execution of this strategy will likely be driven by other technology executives in conjunction with appropriate AI experts. If that subject matter expertise does not exist internally, businesses will need to turn to external partners to avoid poor decision-making, misaligned expectations, and regulatory non-compliance.

The dawn of the AI-powered world

The potential benefits of AI in almost every industry, from medical science and green tech to communications and content creation, are enormous. However, any technology this powerful carries serious risks, so it is hugely encouraging that the EU is moving swiftly to bring in robust but flexible legislation that protects citizens without stunting innovation.

Meeting these obligations is important, and businesses will need to plan carefully and invest wisely to do so. But they should also remember they are not alone. The breakneck speed at which AI is evolving means organizations will have to become more collaborative, sharing knowledge and expertise, in order to make the transition successfully. By working with experts who have the insight and knowledge to deliver successful projects in a way that is compliant with the evolving regulatory landscape, businesses can position themselves to be dominant players in the new AI-powered world.


Arun ‘Rak’ Ramchandran is President & Global Head, GenAI Consulting & Practice, Hi-Tech & Professional Services, Hexaware.
