Why AI panic in 2023 will yield to AI pragmatism in 2024

2023 rapidly became 'The Year of AI Panic' as governments and the press descended into an AI frenzy.

Progress in generative AI, spearheaded by GPT-4's release in March, offered users incredible tools with visible utility and practical benefit. Its impact could be felt across their personal and business lives. From that point there has been a buzz around AI, with a snowball effect across the media fueled by sudden engagement from the most senior levels of government across the planet. 2023 has seen the AI train fly across our screens, and the pace of developments from a technical, policy and regulatory perspective has been almost impossible to keep up with. So too has the FUD -- the fear, uncertainty and doubt that accompany disruption.

AI has apparently transitioned from "Science Fiction to Science Reality," as Michelle Donelan said at the UK's AI Safety Summit dinner in November. But has it… has it really?

The pace of change means that defining AI itself has been hard. The Organisation for Economic Co-operation and Development (OECD) definition was updated in November 2023 and adopted by the EU for its AI Act. In October, in preparation for the UK AI Safety Summit, the UK defined "frontier AI" as highly capable general-purpose AI models that can perform a wide variety of tasks and match or exceed the capabilities present in today's most advanced models. It also recognized that today these are built on LLMs (Large Language Models), but that in future they may be built on entirely different technologies.

Understanding what AI is has not been the only challenge. The nefariously named OpenAI, the owner of ChatGPT, has nothing to do with open source software, open data, or even opening up AI.

OpenAI was structured in its early days as a not-for-profit to support the planned opening up of its outputs, but it quickly shifted to being closed and using proprietary code. The not-for-profit structure remained, however, with a board not driven by shareholder value. This left Sam Altman vulnerable to a sharp exit from, and re-entry to, the company he co-founded over a matter of days in mid-November. Altman doing the Hokey Cokey emphasized the dependency of OpenAI's customers on closed AI, with no access to the code or data behind the AI solutions on which they now rely. Users are effectively at the whim of a closed commercial entity for what could become critical infrastructure. Ought that really to be what the future of global technology infrastructure is built on? Shouldn't it be transparent, enabling trust?

The proponents of existential risk -- the theory that significant progress in AI might lead to human extinction or an irreversible global catastrophe -- have been a loud voice and have fed the fear of AI. We are likely a very, very long way from the innovation that would allow this existential risk to become anything like a reality. Whilst some countries have pushed forward with regulation, the UK has taken a more considered approach and will not regulate until there is greater clarity on what it is regulating -- a decision that will likely prove to be wise.

Meta's LLM Llama was released in February on a license restricted to researchers and leaked in March, enabling many to innovate. LLMs are hugely expensive in terms of resources, power and cost, and so access to one had been the missing piece for the open communities' engagement prior to this leak. The formal release of Llama 2 in July gave an open LLM a formal structure, alongside the UAE's open source Falcon LLM. Of course, in a world of AI panic there has been much scaremongering around the use of open LLMs and far too little understanding of the benefits: trust created through transparency; access to technology that would otherwise be in the hands of only a few companies, none of which are based in the UK or Europe; and the competition and innovation enabled by a collaborative, open approach to AI.

AI Fatigue

The importance of AI not just to the tech ecosystem but to society, and the ensuing political and regulatory interest, has led many to give opinions with little understanding of either "AI" or "openness", meaning a key conversation of 2023 is lacking vital knowledge and insight. The tabloid and political hype around AI has led to the year ending with a great deal of AI fatigue.

Understanding is central to risk. Facts must be assessed and analyzed, and in turn this risk analysis is the crucial foundation of policy. With so much at stake and so many vested interests, there are of course significant risks -- such as regulatory capture, or simply keeping key technologies closed -- which might impact all of our futures. Unlike existential risks, these are real and measurable risks today.

Those responsible for today's decision-making on AI are choosing the future of global technology and society. Their decisions will either democratize the future of technology or lead to it being held by a few companies. Governments that choose the latter will knowingly repeat the mistakes of our recent tech history. Wise governments and policymakers will make a considered choice to open up AI in 2024.

This choice requires risk-based assessment built on knowledge and understanding. In 2023, a few companies shouting loudly put forward a version of open source that suited their vested interests, while others spread confusion over what exactly open source is. Neither led to a clear understanding of open source, its values or even its risks.

AI adoption has been accompanied by a recognition of the place of openness in AI, and that our tech sector and digital infrastructure have been moving in this direction for the last 30 years. Understanding what open source software licensing is, what it requires, the benefits it brings and the nuances of how and why it does so will enable policymakers to make appropriate laws. We should remember: laws override licensing and must be made with a clear understanding of their impact.

Governments must learn from our recent tech history. The dominance of a few big players delivering and controlling our public and private sector infrastructures is not a model that enables future innovation or a competitive market.

In 2024 we will see a shift from AI panic to AI pragmatism as governments and policymakers finally engage with the open source communities whose voices have been so absent from the 2023 AI discussions and consultations.

Amanda Brock is CEO of OpenUK, the not-for-profit organization representing the UK’s Open Technology sector
