A practical solution to the AI challenge: Why it matters that the AI Safety Institute has embraced open source

To be a world leader in AI, the UK must leverage its position as Europe’s number one in open source software. As the PM said on Friday, open source “creates start-ups” and “communities”. The UK’s open source community has flourished under the radar of the UK tech sector in recent years. OpenUK’s 2023 report showed that 27 percent of UK tech sector Gross Value Added (GVA) was attributable to the business of open source from the UK.

Since the AI Safety Summit last November, the UK has not followed the European Union down the route of a legislative solution. We will soon see the outcome of the EU’s gamble in being the first in the world to legislate. Its highly prescriptive legislation will likely be out of date before it takes effect, and it may engender regulatory capture in AI innovation in the EU: few beyond big tech will be able to manage the compliance program necessary to meet the regulation. The risk is obvious and real.

Finding a middle ground

Given these risks, I welcomed the PM’s reaffirmation that he “would not be legislating” on AI and is instead focused on a responsible, pro-innovation approach to the technology. This is typified by the announcement, made the same day, that the UK is open sourcing the testing platform at the UK-founded AI Safety Institute -- a practical (and smart) approach.

As someone who spent 25 years as a lawyer, I feel I can say that the answer to most technical challenges, including AI, is not a legal but a practical solution. This is certainly the case, in my opinion, when you don’t fully understand the challenges, let alone the answers. In AI, one of the most fast-paced areas of technology we have encountered, we simply cannot know the answers, and detailed regulation would simply repeat the mistakes of history. Rigid legislation in the early 2000s that could not adapt to the pace of innovation led to a situation where only a few tech companies control our digital present.

Why it matters

So why is open sourcing the UK AI Safety Institute Testing Platform such a smart move? 

Firstly, having built the testing platform for Large Language Models (LLMs), the UK is giving itself the opportunity to become the de facto standard in testing LLMs by open sourcing the platform under the MIT license, which is approved by the Open Source Initiative (OSI). If that succeeds, an ecosystem will grow around the UK platform. To be clear, this is traditional open sourcing of software; it is not about AI models themselves being “open sourced” or made open. And it is about much more than sharing code under an open source license, or meeting the legal definition of open source -- the practical implications of this move go far beyond that definition.

Open source is, in reality, about enabling others to use a software platform, to build on it and modify it and, because it is open, to engage with it. Because the code can be recycled and reused, others wishing to build a testing platform are far more likely to start with the UK’s rather than waste time building from scratch. Openness also enables contributions and collaborative development, which should save costs, minimize costly mistakes and produce better innovation, following the old adage that “given enough eyeballs, all bugs are shallow.”

Secondly, to really work, the platform needs to interact with the different systems of the AI companies whose LLMs are to be tested, and it needs to interface with the LLMs themselves. Because it is open sourced, those companies can inspect the code and build whatever add-on code they require to interact with it -- APIs and the rest. Its transparency and open source nature thus enable smooth interaction with the AI companies. The AISI has said that it cannot possibly test all platforms, but by offering the platform openly, and giving access to it, organizations can also self-test.
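To make the “add-on code” point concrete, here is a minimal, purely illustrative sketch of how an open testing platform lets a company plug its own LLM in and self-test. The names (ModelAdapter, TestCase, run_suite, EchoModel) are hypothetical and are not taken from the AISI’s actual codebase; the point is only that, with the platform’s code visible, a company can write a small adapter against a published interface.

```python
# Hypothetical sketch -- NOT the AISI platform's real code.
# Shows how an open testing platform could define a small interface
# that AI companies implement to connect their own LLMs and self-test.
from dataclasses import dataclass
from typing import Protocol


class ModelAdapter(Protocol):
    """The interface a company implements to connect its LLM."""
    def complete(self, prompt: str) -> str: ...


@dataclass
class TestCase:
    prompt: str
    must_contain: str  # a deliberately naive pass/fail criterion


def run_suite(model: ModelAdapter, suite: list[TestCase]) -> dict:
    """Run every test case against the model and report a pass count."""
    passed = sum(tc.must_contain in model.complete(tc.prompt) for tc in suite)
    return {"passed": passed, "total": len(suite)}


# A company "self-testing": a stub standing in for a real LLM endpoint.
class EchoModel:
    def complete(self, prompt: str) -> str:
        return f"You asked: {prompt}"


suite = [TestCase(prompt="What is 2+2?", must_contain="2+2")]
report = run_suite(EchoModel(), suite)
print(report)  # {'passed': 1, 'total': 1}
```

Because the interface is open, a vendor can swap EchoModel for a thin wrapper around its own API without needing access to, or permission from, the platform’s maintainers -- which is exactly the transparency benefit described above.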

Thirdly, the UK’s AI Safety Institute has adopted a model that enables ubiquity and, through transparency and trust, creates a very real opportunity for the platform to become the global de facto standard for model testing. This third benefit is of course all the greater for the Memorandum of Understanding signed with the US on April 1, committing the two countries to collaborate on platform testing, as well as the sign-up of many of the LLM-owning organizations that committed to testing after the AI Safety Summit last November. The success of that commitment had been in jeopardy while it waited for a platform; that will now change.


Amanda Brock is CEO of OpenUK, the not-for-profit organization representing the UK’s Open Technology sector

