AI governance: Five essential design principles to keep businesses ahead of the curve
In 2023, we witnessed the beginnings of a global AI-driven revolution. With recent studies revealing that one in six UK organizations has already embraced artificial intelligence (AI), these technologies have solidified their position as drivers of the next wave of digital innovation.
However, until now, organizations have largely focused on AI experimentation, which has limited the benefits they’ve unlocked. They are now seeking to mature their strategies and embrace AI in a more transformational manner by embedding these technologies into their core business processes. The launch of solutions like the OpenAI GPT Store towards the end of 2023 is set to accelerate this drive for AI maturity, making it easier for organizations to embed pre-built use cases into their operations.
As this process continues and AI becomes more widely adopted, it will be vital for organizations to ensure that safety and regulatory compliance remain front of mind. According to Gartner, two-thirds (66 percent) of organizations are yet to implement tools to mitigate the risks of AI, which highlights a major shortcoming that needs to be addressed in 2024.
Beyond the hype
With AI growth showing no signs of slowing, apprehension around the absence of safety regulations has emerged as a global concern. The UK AI Safety Summit and President Biden’s Executive Order were huge first steps in driving alignment on a global AI risk framework. Backed by more than 25 countries including the US, UK, China and six EU member states, the Bletchley Declaration symbolized a defining moment in the evolution of AI ethics and safety.
This is a promising sign that the world has learned from past mistakes. Historically, the international community has consistently missed the mark on global governance policies. The negative effects of social media, for example, have only recently been addressed through the likes of the UK’s Online Safety Act, which arrived too late, while the goals of climate change initiatives such as the Paris Agreement have also slipped away. This time, the international community is ahead of the curve.
It’s worth noting that the emphasis of these agreements isn’t on stifling innovation, but on ensuring progress can be made without causing harm to institutions and societies. Nor does the responsibility sit solely with governance at the highest level: businesses and organizations themselves have a critical role to play in the global effort towards AI safety. The voluntary commitments that leading tech organizations -- including OpenAI and Meta -- have already made towards the transparent development of AI technology set a shining example for others to follow.
Designing for trust
As they prepare to meet the requirements of any global AI frameworks that emerge in the future, organizations should ensure they’re aligned with five key design principles that will enable them to leverage these transformational technologies while retaining the trust of the global community:
1. Fairness
It is critical to consider the measures required to address the risk of data bias in AI. The large language models (LLMs) that power technologies like ChatGPT are trained on historical data, so regulators have pointed out their potential to fuel discrimination and exclusion in decision-making processes. For instance, the use of this type of AI in loan applications must not build on the subjective and unfair biases that humans may hold. Left unchecked, such bias could lead to a customer having their mortgage application denied because the AI identifies a pattern of individuals from similar demographic backgrounds defaulting on their repayments. If the AI rejects the application on that basis, it could lead to significant unfairness and potential charges of discrimination against the lender.
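What might a basic fairness control look like in practice? The following is a minimal sketch in Python, not drawn from any regulation or the article itself: it compares approval rates across demographic groups before a loan-decision model goes live. The column names and the 20 percent threshold are hypothetical policy choices.

```python
# Minimal sketch of a demographic-parity check on loan decisions.
# Column names ("postcode_area", "approved") and the 0.2 threshold
# are illustrative assumptions, not a standard.
import pandas as pd

def approval_rate_by_group(df: pd.DataFrame, group_col: str, decision_col: str) -> pd.Series:
    """Approval rate per demographic group; large gaps can flag potential bias."""
    return df.groupby(group_col)[decision_col].mean()

# Illustrative data: 1 = approved, 0 = declined
applications = pd.DataFrame({
    "postcode_area": ["A", "A", "B", "B", "B", "A"],
    "approved":      [1,   1,   0,   0,   1,   1],
})

rates = approval_rate_by_group(applications, "postcode_area", "approved")
gap = rates.max() - rates.min()
print(rates)
if gap > 0.2:  # illustrative threshold: when to escalate for human review
    print(f"Warning: approval-rate gap of {gap:.0%} across groups -- review for bias")
```

A check this simple cannot prove a model is fair, but running it routinely makes disparities visible before they reach customers.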
2. Explainability
AI is increasingly being used to make decisions that impact individual rights, safety, and core business operations. Employees should therefore be able to trust that their AI systems reach sound conclusions, and be able to explain confidently to customers or other teams in the business why a decision has been made. This knowledge not only helps to mitigate risk, it also builds trust and adoption. If organizations can’t explain how and why the technology they rely on reaches a particular decision, even the most cutting-edge AI capabilities cannot be used effectively.
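One practical route to that kind of explanation, sketched below under the assumption of a scikit-learn style model, is permutation importance: shuffle each input feature in turn and measure how much the model’s accuracy drops. The feature names are hypothetical.

```python
# Minimal sketch: model-agnostic explainability via permutation importance.
# Dataset and feature names are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["income", "loan_amount", "credit_history", "employment_years"]

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature and measure the drop in accuracy: the bigger
# the drop, the more the model relies on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```

A ranked list like this gives a support team a plain-language starting point: “the model weighted credit history most heavily in this decision”.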
3. Reliability
Organizations must ensure that any AI-generated outputs are validated and therefore reliable. Data quality is critical to enabling this: poor-quality data generates poor-quality outcomes. Organizations therefore need to adopt technologies and processes that ensure data is cleansed of errors, duplicates, and false positives. In doing so, they can be confident that their AI is making reliable decisions.
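As an illustration only, the following Python sketch shows the kind of cleansing step described above, applied to a hypothetical customer table: exact duplicates are dropped, rows with impossible values are discarded, and obvious formatting errors are normalized.

```python
# Minimal sketch of a data-cleansing pass before data feeds an AI system.
# Table contents and validation rules are illustrative assumptions.
import pandas as pd

raw = pd.DataFrame({
    "customer_id": [101, 101, 102, 103],
    "age":         [34,  34,  -1,  58],   # -1 is a data-entry error
    "email":       ["a@x.com", "a@x.com", "b@x.com", " C@X.COM "],
})

cleaned = (
    raw.drop_duplicates()                  # remove duplicate records
       .query("age > 0 and age < 120")     # discard impossible ages
       .assign(email=lambda d: d["email"].str.strip().str.lower())  # normalize
)
print(cleaned)
```

Rules like these belong in an automated pipeline rather than a one-off script, so every dataset that reaches a model has passed the same checks.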
4. Adherence
It will be critical for AI models to adhere to existing regulations, such as the GDPR, in addition to new AI safety regulations and internal usage policies. This will help organizations avoid financial penalties and lasting reputational damage. It will also help to ensure that AI is being used in line with the highest ethical standards, driving trust amongst the communities they serve. As a result, it’s essential for organizations to have robust AI governance practices and training policies in place to ensure the technology is being used in line with these requirements.
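In code, one small adherence control might look like the sketch below: a check that strips any field not on an internal allow-list before data reaches a model, in the spirit of GDPR data minimization. The field names and the allow-list are hypothetical policy choices, not prescribed by any regulation.

```python
# Minimal sketch of a data-minimization gate in front of model training.
# The allow-list and record fields are illustrative assumptions.
ALLOWED_TRAINING_FIELDS = {"income", "loan_amount", "credit_history"}

def enforce_data_policy(record: dict) -> dict:
    """Drop any field not explicitly approved for model training."""
    disallowed = set(record) - ALLOWED_TRAINING_FIELDS
    if disallowed:
        print(f"Policy check: stripping disallowed fields {sorted(disallowed)}")
    return {k: v for k, v in record.items() if k in ALLOWED_TRAINING_FIELDS}

application = {
    "income": 42000,
    "loan_amount": 180000,
    "credit_history": "good",
    "full_name": "Jane Doe",   # personal data -- not approved for training
}
print(enforce_data_policy(application))
```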
5. Accountability
Organizations must strike a balance between innovation and accountability. At its core, this means human users must always take responsibility for any decisions made as a result of AI. As more organizations embrace AI systems, it will be vital that they recognize and uphold this responsibility. Accountability should extend throughout the business -- from front-line support staff to management -- so everyone must be equipped with the tools they need to stay in control.
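The sketch below, with hypothetical names throughout, shows one way to build that control in: an audit record that ties every AI-assisted decision to a named, accountable human who can override the model.

```python
# Minimal sketch of an accountability audit trail for AI-assisted decisions.
# Model name, fields, and the example entry are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    model_version: str        # which model produced the recommendation
    ai_recommendation: str    # what the AI suggested
    final_decision: str       # what was actually decided
    approved_by: str          # the accountable human, never the model
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

audit_log: list[DecisionRecord] = []
audit_log.append(DecisionRecord(
    model_version="loan-scorer-1.4",
    ai_recommendation="decline",
    final_decision="approve",          # the human overrode the AI
    approved_by="j.smith@example.com",
))
print(audit_log[0])
```

Keeping the override visible in the record reinforces the point above: the AI recommends, but a person remains answerable for the outcome.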
In 2024, AI will doubtless continue on its revolutionary trajectory, with organizations leading from the front. As they do so, it will be essential for them to embrace these five key design principles to ensure AI drives tangible and lasting value.
Arun 'Rak' Ramchandran is President & Global Head -- Consulting & GenAI Practice, Hi-Tech & Professional Services, Hexaware.