Managing AI risk: What are you waiting for?


Recent headlines have brought much-needed attention to questions surrounding algorithmic fairness, and regulators are taking notice. EU officials have put forth proposed language for aggressive new AI oversight. The Federal Trade Commission has published governance principles on the responsible use of AI, holding that existing law already empowers U.S. regulators to take corrective action. And a panel of AI experts has testified before Congress about the potential inequities of algorithmic decision systems.

Many argue that increased regulation could stifle innovation, putting some nations at a disadvantage in the global arms race for AI dominance. But if the alternative is to do nothing, we could be creating even bigger risks, threatening our fundamental principles of fairness and equality.

Pay attention, innovators! If we don’t do something now to mitigate the risks of algorithmic systems, machine learning and AI, we’ll inevitably hit an innovation speed bump in the very near future. If we obscure the conversation with talk about "black boxes" and impenetrable neural networks, regulators and the public are likely to push pause on AI innovation.

Perception vs. Transparency

AI has a hype problem. For years, software companies have been making inflated claims about artificial intelligence. Today, it remains a hot topic for corporate leaders -- useful in driving positive public perceptions, attracting and retaining top talent, and garnering coveted speaking slots at industry conferences.

The FTC’s response to all this? "Don’t exaggerate what your AI is doing."

They’re right. Excessive hype creates the perception that AI is much easier than it actually is, and that when left to its own devices, AI will create a better world for all of us.

Yes, AI is already delivering tremendous value, and yes, it shows great promise for the future, but if we don’t put some guardrails in place today, we’ll inevitably see a backlash -- and possibly an overcorrection.

Don’t Let Perfect Be the Enemy of Responsible

Based on all the marketing claims, you’d think most companies are using advanced neural networks and other cutting-edge technology in their AI. In fact, virtually everyone is still operating in first gear, using conventional machine learning approaches to drive consequential decisions that impact people’s lives.

It’s tempting to characterize these systems as "black box" decision-makers that cloak their thought processes behind an impenetrable wall of secrecy. While the mystique is understandable, it isn’t necessarily helpful. If we want AI innovation to succeed and thrive, we have to be willing to get beyond the "black box" characterization. We need transparency and accountability.
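
To make this concrete: much of the conventional machine learning in production today is far more inspectable than the "black box" framing suggests. Below is a minimal sketch in Python using scikit-learn -- the feature names and toy data are invented for illustration, not drawn from any real system -- showing how a simple credit-style model exposes its own reasoning:

```python
# A minimal, illustrative example: a conventional linear model whose
# decision logic can be read directly from its coefficients.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income_k", "debt_ratio", "years_employed"]  # hypothetical inputs
X = np.array([
    [60, 0.35, 4],
    [32, 0.61, 1],
    [85, 0.22, 9],
    [41, 0.55, 2],
])
y = np.array([1, 0, 1, 0])  # toy approve/deny outcomes

model = LogisticRegression().fit(X, y)

# Each coefficient shows how a feature pushes the decision -- something a
# reviewer can read, question and challenge.
for name, coef in zip(features, model.coef_[0]):
    print(f"{name:>15}: {coef:+.4f}")
```

Real systems demand far more rigor than this -- fairness testing, monitoring, documentation -- but the point stands: transparency is a choice, not a technical impossibility.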

There has never been a perfect software application, a product built without faults, or a human who makes only perfect decisions. We should not expect perfection from AI, and we should build controls that proactively plan for things going wrong.

Those at the forefront of innovation must also take the lead in voicing and addressing AI risks. We need to move beyond theoretical principles and demonstrate real investment in oversight and management of these systems. If we don’t, the regulators will do it for us, and there is real potential their approaches will overcorrect.

Start Showing Your Work

AI risks stem from people, process, data and technology. Managing those risks requires an approach to reviewing and verifying systems that connects these elements in a highly intentional way. It calls for a structured process to ensure that systems are well documented and can be clearly understood by technical and non-technical stakeholders alike. That business and technical context should form a verifiable evidence trail, one that begins at project inception and carries through every decision and every version of a model. Only then can independent, objective review take place.
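
What might such an evidence trail look like in practice? Here is a minimal sketch in plain Python; the field names and the simple hash-chaining scheme are illustrative assumptions, not an industry standard, but they show how each decision can be tied to an exact model version in a way that makes after-the-fact tampering detectable:

```python
# A minimal sketch of a verifiable decision log. Field names and the
# hash-chaining scheme are illustrative, not a standard.
import hashlib
import json
import time

def record_decision(log, model_version, inputs, output):
    prev_hash = log[-1]["hash"] if log else "genesis"
    entry = {
        "timestamp": time.time(),
        "model_version": model_version,  # ties the decision to an exact model
        "inputs": inputs,                # what the model actually saw
        "output": output,                # the consequential decision made
        "prev_hash": prev_hash,          # chains entries so edits are detectable
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry

audit_log = []
record_decision(audit_log, "credit-model-v1.3", {"income_k": 60}, "approve")
```

A reviewer can recompute each hash from an entry’s contents and its predecessor’s hash; any silent edit to the history breaks the chain.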

Short on internal expertise, or unsure where to start? Good news -- there are many emerging companies and voices eager to help, if you let them. Engage with consumer advocates. Seek out input from concerned citizens. Create clever ways to proactively surface problems, such as bug bounty programs. Educate your teams on the importance of doing it right. Explore ways to enable independent, objective validation of your systems. Incorporate your use of AI into your corporate social responsibility programs. Show stakeholders that you’re proactively addressing AI risk with meaningful oversight.

Help Regulators Be Wise

In his annual technology assessment for 2021, Benedict Evans posited that substantial new regulation of the software industry is inevitable. It’s happened with every other major technology since the first industrial revolution. For the software industry, a new wave of regulation is overdue.

Well-founded concerns about AI and ML are likely to accelerate that regulatory impetus and possibly make it more pronounced. If AI innovators act now, they still have the opportunity to prevent that regulatory pendulum from swinging too far past the point of equilibrium.

What can technology leaders do in the here and now?

First, be aware of the problem. The risks are real; if we don’t take the lead on responsible innovation, someone else will.

Second, don’t oversell your AI capabilities. Be intentional with your employees, investors and the general public. It’s time to power down the hype factory and get real.

Finally, start putting a multi-stakeholder governance program for your AI in place today. Work with the risk and compliance leaders in your organization. They need to consider how AI systems might negatively impact reputation, legal liability, regulatory compliance, and the bottom line. Enable these non-technical risk partners with visibility, collaborate with them on pilot programs, and work together to seize new opportunities as they emerge.

By proactively building transparency and governance around our AI systems today, innovators have an opportunity to take charge and forge their own path to the future. So, what are you waiting for? Get to it!


Anthony Habayeb is Co-founder and Chief Executive Officer, Monitaur. An AI/ML governance visionary and industry commentator, Anthony is on a mission to unlock the vast potential of intelligent systems to improve people’s lives. By helping organizations understand what they can do today, Anthony is guiding enterprises to build and deploy responsible AI and machine learning models that business leaders, regulators and consumers can trust. See Anthony’s newscast series on YouTube, and connect with him on LinkedIn.
