The Hitchhiker's Guide to Ethical AI

In the 1979 cult science-fiction novel The Hitchhiker's Guide to the Galaxy, an enormous supercomputer named Deep Thought took 7.5 million years to conclude that "The answer to the ultimate question of life, the universe and everything is 42."

Douglas Adams may not have meant it that way, but what he wrote in 1979 was a portent of why so many questions are now being asked about the practical use of AI in business. The recent explosion in AI technology has largely been driven by the availability of large quantities of data, the availability of sufficient computing power and the ever-increasing demand for data analytics to support business strategy. Those strategies may be aimed at greater efficiency through reduced operating costs, or at increased revenue through improved customer service and product availability.

More widely, the advent of AI capabilities has largely focused on areas that involve personal data, such as medical diagnosis, patient care, health insurance payments and fraud prevention. As a result, there is a call to arms to legislate and regulate the use of AI wherever personal confidential data is involved, and the possibility that an AI's decisions contain unintentional bias against an individual, or a group of individuals, only amplifies that concern and the call for regulation. This pathway to closer control of the use of AI will highlight the need for technology companies to demonstrate that they can trace back through the decision-making process. With that in mind, should Deep Thought conclude in 2019 that the meaning of life was indeed 42, the first question we should now ask is "show me the evidence and provide me with the analytical chain of decisions that got you to that answer." Enter the Age of Understandable AI.

AI is an amazingly powerful technology that can support our strategic thinking for generations to come. Its learning capabilities can give organizations the ability to predict outcomes far more accurately. The old saying that 'wrong or lucky' are the only real outcomes of forecasting tools could now be made irrelevant by the power of AI. But with this capability to exploit mass data comes a great responsibility: not only to use the data ethically and responsibly, but also to ensure that the outcomes do not contain an artificial or unnatural bias. Whilst we will undoubtedly be able to rely more and more on the answers provided by AI, simple acceptance that '42' must be right because the computer told us so is also a dangerous path. The organization using the technology has an equal responsibility to ensure predictions are routinely validated and challenged.

Equally, this burden will fall on both the technology provider and the organization using AI to ensure that these large data sets are managed securely and sensitively. Data synthesis can minimize the use of real data and limit the potential for leaks or misuse. A synthetic data set is created to match the real data set but, crucially, contains none of the actual data. So, for example, the census data from a city could be synthesized to closely resemble the actual demographics of the population, yet no individual record could be traced back to a real person. This process can apply to any analysis of personal confidential information, for example health care insurance, credit card buying trends or credit applications.
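To make the idea concrete, here is a minimal, hedged sketch of the principle (not Diveplane's actual method, and deliberately simplified): using pandas and NumPy, each column of a census-style table is resampled from its observed distribution, so the synthetic table mirrors the real table's statistics while no synthetic row corresponds to a real individual. The column names and figures are hypothetical.

```python
import numpy as np
import pandas as pd

def synthesize(real: pd.DataFrame, n_rows: int, seed: int = 0) -> pd.DataFrame:
    """Draw a synthetic table whose per-column distributions mirror `real`.

    Each column is sampled independently, so marginal statistics (age
    distribution, district frequencies) are preserved but cross-column
    correlations are not -- a deliberate simplification for illustration.
    """
    rng = np.random.default_rng(seed)
    synthetic = {}
    for col in real.columns:
        values = real[col].dropna()
        if pd.api.types.is_numeric_dtype(values):
            # Resample numeric columns from a normal fit to the mean/std.
            synthetic[col] = rng.normal(values.mean(), values.std(), n_rows)
        else:
            # Resample categorical columns in proportion to observed frequencies.
            freqs = values.value_counts(normalize=True)
            synthetic[col] = rng.choice(freqs.index, size=n_rows, p=freqs.values)
    return pd.DataFrame(synthetic)

# Hypothetical usage: a census-style table of age, income and district.
rng = np.random.default_rng(1)
real = pd.DataFrame({
    "age": rng.integers(18, 90, 5_000),
    "income": rng.normal(42_000, 9_000, 5_000),
    "district": rng.choice(["North", "South", "East"], 5_000),
})
fake = synthesize(real, n_rows=5_000)
print(fake.describe(include="all"))
```

Production-grade synthesis would also need to preserve relationships between columns and guard against rare combinations that could re-identify someone, but the privacy intent is the same: analysis runs on data that looks real without being real.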

When the AI platform is populated with synthetic data, it can present accurate, powerful, viable analysis while completely de-risking the potential for handling errors with the real data. Furthermore, the success of an AI should not be measured by how much data it can use but by how little it needs to make its predictions. Any AI should also be receptive to, and indeed encourage, human intervention and correction, and it should provide a sufficiently robust audit trail to satisfy any regulatory or moral oversight, demonstrating that the route to the answer was accurate, fair, balanced and free from any intentional or unintentional bias.
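As one hedged illustration of what such an audit trail might look like in practice (a hypothetical structure, not a standard or any vendor's feature), each prediction could be logged with its inputs, the model version and any human correction, so the chain of decisions can be replayed later.

```python
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone
from typing import Any, Optional

@dataclass
class AuditRecord:
    """One entry in a prediction audit trail (illustrative fields only)."""
    model_version: str
    inputs: dict
    prediction: Any
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())
    human_correction: Optional[Any] = None  # filled in if a reviewer overrides
    reviewer: Optional[str] = None

def log_prediction(record: AuditRecord, path: str = "audit_log.jsonl") -> None:
    # Append each record as one JSON line so the decision chain can be replayed.
    with open(path, "a") as fh:
        fh.write(json.dumps(asdict(record)) + "\n")

# Hypothetical usage: record the model's answer, then a human override.
rec = AuditRecord(model_version="demo-1.0",
                  inputs={"question": "life, the universe and everything"},
                  prediction=42)
rec.human_correction = "needs supporting evidence"
rec.reviewer = "analyst@example.com"
log_prediction(rec)
```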

When asked in a 2011 interview with the Independent why he chose 42, Adams said: "It was a joke. I sat at my desk, stared into the garden and thought 42 will do. I typed it out." It is essential for the future credibility and trustworthiness of AI and ML that answers cannot be perceived to be completely random and made up, much like Deep Thought's final answer. There must be clear, accountable trails of evidence to underpin the results. As technology companies climb over each other to release products under the guise of AI, they have a moral and ethical obligation to consider carefully how they are using data to represent a vision for the future, and to fully understand the outcomes and the journey taken to reach those answers.

Alan Cross is Chief Commercial Officer at Diveplane Corporation. He has 20+ years of experience in strategic software commercial and operations management, and a comprehensive background in fiscally sound business unit planning, turnaround/change management and building employee morale, with an intense focus on advising teams and founders on designing and improving business models, strategies, KPIs and financial modelling to ensure sustainable future success.

Photo Credit: VLADGRIN/Shutterstock

