Out of the shadows and into the light: Embracing responsible AI practices amid bias and hallucinations
The path to widespread AI is a bumpy one. While its potential to enhance consumer experiences and streamline business operations through personalization, autonomy, and decentralized reasoning is evident, the technology comes with inherent risks.
AI can produce conclusions that aren't true, spread misinformation, and, in some cases, perpetuate existing biases. This darker side of AI's impact can leave business leaders facing financial, legal, and reputational damage.
Lost in the Digital Dream
AI is prone to hallucinations: instances where a model draws conclusions that, while coherent, are detached from reality or from the input's context. Much like a human hallucination, AI can experience a digital mirage, blurring the lines between what's real and what's not.
For instance, an earlier version of ChatGPT once told a neuroscientist that the Golden Gate Bridge had been transported across Egypt in 2016. Such imaginative, unexpected outputs are a reminder that AI's conclusions are not immune to errors or misinterpretations.
But not all AI hallucinations will be this harmless. Businesses that rely on flawed AI outputs can be left with costly errors, and in critical sectors like healthcare this quickly becomes a matter of life or death. Imagine an AI system designed to detect heart conditions: if it hallucinates a heart-rhythm irregularity, the resulting misdiagnosis could trigger unnecessary medical interventions and endanger patient safety.
Problems also arise when AI models are trained on biased datasets. Amazon once used an AI-based recruitment tool to help hire employees. The tool had been trained on data reflecting historical and societal inequities, patterns that can persist even when explicit demographic attributes are removed. Far from identifying the best candidates, the AI systematically rejected female applicants' resumes, perpetuating existing inequities. The example highlights how AI systems not only reproduce real-world biases but amplify them, potentially worsening future outcomes. The absence of comprehensive AI regulation can exacerbate these mistakes, leaving businesses without clear directives on prevention.
As a result of hallucinations and biases, businesses can face lawsuits and fines for negligence or discrimination. Financial losses may arise from erroneous decisions and missed opportunities, while an untrustworthy AI model can drive customers away. These risks could lead organizations to deem AI too risky and abandon planned projects, forfeiting productivity gains and leaving staff to shoulder work the technology would have handled.
RAG to the rescue
Despite these inherent challenges, AI still offers great opportunities for businesses. With the right tools, models, and practices in place, organizations can effectively mitigate the pitfalls and harness the full potential of AI.
Techniques such as retrieval-augmented generation (RAG) help address issues of bias, fairness, and hallucination. RAG models incorporate retrieval mechanisms that give them access to vast amounts of diverse, relevant data. By drawing on a wide range of sources, these models mitigate the bias inherent in smaller or more limited datasets.
RAG models also excel at generating responses grounded in verifiable information, which can sharply reduce hallucinations: the model's inputs are enriched with context-specific information retrieved from reliable sources. This also means a general-purpose model can be adapted to specific tasks or use cases, giving users answers tailored to their situation. That is especially valuable for businesses that lack the time or budget to retrain AI models on domain-specific datasets.
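To make the retrieval step concrete, here is a minimal Python sketch of how a RAG pipeline grounds a prompt in retrieved passages. It is illustrative only: the embed() and build_prompt() names are placeholders, and a toy bag-of-words similarity stands in for a real embedding model and LLM call.

from collections import Counter
import math

DOCS = [
    "Equal employment opportunity law prohibits hiring discrimination.",
    "Updated HR policy: resumes are screened on skills, not demographics.",
    "The office cafeteria menu rotates weekly.",
]

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real system would use a trained model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    # Rank the corpus by similarity to the query and keep the top k passages.
    ranked = sorted(DOCS, key=lambda d: cosine(embed(query), embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    # Grounding the model in retrieved facts discourages hallucination.
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How should candidate resumes be evaluated?"))

The design point is that the knowledge lives in the retrieved documents rather than in the model's weights, so updating the corpus updates the answers without any retraining.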
In the recruiting example above, RAG can ground the model in current hiring guidelines, HR policies, and equal employment opportunity laws, helping ensure all candidates receive fair and equal treatment.
Beyond retrieving relevant information, businesses also benefit from RAG models' ability to generate natural responses. Interactions with these models are generally more conversational and user-friendly, which is particularly useful for customer-facing applications.
RAG models should form a key part of a modern data approach. To be effective, they need access to real-time data so that information is as fresh and accurate as possible. They should also be paired with an operational data store that holds information as high-dimensional mathematical vectors. The model converts a user's query into the same vector form, allowing the AI to surface relevant content even when the query doesn't precisely match the original terms or phrases.
With real-time data and vector-based databases, AI outputs stay current, reducing the risk that outdated information leads to hallucinations.
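As a rough illustration of that pairing, the sketch below implements a tiny in-memory vector store with real-time upserts. The class and its vectors are hypothetical, assuming numpy and precomputed embeddings; a production system would use a vector-enabled operational database with approximate-nearest-neighbor indexing.

import numpy as np

class VectorStore:
    """Minimal in-memory store mapping document ids to embedding vectors."""

    def __init__(self, dim: int):
        self.dim = dim
        self.ids: list[str] = []
        self.vectors = np.empty((0, dim))

    def upsert(self, doc_id: str, vector: np.ndarray) -> None:
        # New or updated documents become searchable immediately,
        # which is what keeps RAG answers grounded in fresh data.
        if doc_id in self.ids:
            self.vectors[self.ids.index(doc_id)] = vector
        else:
            self.ids.append(doc_id)
            self.vectors = np.vstack([self.vectors, vector])

    def query(self, vector: np.ndarray, k: int = 1) -> list[str]:
        # Cosine similarity surfaces semantically close documents even
        # when they share no exact wording with the query.
        sims = self.vectors @ vector / (
            np.linalg.norm(self.vectors, axis=1) * np.linalg.norm(vector) + 1e-9
        )
        return [self.ids[i] for i in np.argsort(-sims)[:k]]

store = VectorStore(dim=3)
store.upsert("policy-2023", np.array([0.1, 0.9, 0.2]))
store.upsert("policy-2024", np.array([0.2, 0.8, 0.3]))  # fresh revision, live at once
print(store.query(np.array([0.15, 0.85, 0.25])))

Because upserts take effect immediately, the retrieval layer never has to wait for a batch re-index, and answers reflect the latest policy revision rather than a stale snapshot.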
Heading towards a brighter future
As we embark on the journey toward AI's widespread adoption, organizations must be aware of the consequences of AI gone wrong. A comprehensive approach is crucial: it paves the way for ethical, fair, and secure AI systems that enhance productivity and personalization without propagating misinformation. By prioritizing responsible AI practices, such as employing RAG models, businesses can ensure AI remains a positive force in society rather than something to be feared.
Rahul Pradhan is Vice President, Product and Strategy at Couchbase.