Understanding the risks of integrating GenAI in GRC programs: A framework for compliance teams

NIST’s recently proposed AI RMF Generative AI Profile aims to help organizations understand AI risks, both internal and those introduced by third-party vendors. While GenAI adoption is rising across sectors, compliance managers remain cautious about incorporating AI into their own programs. Despite the hype around AI, a survey conducted by The Wall Street Journal of approximately 300 compliance professionals found that only one-third currently incorporate GenAI within their compliance programs.

Collaborative efforts between NIST and prominent organizations, including OpenAI and Microsoft, are underway to expedite the development of standards and recommendations for responsible AI deployment. As organizations grapple with implementing GenAI themselves, they also need to understand how third parties are integrating the technology in order to evaluate corporate risk and, in turn, strengthen regulatory and compliance reporting.

Incorporating AI

Organizations looking to streamline processes and save time face the question of how to incorporate AI, and the path is fraught with challenges. As artificial intelligence (AI) permeates workflows in enterprises, federal agencies, and state, local, and education (SLED) organizations, myriad questions arise, creating a sense of uncertainty. Each decision to adopt AI triggers further inquiries: How do we integrate it seamlessly into existing processes? What role does GenAI play in the transition? While the potential for efficiency gains and added value is evident, uncertainties remain.

A significant concern revolves around reliance on “older information,” as the current free version of ChatGPT generally relies on training data that is more than two years old. As data ages, its relevance and accuracy diminish, potentially prolonging review times and leading to incorrect conclusions. Despite these hurdles, demand to leverage GenAI is burgeoning. Many organizations, however, are eager to embrace AI without fully grasping its implications, underscoring the need for a comprehensive evaluation of benefits and pitfalls before adopting the technology wholesale.

Compliance Reporting Challenges

Compliance departments encounter numerous challenges when reporting on AI. Organizations are still figuring out how their employees are using AI, so processes and policies have often not been updated to accommodate that usage. Compliance teams also find it difficult to track AI-related risks and therefore cannot ensure adherence to regulatory requirements. And they must constantly revisit and update controls to keep pace with AI integration efforts.

The integration of GenAI presents several hurdles for organizations; a few are listed below:

  1. Risk Assessment and Compliance

Incorporating GenAI raises concerns about risk management and compliance. From older information influencing decisions to the added review time associated with AI-generated outputs, organizations face the daunting task of ensuring compliance with existing frameworks like the NIST RMF (Risk Management Framework) and industry-specific standards such as PCI DSS (Payment Card Industry Data Security Standard) and HIPAA (Health Insurance Portability and Accountability Act).

  2. Third-Party Risk Management

Organizations must not only monitor their internal use of AI but also assess the AI practices of third-party vendors. This involves tracking vendor profiles, evaluating compliance, and ensuring transparency in vendors’ AI systems. It is important for organizations to assess the risk posed by third-party vendors’ AI programs: consider setting up a risk-scoring assessment that evaluates each vendor’s risk and tiers vendors based on the score. This establishes a baseline and helps compliance teams narrow their focus to the vendors that carry the most risk.
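As a rough illustration of what such a scoring assessment might look like, the Python sketch below weights a handful of assessment criteria and maps the total to a tier. The criteria, weights, and tier cutoffs are hypothetical placeholders rather than a prescribed methodology; a real program would derive them from the organization’s risk appetite and framework obligations.

# Minimal vendor AI risk-scoring sketch. Criteria, weights, and tier
# cutoffs are illustrative placeholders, not a standard methodology.

# Each criterion is scored 1 (low risk) to 5 (high risk).
WEIGHTS = {
    "data_sensitivity": 0.35,     # what data the vendor's AI touches
    "model_transparency": 0.25,   # visibility into how outputs are produced
    "regulatory_exposure": 0.25,  # PCI DSS, HIPAA, etc. in scope?
    "governance_gaps": 0.15,      # gaps in the vendor's own AI policies and controls
}

def score_vendor(answers: dict[str, int]) -> float:
    """Weighted average of 1-5 criterion scores."""
    return sum(WEIGHTS[k] * answers[k] for k in WEIGHTS)

def tier(score: float) -> str:
    """Map a score to a review tier (cutoffs are hypothetical)."""
    if score >= 4.0:
        return "Tier 1 - high risk"
    if score >= 2.5:
        return "Tier 2 - moderate risk"
    return "Tier 3 - low risk"

vendors = {
    "Acme Analytics": {"data_sensitivity": 5, "model_transparency": 4,
                       "regulatory_exposure": 5, "governance_gaps": 2},
    "HelpDesk Bot Co": {"data_sensitivity": 2, "model_transparency": 3,
                        "regulatory_exposure": 1, "governance_gaps": 4},
}

for name, answers in vendors.items():
    s = score_vendor(answers)
    print(f"{name}: {s:.2f} -> {tier(s)}")

The value of even a simple model like this is consistency: every vendor is scored against the same criteria, and the resulting tiers give compliance teams a defensible basis for deciding where to spend review time.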

In response to these challenges, frameworks such as the NIST AI RMF offer invaluable guidance.

First Steps to Effectively Track GenAI

Organizations need to focus on monitoring vendor practices and ensuring alignment with organizational standards. They should also consider a TPRM (Third-Party Risk Management) solution to streamline vendor assessments and categorize vendors into risk tiers; those tiers can then drive how often each vendor is reassessed, as sketched below. Finally, regularly evaluating framework changes and updates, including to the NIST RMF, helps organizations adapt nimbly to evolving AI technologies and their associated risks.
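To make those tiers actionable, one lightweight pattern is to attach a reassessment cadence to each tier and flag vendors whose last review has lapsed. A TPRM platform would normally handle this scheduling for you; the intervals below are hypothetical, but the underlying logic is roughly:

from datetime import date, timedelta

# Hypothetical reassessment cadence per risk tier; an actual TPRM
# solution would let you configure these to match your policy.
REVIEW_INTERVAL = {
    "Tier 1": timedelta(days=90),   # high risk: quarterly
    "Tier 2": timedelta(days=180),  # moderate risk: semiannual
    "Tier 3": timedelta(days=365),  # low risk: annual
}

def overdue_vendors(vendors: list[dict], today: date) -> list[str]:
    """Return the names of vendors whose tier-based review window has lapsed."""
    lapsed = []
    for v in vendors:
        due = v["last_review"] + REVIEW_INTERVAL[v["tier"]]
        if today > due:
            lapsed.append(v["name"])
    return lapsed

vendors = [
    {"name": "Acme Analytics", "tier": "Tier 1", "last_review": date(2024, 1, 5)},
    {"name": "HelpDesk Bot Co", "tier": "Tier 3", "last_review": date(2023, 9, 1)},
]

print(overdue_vendors(vendors, today=date(2024, 6, 1)))  # ['Acme Analytics']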

Addressing Compliance Challenges: Where AI Can Help

Compliance departments face the hurdles outlined above when reporting on AI: outdated processes and policies, difficulty tracking AI-related risks, and the constant need to update controls. Consulting partners like Cential have developed a ChatGPT integration that leverages AI for risk and compliance processes, and organizations are using AI logic to establish better frameworks. For example, a large retailer is implementing a framework that enables self-service capabilities, using AI logic to guide users submitting review requests.
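As a minimal sketch of the general pattern (not Cential’s actual integration or the retailer’s implementation), the snippet below uses the OpenAI Python SDK to produce a first-pass triage of an incoming review request. The model name and prompt here are assumptions; the output would feed a human review workflow rather than replace it.

# Minimal sketch of LLM-assisted triage for a review request; the model
# name and prompt are assumptions, not a vendor's actual integration.
# Requires `pip install openai` and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a GRC triage assistant. Given a review request, suggest a "
    "risk category (low/medium/high), the frameworks likely in scope "
    "(e.g., PCI DSS, HIPAA), and a next step for the requester. Flag "
    "anything you are unsure about for human review."
)

def triage_request(request_text: str) -> str:
    """Return a first-pass triage; a compliance analyst still reviews it."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute your own
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": request_text},
        ],
    )
    return response.choices[0].message.content

print(triage_request(
    "Marketing wants to pilot a third-party GenAI chatbot that will "
    "see customer names and purchase history."
))

Keeping a human in the loop matters here for the reason noted earlier: AI-generated outputs can lengthen review times if teams must verify everything from scratch, so triage like this should accelerate compliance judgment, not substitute for it.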

The Path Forward

As organizations navigate the complexities of GenAI integration, it is imperative to articulate clear objectives, prioritize automation, implement robust risk management practices, and leverage purpose-built solutions. With the right frameworks and tools, organizations can confidently embrace AI’s transformative potential while mitigating the associated risks, paving the way for smooth integration and greater operational efficiency.


Jeff Reid is vice president of sales engineering at Onspring. He has nearly a decade of experience as a governance, risk, and compliance (GRC) strategist, business analyst, architect, developer, and quality assurance analyst, helping Fortune 500 companies implement GRC programs.
