How organizations can safely adopt generative AI [Q&A]

Artificial intelligence

Generative AI tools like ChatGPT have been in the news a lot recently. While it offers many benefits it also brings risks which have led to some organizations banning its use by their staff.

However, the pace of development means that this is unlikely to be a viable approach in the long term. We talked to Randy Lariar, practice director of big data, AI and analytics at Optiv, to discover why he believes organizations need to embrace the new technology and shift their focus from preventing its use in the workplace to adopting it safely and securely.

BN: What steps should enterprises take before introducing generative AI tools?

RL: It is important to recognize the transformative potential of AI, and the heightening regulatory and operational risks mean enterprises need to have a plan. Leading organizations are drafting AI policies, governance processes, staffing plans and technology infrastructure to be ready for the surge in demand for AI capabilities and associated risk. Important steps include:

  1. Understand AI: Begin by gaining a comprehensive understanding of AI, specifically generative models, and their implications for your business. This includes grasping potential benefits, risks, and the ways these models are beginning to be incorporated into the technology stack. 
  2. Assess Current Capabilities: Review your existing technological infrastructure and skills base. Identify gaps that could hinder AI implementation or lead to enhanced risks to develop a strategy to address them.
  3. Develop AI Policies: Establish clear enterprise AI policies that define guidelines for its usage and protection within your organization. These guidelines should cover topics like approved use cases, ethics, data handling, privacy, legality, and regulatory impacts of AI-generated content.
  4. Establish Governance Processes: Create governance processes to oversee AI deployment and ensure compliance with internal policies and external regulations.
  5. Plan Resource Allocation: Consider staffing and resourcing plans to support AI integration. This may include hiring AI specialists, engaging with consulting firms, developing staff training, or investing in new technology.
  6. Prepare for Risks: Generative AI can present many unique risks, such as IP leakage, reputational damage, and operational issues. Risk management strategies should be included in all phases of your AI plan.
  7. Manage Data Effectively: Ensure that your data management systems can support AI demands, including data quality, privacy, and security.

BN: How important is it to have monitoring and control procedures in place?

RL: Monitoring is critical for AI because the inner workings of the model are very hard to trace. This makes it very hard to explain precisely what inputs drive AI content creation or decision making. Monitoring and logging of AI inputs and outputs is critical to understand what people are doing with your AI and how it is responding. Monitoring and logging additionally allow you to 'threat hunt' your usage to detect patterns of misuse or risk that can be mitigated through enterprise controls. Monitoring also plays a critical role in model performance optimization and ensuring AI ethics and fairness.

As with any risk, AI introduces a need for controls that can reliably reduce the likelihood of certain bad things happening. This can include traditional cyber and risk controls that harden the entire AI infrastructure and protect from accidental or malicious data loss. It also will need to consider new forms of risk such as 'prompt injection' and AI agents performing autonomous tasks outside of the scope of their design. Strong guardrails are a necessity to enable teams to seize many of the opportunities of AI without exposing the firm to significant new risk.

BN: What is the potential for overlap with other approaches like DevSecOps?

RL: DevSecOps plays a critical role in securing the lifecycle of software development that is an ever-growing important part of most businesses. There are certainly overlapping controls and considerations within the software development world on at least two fronts:

First, AI CoPilot capabilities are showing tremendous value for development teams to accelerate the pace at which they write code. These tools can introduce new vulnerabilities or bad practices – so while they accelerate progress it does not replace the importance of human review of code.

Second, AI CoPilots and human developers are both fallible. Monitoring code for known vulnerabilities, automating the build process to include security by default, and limiting the risk of supply-chain type vulnerabilities in code libraries matters whether you are developing AI models or any other kind of technology.

BN: Why do you need to bring your users along on the journey and how can you ensure this?

RL: Preparing for generative AI in the enterprise is more about people than technology. ChatGPT has done a lot to democratize the capabilities of AI models, which had been available for data scientists and developers for some time. As human language rather than code becomes the interface to interact with AI models, we must make sure that users and stakeholders are a part of the journey. This starts by having clear policies and frameworks in place for what is and is not acceptable usage of various AI tools. Then, it continues with training and expectation setting for all employees to understand the risks and expectations for AI.

Once these items are communicated, users are critical to advanced applications of AI including feedback and training loops, as well as user-centric design. And -- just as with phishing and cybersecurity best practices, humans will be the first line of defense against AI-related risks. We should support and enable them while also preparing strategies for when attackers get through.

BN: The pace of AI development is rapid, how can you avoid being left behind?

RL: It is easy to get caught up in the hype of this space, but it is also important to recognize that change has been a constant in business and IT for a long time. Strategies, frameworks, and plans help to understand the world as it exists today and have a perspective on what actions to take as things change. A strong AI governance function can help to navigate how new AI capabilities are added to the enterprise and what controls need to be in place before they do.

It is also important to encourage innovation and experimentation in risk-managed ways. There's a lot about these techniques that can be learned and tried without making foundational changes to long-standing business processes or creating massive risks. But as with adopting any kind of brand new technology, there are a few guidelines to consider:

  • Only putting in data that you would not mind being shared broadly
  • Keeping humans in the loop of automations using AI
  • Communicating with all stakeholders when AI is involved the creation of content or insights

Lastly, it is better to have some experience and an in-progress plan than wait on the sidelines. Investors and boards are asking for more AI-driven businesses, so organizations should be prepared. If you ignore AI, your competition may be able to deliver superior goods or services at lower prices someday soon. The market forces at work will ensure that AI is top of mind for business leaders for years to come. Embracing the new capabilities, while minding the risks, is the best strategy to avoid being left behind.

Image creditAlienCat/depositphotos.com

Comments are closed.

© 1998-2024 BetaNews, Inc. All Rights Reserved. Privacy Policy - Cookie Policy.