Embracing AI -- safely

Any CTO or CIO who has been in the role for even a short time has likely lived through some novel technology that seemed to revolutionize the industry overnight. In my 25+ years of experience -- most of them at technology companies focused on simplifying financial transactions -- I’ve witnessed several of these disruptive software shifts. Each was discussed at its debut as though it had world-changing implications… and then quickly became the norm.

AI’s trajectory from esoteric industry chatter to dominant mainstream conversation has been stratospheric. Cloud software, by contrast, generated industry buzz for years before it became nearly ubiquitous (and it still hasn’t reached full adoption across enterprise technology systems). It seems to me that AI has secured a foothold -- especially in pop culture -- while still in relatively early development.

While all new tech needs to be approached carefully by CTOs and CIOs, I think AI’s quick ascension in tech and in culture more broadly demands particular prudence. That said, taking too long to figure out how to leverage AI could put a company woefully behind the curve. Simply put, my opinion is that if CTOs and CIOs aren’t actively figuring out how to bring AI into their businesses safely, securely, and legally, they’re on a path to obsolescence. I believe AI will continue to reshape business in the decades to come; if you don't move now, you'll have to play catch-up later.

Prioritizing Data Security

AI can be seen as a sort of racecar: incredibly fast and efficient, but useless without fuel. For AI, data is the fuel. It’s therefore tempting to use AI to analyze huge swaths of company data quickly.

But when it comes to proprietary data, there’s no such thing as too careful. There are many AI platforms on the market, each working slightly differently; without a close and thorough reading of a particular platform’s terms of service, you can easily get into trouble. These platforms come in open source, free, and commercial versions, and a lot of rudimentary experimentation understandably happens on the free tiers. But with a free AI platform, you can’t be sure whether you’re relinquishing the data you feed it unless you thoroughly review the terms and end-user license agreement (EULA). That becomes a problem if the AI retains the data it’s given and uses it to continue to "learn."

It’s not hard to imagine a well-meaning employee creating a free account on an AI platform and pumping it full of company information in the hope of streamlining various data analysis processes. But unless that employee is certain the provider deletes the data after analysis, they’re effectively giving away potentially sensitive company information.

This is why it’s vitally important for a CIO to know who has access to what proprietary data and which AI tools are interacting with company information. Organizations may want to establish an internal policy that limits -- or outright prohibits -- the use of company information and assets with AI tools without explicit approval from a cross-functional governance team.

Compliance Considerations (and the Board)

At my company, EngageSmart, we work with two of the most heavily regulated industries in operation: financial services (payments) and healthcare. This means that any decisions we make around our use of AI must comply with a host of strict government regulations. It is imperative that organizations in these and similarly regulated industries keep compliance a core focus when designing their AI policies.

Furthermore, EngageSmart is a public company, accountable to a board and shareholders. Before we can make any lasting decisions around AI, we have to determine what shareholders need to know from a disclosure perspective. These are the kinds of questions that arise with any new tech, but AI’s rapid rise has amped up the urgency of resolving them.

Consider a 'Sandbox'

Again, AI’s potential and potency are already evident, and any business leader is going to want to embrace such a powerful tool. So while it’s imperative to proceed with caution, that prudence shouldn’t stifle innovation.

That’s why some IT leaders have ventured to create secure, contained "sandboxes" for experimentation. These spaces should be highly controlled, with access restricted to a select few. Ideally, they’re created in explicit agreement with the AI providers, with terms stated very clearly (and with the blessing of all legal departments). Moreover, a trusted cloud partner can help monitor the entire sandbox to prevent anything sensitive from escaping its borders.

Having a contained space for experimentation is beneficial not just for innovation but for crafting company policy regarding AI. It’s important that the people responsible for devising and implementing AI policy come from more than just the tech side of operations. In my opinion, such a policy should be crafted by a working group with representatives from legal, tech, and compliance -- and maybe even from HR. Though AI is maturing and becoming more polished at a breathtaking rate, I think we’ll find that the tech is still in its infancy. That is to say, there hasn’t yet been enough time for the potential risks -- and benefits -- of AI to fully play out.

AI is one of the most exciting technological advancements I’ve encountered in my career, and I am more than optimistic about how it will continue to develop. But despite its meteoric rise and exponential refinement, it behooves everyone -- and especially CTOs and tech leadership -- to take our time and be cautious in how we work to realize its potential. AI can only run with human guidance and data. Let’s be sure to guide it responsibly.


Frank Laura is Chief Technology Officer at EngageSmart.


© 1998-2024 BetaNews, Inc. All Rights Reserved.