ChatGPT one year on: Why IT departments are scrambling to keep up


We’re nearly one year on from ChatGPT bursting onto the scene. In a technology world full of hype, it has been truly disruptive and has permanently changed the way we work. It has also left IT departments scrambling to keep up. What are the risks of using AI? Can we trust these apps with our data? Should we ban them altogether, or wait and see? And if we ban them, do we risk being left behind as other companies innovate?

The 'wild west' of nearly 10,000 generative AI apps on the market

ChatGPT gets the headlines, but right now there are around 9,600 generative AI apps available, promising to help employees do everything from writing annual reports to building code for critical parts of their business. This ecosystem is growing by around 1,000 apps every month, with every reason to expect that pace to accelerate, particularly with OpenAI’s decision to launch its GPT Store later this month. In addition, existing SaaS apps already in the enterprise are rolling out AI features with a variety of data handling policies.

Right now our concerns should not be focused solely on ChatGPT itself, which is relatively well protected, but on the huge array of third-party apps which are not. We’re seeing something of a ‘wild west’ for generative AI at the moment. We’ve assessed the most popular of these apps and found that many do not meet basic security standards. Many have no data policy at all, and where one exists it is often unclear. Firms largely won’t know where their data will be held, how long it will be retained, how it will be secured or how it will be used. Some apps aimed at the accounting profession, for example, encourage the upload of corporate CSV files to help compile and write annual reports. Without safeguards in place, firms are at real risk of breaching regulations such as GDPR, HIPAA and PCI DSS, as well as handing their company secrets to cybercriminals or nation states.

Block or allow AI apps? Either way there is a risk

So what should companies do? The staggering growth of the generative AI ecosystem means that 74 percent of firms currently don’t have an AI policy in place. Among those that do, some of the world’s largest and most tech-savvy organizations, such as Apple, have blocked the use of ChatGPT and of GitHub Copilot, which helps developers write code.

Blocking is an understandable position at the moment, since most firms don’t understand the risks well enough and lack the resources or expertise to get on top of the problem. However, it’s also an unsustainable one, since the productivity and innovation gains that firms are making from AI are already becoming clear. Then there’s the risk of ‘shadow AI’: even if firms have a policy in place, employees might (deliberately or unintentionally) ignore it. Just as we saw with employees using their own devices for work, if something makes their lives easier or more productive then it’s likely they’ll be using it before their employer can enforce a policy on them.

The middle ground

There needs to be a middle way whereby organizations can use AI safely. This means having a robust policy in place which allows the most secure and useful AI apps but blocks access to those that fall short, at least for teams handling the most sensitive data. This is easier said than done given the wealth of apps out there and the variety of job roles they target: a niche design app that lets a web developer overcome challenges that were previously hard to solve, for instance, now needs to be considered too. Longer term, vetting of this huge app ecosystem can be semi-automated with help from security providers. For now, best practice should be to consider the following (a simple way to codify these checks is sketched after the list):

  • What data is the app asking for? Confidential IP or customer data should never be put into an AI app. Ensure employees are reminded of this.
  • What data policy does the application have in place? As discussed above, there are key questions about how data will be retained, used and protected. Steer clear of apps where this is not explicitly stated, and be wary of breaching regulations.
  • How useful is the app to the organization? Security-conscious organizations will want to prioritize access to apps that truly bring business benefits.
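
To make this more concrete, here is a minimal, hypothetical sketch in Python of how those three checks might be codified into a simple allow/review/block decision. The app names, fields and scoring thresholds are illustrative assumptions rather than a description of any real vetting tool.

from dataclasses import dataclass

# Hypothetical record of a generative AI app under review.
# Field names and example entries are illustrative only.
@dataclass
class AIAppReview:
    name: str
    requests_sensitive_data: bool  # asks for confidential IP or customer data?
    has_clear_data_policy: bool    # retention, usage and protection explicitly stated?
    business_value: int            # rough 1-5 score of usefulness to the organization

def decide(app: AIAppReview, sensitive_team: bool = False) -> str:
    """Map the three checklist questions onto an allow/review/block decision."""
    if app.requests_sensitive_data or not app.has_clear_data_policy:
        return "block"
    if sensitive_team and app.business_value < 4:
        # Teams handling the most sensitive data only get high-value, vetted apps.
        return "review"
    return "allow"

if __name__ == "__main__":
    apps = [
        AIAppReview("csv-report-writer", True, False, 3),
        AIAppReview("design-assistant", False, True, 5),
    ]
    for app in apps:
        print(app.name, "->", decide(app, sensitive_team=True))

In practice the inputs would come from a vendor questionnaire or a security provider’s app catalogue, and the thresholds would reflect each organization’s own risk appetite.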

Not everything AI-related is necessarily a security risk, but what we are seeing right now is a classic case of AI advancing so fast that everyone, including governments, regulators, organizations and even AI researchers themselves, is scrambling to catch up. It’s vital that we strike a balance: harnessing the powerful steps forward AI has made in the last year without putting companies at risk.


Alastair Paterson is the CEO and co-founder of Harmonic Security. Prior to this he co-founded and was CEO of the cyber security company Digital Shadows from its inception in 2011 until its acquisition by ReliaQuest/KKR for $160m in July 2022. Alastair led the company to become an international, industry-recognised leader in threat intelligence and digital risk protection.

