Accepting generative AI into enterprise software development [Q&A]

The development community has been quick to embrace generative AI for its productivity potential. But at the same time it raises concerns for enterprises around IP/ownership, security, compliance and more.

We talked to Steve Martinelli, director of developer advocacy and community at Equinix, about the similarities between generative AI's acceptance journey and that of open-source software, and how we might tackle the unique challenges GenAI presents.

BN: Why do you think the developer community has so openly embraced generative AI?

SM: The biggest reasons developers have embraced generative AI are ease of use and efficiency. The barrier to entry for getting acquainted with these new technologies is low. They can simply describe a problem they're working on in natural language and get a solution -- complete with code -- almost immediately. We've seen indicators that, as ChatGPT and Copilot became widely available, Stack Overflow saw a significant drop in traffic. It was natural for Stack Overflow to respond by releasing its own AI tool, one that provides information on where the code was sourced.

While some developers may still be hesitant to embrace generative AI, the reality is that developers have long been conflicted about whether to write the functionality they need themselves or copy it from an open-source library. Most likely, someone has already run into the problem they are encountering and can offer an existing solution. As long as they can verify the code's documentation and trust the source, using existing code is usually the easiest and most efficient way to get the job done.

Using generative AI tools allows developers to focus on their bigger projects and provide more value to the business. They're able to use AI as a kick starter, letting it draft, ideally, 85 percent of the code needed for common functions and then delivering the last 15 percent themselves.

BN: Can you outline some of the risks that developers may not be considering in their generative AI use?

SM: Because these AI tools are so easy to use, they could lead developers to treat them as a crutch to lean on, rather than a guide to work with. For example, if a developer typically codes in Java and they need something written in Python, they could ask generative AI to 'translate' the code they've written or draft something entirely new. While this seems helpful and easy enough, if a developer is not familiar with the language of the generated code, they risk not being able to review it thoroughly and ensure everything is working properly. The output will not make you smarter. You still need to put in the work.

One of the other risks associated with using generative AI to code is trust. Developers have to be able to trust the output, which means knowing what material the AI model was trained on or being able to check the output for bugs or other vulnerabilities. In the same vein, developers and their organizations face the risk of licensing issues -- how was the model trained, whose code was it, are they actually allowed to train the model with that code?

BN: What advice would you offer organizations for overcoming these challenges?

SM: At the very least, your organization should have an AI strategy in place to help prevent major issues like trusting the wrong AI models, or bugs and other vulnerabilities ending up in code. And while we may not encounter any major problems from the use of AI in developer workloads in the short term, I expect that in the medium term -- maybe 18 months from now -- we may start to see the effects of generative AI code development and need to go back and correct them.

When code is used for internal purposes, the scope of threat and risk is fairly self-contained, compared with code developed to be sold as software or services. That raises the question for AI-developed code: what is the license for that code? This brings us back to open source, which, until recently, tended to have more permissive licenses.

Developers and their managers need to acknowledge that these tools WILL be used. Developers should make it part of their documentation process to be transparent about what code was AI generated, what tool was used, what the prompt was, etc. In the event that issues do arise in the future, this documentation makes it easy to go back and identify where the issue started.
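As a minimal sketch of what that kind of provenance note might look like in practice -- the header fields, the example function, and the choice of Python here are illustrative assumptions, not a standard that Martinelli or Equinix prescribes:

```python
# Hypothetical provenance header for a block of AI-assisted code.
# Field names are illustrative only; adapt them to your own documentation process.
#
# --- AI-assisted code begins ---
# Tool:     <the AI assistant used>
# Model:    <model name/version and date of generation>
# Prompt:   "Write a function that removes duplicates from a list
#            while preserving the original order."
# Reviewed: <developer who reviewed and tested the output>
def dedupe_preserving_order(items):
    """Return a new list with duplicates removed, keeping first occurrences."""
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result
# --- AI-assisted code ends ---

if __name__ == "__main__":
    print(dedupe_preserving_order([3, 1, 3, 2, 1]))  # prints [3, 1, 2]
```

If a header like this travels with the generated code (or is captured in commit messages), tracing an issue back to a specific tool and prompt later becomes a matter of searching the codebase rather than reconstructing history.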

BN: Do you expect to see organizations moving away from public AI offerings and into private environments in 2024 as a result of these challenges?

SM: Yes. To overcome the licensing issues and the security risks associated with public generative AI, organizations are going to invest in private AI tools and will push for their employees to use those instead of the public offerings.

While this is good for the organization's reputation (and it'll keep the legal team happy), developers are going to find ways to work around it, because the private offerings likely won't have the same diversity of training data and the output may not be as good initially.

BN: How is the acceptance of generative AI in enterprise different from, and similar to, the acceptance of open-source software?

SM: Both generative AI and open-source software are major tools in the developer toolbox. The biggest difference is that open source provides accountability and transparency. You know exactly who provided what, when, and where. Generative AI may get to that level of transparency eventually, but for now, it’s quick, easy, and good enough.

A good analogy is to imagine you're baking a cake. Before broad adoption of open source, developers would have to craft a recipe to bake a cake from scratch. They needed to determine where to source ingredients, the proper ratios for those ingredients, methods for mixing, baking times and so on, before finally ending up with a cake. With open-source software, it was like baking a cake with access to a fully stocked grocery store and a recipe in hand -- you know how much of each ingredient you need, and you have all the right steps, but you're still doing a lot of the work yourself. With generative AI, it's like buying a pre-packaged mug cake mix. All you need to do is add water and pop it in the microwave. Is it cake, and is it available quickly and easily? Yes, but it's not necessarily as good as the cake you would've put more time and effort into.

BN: Do you see infrastructure needs changing based on developers' increased use of generative AI? If so, how?

SM: Infrastructure needs will absolutely change based on developers' use of generative AI, but a number of factors will determine to what extent. It will depend on how they're deploying their private AI -- whether they're using an on-prem solution or not, where they want data to reside, and whether they're training models or just running inference. All of these will shape what goes into the infrastructure -- from the chips to the hardware and cooling required to run the workloads.


