Canonical and NVIDIA work to make AI more accessible in the enterprise


In 2018, OpenAI reported that the amount of computing power used in the largest AI training runs had been doubling every 3.4 months since 2012. Over the same period, the volume of data being generated grew just as dramatically.
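For a sense of scale, a quick back-of-the-envelope calculation (ours, not from the OpenAI report) shows what a 3.4-month doubling time implies:

```python
# Rough arithmetic implied by a 3.4-month doubling time
# (illustrative only; the 3.4-month figure itself is OpenAI's).
doubling_time_months = 3.4

doublings_per_year = 12 / doubling_time_months  # ~3.5 doublings a year
annual_growth = 2 ** doublings_per_year         # ~11.5x more compute per year

print(f"{doublings_per_year:.1f} doublings/year -> ~{annual_growth:.1f}x annual growth")
```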

Growth on that scale means traditional, general-purpose enterprise infrastructure can neither deliver the necessary computing power nor handle the petabytes of data needed to train accurate AI models. Instead, enterprises need dedicated hardware designed for AI workloads.

Step forward a new collaboration between Canonical and NVIDIA that aims to accelerate at-scale AI deployments and make open-source software accessible on hardware optimized for AI training.

Charmed Kubeflow is now certified as part of the NVIDIA DGX-Ready Software program. Kubeflow is an open-source, end-to-end MLOps platform that runs on top of Kubernetes. It's designed to automate machine learning workflows, creating a reliable application layer where models can be moved to production.
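To illustrate the kind of workflow automation Kubeflow handles, here is a minimal pipeline sketch using the Kubeflow Pipelines SDK (kfp v2); the component bodies and the pipeline name are hypothetical placeholders:

```python
from kfp import compiler, dsl


@dsl.component  # each component runs in its own container on the cluster
def train_model(epochs: int) -> str:
    # Placeholder training step; a real component would fit and export a model.
    return f"model trained for {epochs} epochs"


@dsl.component
def evaluate_model(model_info: str):
    print(f"Evaluating: {model_info}")


@dsl.pipeline(name="demo-training-pipeline")  # hypothetical pipeline name
def training_pipeline(epochs: int = 10):
    # Kubeflow infers the dependency graph from how outputs feed inputs.
    train_task = train_model(epochs=epochs)
    evaluate_model(model_info=train_task.output)


if __name__ == "__main__":
    # Compile to a YAML spec that can be uploaded to a Kubeflow Pipelines endpoint.
    compiler.Compiler().compile(training_pipeline, "pipeline.yaml")
```

Once a spec like this is uploaded, Kubeflow schedules each step as a containerized workload on the underlying Kubernetes cluster.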

The platform ships with a bundle of tools that includes KServe and Knative, so models can be served for inference regardless of the ML framework used. Charmed Kubeflow can also be combined with AI tools and frameworks such as NVIDIA Triton Inference Server to strengthen the stack's model serving.
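For a sense of what serving through Triton looks like from the client side, here is a sketch using NVIDIA's tritonclient package; the endpoint, model name, and tensor names below are placeholders that depend entirely on your deployment:

```python
import numpy as np
import tritonclient.http as httpclient  # pip install tritonclient[http]

# Placeholders: substitute your Triton endpoint and model details.
TRITON_URL = "localhost:8000"
MODEL_NAME = "my_model"      # hypothetical model name
INPUT_NAME = "input__0"      # defined by the model's config.pbtxt
OUTPUT_NAME = "output__0"    # likewise model-specific

client = httpclient.InferenceServerClient(url=TRITON_URL)

# Build a request carrying one FP32 tensor of dummy data.
data = np.random.rand(1, 4).astype(np.float32)
infer_input = httpclient.InferInput(INPUT_NAME, list(data.shape), "FP32")
infer_input.set_data_from_numpy(data)

# Send the request and read the output tensor back as a NumPy array.
response = client.infer(MODEL_NAME, inputs=[infer_input])
print(response.as_numpy(OUTPUT_NAME))
```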

NVIDIA DGX systems are purpose-built for enterprise AI use cases. These platforms feature NVIDIA Tensor Core GPUs, which outperform traditional CPUs for machine learning workloads, alongside advanced networking and storage capabilities. In addition, DGX systems include NVIDIA AI Enterprise, the software layer of the NVIDIA AI platform, which includes over 50 frameworks and pre-trained models to accelerate development.
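As one illustration of why that GPU hardware matters, frameworks such as PyTorch can route large matrix multiplications through Tensor Cores using mixed precision. A minimal sketch (assuming PyTorch is installed; it falls back to CPU if no GPU is present):

```python
import torch

# Use a GPU if one is available; fall back to CPU (with bfloat16) otherwise.
device = "cuda" if torch.cuda.is_available() else "cpu"
amp_dtype = torch.float16 if device == "cuda" else torch.bfloat16

a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)

# autocast runs eligible ops in reduced precision; on NVIDIA GPUs this is
# what lets large matmuls execute on Tensor Cores rather than FP32 units.
with torch.autocast(device_type=device, dtype=amp_dtype):
    c = a @ b

print(c.dtype, c.device)
```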

"Canonical has been working closely with NVIDIA to enable companies to run AI at scale easily. Together, we facilitate the development of optimised machine learning models, using AI-specialized infrastructure, with MLOps open source." says Andreea Munteanu, MLOps product manager at Canonical. "Extending this collaboration to the other layers of the stack, to have both other types of hardware, as well as AI tools and frameworks such as NVIDIA Triton Inference Server, will allow developers to benefit from a fully integrated development pipeline."

You can find out more about the DGX-Ready Software program on the NVIDIA site, and a joint Canonical/NVIDIA webinar discussing AI deployments will take place on March 28 at noon ET.

