Updated platform helps developer and data science teams use GPUs to embrace AI
Platform-as-a-Service (PaaS) provider Rafay Systems is launching new capabilities for its enterprise PaaS for modern infrastructure to support graphics processing unit (GPU)-based workloads.
This makes compute resources for AI instantly usable by developers and data scientists, while keeping enterprise-grade protections in place.
In addition to applying its existing capabilities to GPU-based workloads, Rafay has extended its enterprise PaaS with features that specifically support GPU workloads and infrastructure. The aim is to let every developer and data scientist accelerate AI-driven innovation while staying within the guidelines and policies set by the enterprise.
"I am immensely proud of Team Rafay for having extended our enterprise PaaS offering to now support GPU-based workloads in data centers and in all major public clouds," says Haseeb Budhani, co-founder and CEO of Rafay Systems. "Beyond the multi-cluster matchmaking capabilities and other powerful PaaS features that deliver a self-service compute consumption experience for developers and data scientists, platform teams can also make users more productive with turnkey MLOps and LLMOps capabilities available on the Rafay platform. This announcement makes Rafay a must-have partner for enterprises, as well as GPU and sovereign cloud operators, looking to speed up modern application delivery."
Features include an easy-to-use, self-service experience for requesting GPU-enabled workspaces; pre-configured workspaces for AI development; and dynamic matching of workspaces with available GPUs, along with time slicing and multi-instance GPU sharing to virtualize GPUs.
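Rafay has not published how its matchmaking works under the hood, but as a rough illustration of the general idea of pairing workspace requests with shared GPU capacity (where a physical GPU is split into time-sliced or multi-instance "slices"), a toy sketch might look like the following. All names here, such as Gpu, WorkspaceRequest, and match, are hypothetical and are not part of Rafay's product.

```python
# Purely illustrative sketch of GPU "matchmaking": place each workspace request
# on the first GPU in the pool that still has enough free slices, in the spirit
# of time slicing / multi-instance GPU sharing. Not Rafay's implementation.
from dataclasses import dataclass, field


@dataclass
class Gpu:
    name: str
    slices_total: int              # e.g. 4 time-sliced replicas or MIG partitions
    slices_free: int = field(init=False)

    def __post_init__(self):
        self.slices_free = self.slices_total


@dataclass
class WorkspaceRequest:
    team: str
    slices_needed: int


def match(requests, gpus):
    """Greedy matchmaking: assign each workspace to the first GPU with capacity."""
    placements = {}
    for req in requests:
        for gpu in gpus:
            if gpu.slices_free >= req.slices_needed:
                gpu.slices_free -= req.slices_needed
                placements[req.team] = gpu.name
                break
        else:
            placements[req.team] = None  # no capacity left: request would be queued
    return placements


if __name__ == "__main__":
    pool = [Gpu("a100-node-1", slices_total=4), Gpu("a100-node-2", slices_total=4)]
    asks = [WorkspaceRequest("data-science", 2),
            WorkspaceRequest("ml-platform", 3),
            WorkspaceRequest("research", 4)]
    print(match(asks, pool))
    # -> {'data-science': 'a100-node-1', 'ml-platform': 'a100-node-2', 'research': None}
```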
You can find out more and request a free trial on the Rafay site.
Image credit: jamesteohart/depositphotos.com