The challenges facing Kubernetes developers and how to overcome them [Q&A]
Developers have a lot to think about in 2022. Security tops the list and, increasingly, developers working in the cloud and with Kubernetes need to think about cost too.
We talked to Rob Faraj, co-founder of cost monitoring tool Kubecost, to find out what cultural shifts organizations and developers need to make to overcome the challenges created by growing Kubernetes adoption.
BN: What is the key Kubernetes trend you're seeing as we head into 2022?
RF: Kubernetes has become ubiquitous in enterprise environments; there's no debate there. The 2021 Red Hat State of Kubernetes Security report found that 88 percent of respondents use Kubernetes, and 74 percent are now running it in production. But this massive migration to the cloud and containers is upending traditional IT budgets and P&Ls. CFOs may not have the vocabulary to discuss, or the technical background to untangle, the complexities of Kubernetes, but they do want to know when and why expenses are ballooning. This has given rise to new processes in which FinOps certifications, methods, and tools are being used across the enterprise.
The people who can most influence spend remain software engineers, who can naturally reduce Kubernetes spending simply by getting insight (i.e. data) into what is being spent and where. This cultural shift, in which people outside of finance care about costs, is essential. Sensible cost monitoring processes and accountability can rapidly yield meaningful savings. Making developers more cognizant of the resources they use boosts not just cost efficiency but also productivity and security.
BN: What are some of the biggest challenges enterprise developers face right now with Kubernetes?
RF: The industry is quickly addressing two significant challenges with Kubernetes adoption: security and cost. In 2022, we expect substantial progress in both areas to accelerate the push to modernize via containerization.
From a security perspective, that same Red Hat report found that more than half of respondents had actually delayed deploying Kubernetes applications into production due to security concerns. That’s a big chunk. DevSecOps is on the rise as a result, with nearly three-quarters of enterprise respondents ramping up their DevSecOps efforts (25 percent say it is already in advanced stages; 49 percent say it is in early stages).
From a cost perspective, a recent Cloud Native Computing Foundation (CNCF) survey found that cloud and Kubernetes-related bills are going up across the board -- and quickly. Over the past year, 68 percent of respondents reported that Kubernetes costs increased. Among those whose spend increased, half saw it jump more than 20 percent during the year. Kubernetes overspend has been an easy trap to fall into, given how simple it is to provision costly resources such as GPUs and to use tools like cluster autoscaling to provision resources programmatically. Particularly as Kubernetes scales in the enterprise (and particularly in multi-tenant environments), seemingly minor oversights and bugs can quickly result in larger-than-necessary bills.
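One concrete guardrail against that kind of overspend is setting explicit resource requests and limits on every workload, so the scheduler reserves only what is needed and a runaway container cannot trigger endless autoscaling. The pod spec below is a minimal illustrative sketch; the names, image, and values are hypothetical and not from the interview:

```yaml
# Hypothetical pod spec showing explicit resource requests and limits.
# Without limits, a single misconfigured workload can claim far more
# capacity (and cost) than it needs, especially with cluster autoscaling.
apiVersion: v1
kind: Pod
metadata:
  name: batch-worker            # illustrative name
  labels:
    team: data-platform         # label that cost tools can allocate spend by
spec:
  containers:
    - name: worker
      image: registry.example.com/batch-worker:1.4   # placeholder image
      resources:
        requests:
          cpu: "500m"           # what the scheduler reserves on a node
          memory: "512Mi"
        limits:
          cpu: "1"              # hard ceiling; prevents runaway consumption
          memory: "1Gi"
```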
BN: How can enterprises overcome these challenges?
RF: With both container security and cost reduction, change follows cultural shifts within organizations. In terms of getting control of Kubernetes costs, even small steps by engineers toward monitoring where Kubernetes-related cloud spend is going can have a swift and noticeable impact on budgets. With more robust 'showback' or 'chargeback' methods and enforcement mechanisms emphasizing team accountability, enterprises can further optimize infrastructure and realize more significant savings.
Businesses with less advanced environments and two or fewer application engineering teams can probably get by with a reactive approach: reviewing their cloud bill each month and then addressing any issues contributing to unnecessary costs.
Larger and more complex enterprises tend to use showback, chargeback, or hybrid cost monitoring to rein in Kubernetes expenses. Each of these requires cultural shifts, as developer and engineering teams need access to detailed cost breakdowns and must either pay Kubernetes and cloud costs out of their own budgets or pay only for resources that exceed pre-set spending limits.
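Showback and chargeback both depend on being able to attribute spend to a team, and in Kubernetes this is commonly done by labeling namespaces (or workloads) with an owner that cost tools can then aggregate on. A minimal sketch, with hypothetical label keys and team names:

```yaml
# Hypothetical namespace labeled for cost allocation. Cost monitoring
# tools can group spend by these labels to produce per-team showback
# reports or chargeback invoices.
apiVersion: v1
kind: Namespace
metadata:
  name: payments
  labels:
    team: payments            # attribute all spend in this namespace to a team
    cost-center: "cc-1042"    # hypothetical finance code used for chargeback
```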
BN: How can enterprise developers working in areas such as AI/ML optimize their Kubernetes clusters?
RF: If you're familiar with the growth of AI and machine learning development in recent years, you're likely well aware of the need to speed up the intensive calculations required for tasks like deep learning. Using GPUs with Kubernetes allows you to extend the scalability of Kubernetes to ML applications. A surprisingly high share of environments use Kubernetes to test or develop AI models and applications. In fact, a recent survey conducted by Dimensional Insight on behalf of Spectro Cloud found Kubernetes often manages the requirements of one-off use cases such as AI/ML or GPU support (33 percent) and scaling to large environments (24 percent). These use cases face the same challenges as all Kubernetes usage: you need good management, insights, and controls to optimize efficiency and keep costs down.
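As a sketch of the GPU case: Kubernetes exposes GPUs as schedulable resources through device plugins, so an ML workload requests them much like CPU or memory. The manifest below assumes the NVIDIA device plugin is installed on the cluster; the pod name and image are hypothetical:

```yaml
# Hypothetical training pod requesting a GPU. GPUs are requested in
# limits and cannot be overcommitted, which is exactly why idle GPU
# pods are an expensive oversight.
apiVersion: v1
kind: Pod
metadata:
  name: train-model           # illustrative name
spec:
  containers:
    - name: trainer
      image: registry.example.com/trainer:2.0   # placeholder training image
      resources:
        limits:
          nvidia.com/gpu: 1   # requires the NVIDIA device plugin
  restartPolicy: Never        # training jobs run to completion
```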