Securing Kubernetes in the enterprise [Q&A]


As more organizations scale up containerized workloads, they’re also facing increasing security and compliance challenges.
We spoke with Kim McMahon, part of the leadership team at Sidero Labs, to discuss the vulnerabilities enterprises are encountering when scaling up Kubernetes on traditional operating systems and what they can do to counter them.
BN: What security vulnerabilities are enterprises coming up against as they scale Kubernetes on traditional operating systems?
KM: Scaling Kubernetes on traditional operating systems is essentially building cloud-native infrastructure on a foundation that wasn’t designed for it. The biggest vulnerability is the massive attack surface. Traditional OSes have 2,000-3,000 binaries, compared to purpose-built alternatives with as few as 20. That’s a heck of a lot more potential entry points.
SSH access is another issue. While it’s been the backbone of server management for decades, SSH creates undeniable vulnerabilities in Kubernetes environments. Each and every SSH session can introduce configuration drift and human error, breaking the declarative principles that make Kubernetes so powerful.
We’re also seeing enterprises struggle with outdated package management systems that can introduce inconsistencies during updates. When you’re managing dozens or hundreds of nodes, those inconsistencies become security nightmares. Additionally, traditional OSes lack built-in network-level encryption and mutual TLS between components, which leaves cluster communications vulnerable. And the overhead of these systems consumes around 30-40 percent more memory than necessary, which directly impacts performance.
The reality is that general-purpose operating systems become Kubernetes bottlenecks at scale. They weren’t built for distributed container workloads, and attempting to retrofit them for Kubernetes leads to security gaps.
BN: As European organizations, in particular, increasingly prioritize data sovereignty, how does their Kubernetes operating system strategy address both technical and regulatory compliance challenges?
KM: There’s definitely a shift right now with European organizations migrating Kubernetes workloads away from the public cloud and to on-prem or hybrid environments to maintain more direct control over their data and infrastructure. This isn’t just a reaction to GDPR or other compliance requirements. It’s a broader strategy to reduce exposure to geopolitical and legal uncertainty, especially when US-based cloud providers are involved.
European tech leaders are adopting Kubernetes-specific operating systems that are optimized for bare metal and designed for secure, consistent operation. The strategy enables them to avoid the risks of SSH-based management, rely instead on API-driven workflows, and get encryption and mTLS baked in. (Just as important, they’re built to play nicely with hybrid cloud setups.)
We’re also seeing smarter data tiering among European businesses, where their critical or regulated data stays on-prem and less sensitive workloads can still take advantage of cloud scale.
BN: API-based management is increasingly replacing traditional interfaces like Bash and SSH in specialized Kubernetes operating systems. How does this architectural shift alter the security posture of containerized environments?
KM: Shifting from SSH to API-based management is one of the most important changes happening in Kubernetes operations right now. SSH might be familiar, but it treats production nodes like personal machines where every manual session is a chance to drift from your intended state. That’s not just inefficient, it’s also risky.
API-based systems flip that model by enforcing consistency from the start. Every change runs through structured workflows, not ad-hoc commands. It’s infrastructure as code that’s applied down to the operating system level. That means no surprises, no silent config drift, and much stronger alignment with Kubernetes’ declarative model.
Without SSH or local user accounts, you shut down entire attack vectors. Everything runs through mTLS, so only authenticated services get to talk to each other. Because everything goes through APIs, you get built-in logs and audit trails. You know exactly what happened and when.
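To make that concrete, here’s a minimal sketch in Python of what an API-driven change can look like. The endpoint, payload shape and certificate paths are hypothetical placeholders rather than any particular product’s API; Talos Linux and similar systems expose their own typed, versioned APIs.

```python
# Minimal sketch of API-driven node management replacing an SSH session.
# The endpoint, payload and file paths are hypothetical placeholders.
import requests

NODE_API = "https://10.0.0.11:50000/config/apply"  # hypothetical node endpoint

# Desired state for the node, expressed declaratively rather than as ad-hoc commands.
desired_config = {
    "hostname": "worker-01",
    "kubelet": {"image": "kubelet:v1.30.0"},
    "network": {"encryption": "wireguard"},
}

# Mutual TLS: the client presents its own certificate and verifies the node's
# certificate against a cluster CA, so only authenticated parties can talk.
response = requests.post(
    NODE_API,
    json=desired_config,
    cert=("/etc/cluster/admin.crt", "/etc/cluster/admin.key"),
    verify="/etc/cluster/ca.crt",
    timeout=10,
)
response.raise_for_status()

# Because every change flows through the API, it can be logged and audited centrally.
print("applied:", response.status_code)
```

The point isn’t the specific call, it’s that the desired state is data, the transport is authenticated both ways, and the change leaves a record.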
Admins, understandably, sometimes worry about giving up direct access. But think of it as a mindset change, not just a technical shift. You’re managing distributed infrastructure like cloud-native services, not old-school Linux boxes. The payoff is better security, more consistency, and fewer chances for human error to sneak in.
BN: How might the convergence of edge computing and data sovereignty requirements shape the evolution of purpose-built operating systems for Kubernetes over the next two to three years?
KM: Edge computing and data sovereignty are colliding, and it’s speeding up the adoption of purpose-built systems like the open source Talos Linux because they solve problems traditional OSes just weren’t designed for.
The edge introduces serious constraints, where you’re dealing with single-node clusters in remote locations, limited bandwidth, and zero on-site support. That’s where minimalist, container-first OSes shine. They deliver the core capabilities you need without the bloat, which matters when every CPU cycle and megabyte of RAM counts.
Data sovereignty requirements will push these systems to include more sophisticated controls for data locality and processing (such as geofencing capabilities). Automated network-level encryption will become standard, with improvements to technologies like WireGuard and KubeSpan to secure distributed cluster communications. We’ll also see deeper integration of security benchmarks like CIS and KSPP guidelines directly into the OS foundation.
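To give a rough sense of what automated node-to-node encryption involves under the hood, here’s a small Python sketch using the cryptography package. It generates a WireGuard-compatible Curve25519 key pair and renders a peer entry of the kind a purpose-built OS can distribute automatically; the addresses and IP ranges are hypothetical placeholders.

```python
# Sketch: generate a WireGuard-compatible Curve25519 key pair and render a
# peer entry. Addresses and allowed IP ranges are hypothetical placeholders;
# systems like KubeSpan handle this key exchange automatically across a cluster.
import base64
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

private_key = X25519PrivateKey.generate()
private_b64 = base64.b64encode(
    private_key.private_bytes(
        encoding=serialization.Encoding.Raw,
        format=serialization.PrivateFormat.Raw,
        encryption_algorithm=serialization.NoEncryption(),
    )
).decode()
public_b64 = base64.b64encode(
    private_key.public_key().public_bytes(
        encoding=serialization.Encoding.Raw,
        format=serialization.PublicFormat.Raw,
    )
).decode()

# A peer stanza like this is shared with the other nodes in the mesh, so all
# cluster traffic between them travels over an encrypted WireGuard tunnel.
peer_config = f"""[Peer]
PublicKey = {public_b64}
AllowedIPs = 10.244.1.0/24
Endpoint = 203.0.113.10:51820
"""
print(peer_config)
```

Doing this by hand for a handful of nodes is manageable; doing it for hundreds of edge sites is exactly the kind of work the operating system should automate.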
The key for any Kubernetes-specialized operating system will continue to be making Kubernetes security and compliance ironclad without making Kubernetes harder to use.
BN: With bare metal infrastructure experiencing renewed interest among enterprises running Kubernetes workloads, what practical considerations should IT leaders evaluate when comparing the total cost of ownership between cloud-based and on-premises deployment models?
KM: The cloud’s convenience is real, but so are its hidden costs. Many enterprises are rediscovering that bare metal offers predictable performance, tighter control, and clearer TCO economics for stable Kubernetes workloads. Cloud auto-scaling can sound efficient but introduces complexity, with engineering teams burning countless hours fine-tuning configurations. On bare metal, those cycles are saved, and your workload performs consistently without surprise throttling or cold starts.
Egress fees alone can blow up cloud budgets, especially in data-heavy Kubernetes environments. On-prem infrastructure eliminates them outright. Even ‘wasted’ capacity on bare metal isn’t necessarily waste. With predictable demand, overprovisioning is cheaper than constant optimization work. It’s also more secure, since data doesn’t leave your walls. The winning strategy increasingly combines both models: use cloud for bursty workloads, and run predictable ones on bare metal.
Image credit: serezniy/depositphotos.com