Why service mesh adopters are moving from Istio to Linkerd [Q&A]

Service mesh, in case you're unfamiliar with it, is an infrastructure layer that handles communication between services or microservices by routing it through proxies.

This offers a number of benefits, including secure connections and visibility into service-to-service traffic. The two main competitors in the service mesh space are Istio and Linkerd, and the market has recently seen a shift towards the latter. We spoke to William Morgan, CEO of Buoyant and co-creator of Linkerd, to find out more about service mesh and why the shift is happening.
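To make the proxy idea concrete, here is a minimal sketch in Go -- not Linkerd's actual implementation, and with placeholder ports and addresses -- of a sidecar-style reverse proxy that fronts a single service and records per-request latency, the kind of visibility a mesh proxy provides without any change to the application:

    // A minimal sketch, not Linkerd's implementation: a sidecar-style
    // reverse proxy that fronts one service and logs per-request latency,
    // illustrating the visibility a mesh proxy adds without touching the app.
    // The ports and target address are placeholders for illustration.
    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
        "time"
    )

    func main() {
        // The application this proxy sits in front of.
        target, err := url.Parse("http://127.0.0.1:8080")
        if err != nil {
            log.Fatal(err)
        }
        proxy := httputil.NewSingleHostReverseProxy(target)

        handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            start := time.Now()
            proxy.ServeHTTP(w, r)
            // Telemetry emitted by the proxy layer, not by application code.
            log.Printf("%s %s took %v", r.Method, r.URL.Path, time.Since(start))
        })

        // Callers talk to the proxy on :9090 instead of the service directly.
        log.Fatal(http.ListenAndServe(":9090", handler))
    }

A real mesh proxy does far more (mutual TLS, retries, load balancing), but the shape is the same: traffic passes through an intermediary that the platform team controls, rather than the application.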

BN: Who needs a service mesh, and why?

WM: Anyone building business-critical software as microservices with a real-time component -- a website, an API, some kind of transactional system -- needs a service mesh. Why? Because the features the service mesh provides around reliability, security, and observability are absolutely critical for these applications, and because the mesh delivers those features in a package that, in many situations, is hugely better than the alternative of doing it all by hand in the application code.
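As a rough illustration of 'doing it by hand', here is a hypothetical Go sketch of the timeout-and-retry logic each team ends up writing into its own service when there is no mesh; a service mesh applies equivalent behavior uniformly at the proxy layer, with no application changes. The endpoint URL and retry policy below are made up for the example:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // getWithRetries hand-rolls a per-call timeout and naive retries with
    // backoff -- reliability logic a mesh would otherwise handle for every
    // service in one place.
    func getWithRetries(url string, attempts int) (*http.Response, error) {
        client := &http.Client{Timeout: 2 * time.Second}
        var lastErr error
        for i := 0; i < attempts; i++ {
            resp, err := client.Get(url)
            if err == nil && resp.StatusCode < 500 {
                return resp, nil
            }
            if err == nil {
                resp.Body.Close()
                lastErr = fmt.Errorf("server error: %s", resp.Status)
            } else {
                lastErr = err
            }
            time.Sleep(time.Duration(i+1) * 100 * time.Millisecond) // crude backoff
        }
        return nil, lastErr
    }

    func main() {
        // Placeholder endpoint purely for illustration.
        resp, err := getWithRetries("http://orders.internal/api", 3)
        if err != nil {
            fmt.Println("request failed:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("status:", resp.Status)
    }

Multiply that boilerplate across every service and every language in an organization and the appeal of moving it into a shared infrastructure layer becomes clear.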

BN: What are the challenges users face when adopting a service mesh?

WM: Complexity and 'day two' operational cost are by far the most significant challenges. The service mesh provides a ton of value, but like any software that value comes at a cost, and for infrastructure software the significant, unavoidable cost to an organization is operational. Someone has to run the service mesh: monitor it, respond to issues and alerts, upgrade it, debug it when things go south, and so on. Depending on which service mesh you're using, those duties could range anywhere from 'a fraction of someone's job' to 'the full-time work of a dedicated team'.

BN: Linkerd and Istio are the two main service meshes. Why are these two so prominent?

WM: They are the two most widely adopted service meshes, especially in the Kubernetes space, where the real leading-edge service mesh work is happening. Linkerd was the first service mesh and the project that created the category; Istio came later and benefited from a whole lot of marketing dollars.

BN: How does Linkerd differ from Istio?

WM: The two projects couldn't be more different. Istio is a huge project and it tries to solve all problems for all people, regardless of the operational cost. Linkerd instead follows the UNIX philosophy of 'do one thing and do it well', and is extremely disciplined in its focus on simplicity and on lowering the operational burden as much as possible. Those two different philosophies result in dramatically different projects.

Even the underlying implementations are quite different: Istio relies on the large and complex C++-based Envoy proxy, while Linkerd takes a tailored approach with its Rust 'micro-proxies', which avoid many of the CVEs that plague Envoy and dramatically reduce complexity at the data plane level.

BN: You've mentioned that you're increasingly seeing Linkerd users switching from Istio. Why is that?

WM: This has been an unexpected but very visible change in the way that users are coming to Linkerd. I think the huge marketing spend around Istio, especially early on, resulted in a lot of people who bought the service mesh value proposition, adopted Istio, and then got burned by its complexity. So now they believe in the service mesh but don't trust Istio, and go looking for something else, and find Linkerd -- which they can get working in minutes, without the team of infrastructure engineers sweating away behind the scenes. It's pretty amazing to watch and the community channels are littered with stories like this.

BN: The Linkerd maintainers recently published benchmarks showing that Linkerd significantly outperforms Istio. What's the reason for that difference?

WM: These benchmarks were remarkable. We showed that Linkerd consumed an order of magnitude less memory and CPU, while Istio introduced as much as 400 percent more latency. And we did it in a very open way: we used a third-party benchmark harness, the code and the raw data are all published in a repo, and anyone who wants to reproduce the experiments can.

This huge difference comes down to one thing: Linkerd's Rust-based micro-proxy versus Istio's choice of Envoy. Envoy is a great project, but it's a general-purpose proxy, which means it's big and complex. Linkerd's Rust micro-proxies, by contrast, solve only the very specific problem of being a service mesh sidecar proxy, so they can be incredibly efficient -- which is exactly what these benchmarks show. Using Rust has some other really nice properties too: we avoid the entire class of memory-safety CVEs and buffer overflow exploits that are endemic to C and C++ code, and we can ride the wave of the incredible investment in the Rust ecosystem, which is the focal point of some of the best systems thinking and design happening in the world today. The future of the cloud will not be built in C++.

BN: What does the future look like for Linkerd?

WM: I'm buying an extra pair of shades. Linkerd's been around for a few years, but the last year has been pretty astounding, with 300 percent growth in installs and a huge influx of companies like Entain, the Norwegian Labor and Welfare Administration, and H-E-B. We're working both on adding highly requested features like policy and on reducing the operational cost even further with managed services like Buoyant Cloud. The next few years are going to be pretty amazing.

