What is fueling container and multicloud adoption? [Q&A]


Containers and cloud native applications are transforming how quickly and reliably organizations can bring ideas to market, while also providing the key building blocks for multicloud strategies.

But for all the speed, agility, and elasticity these technologies promise, they also create a complexity tipping point. Across industries, IT staff looking to deliver on the benefits of container and multicloud strategies -- without being overwhelmed by complexity -- are increasingly turning towards workload automation.

To give us more insight on the rise of containers and multicloud adoption, we spoke with Asena Hertz, senior product marketing manager at Turbonomic, one of the fastest-growing technology companies in the virtualization and cloud management space.

BN: What are the main factors fueling the growth of container and multicloud adoption?

AH: The short answer is 'the business'. Containers supercharge developer speed and agility. By enabling microservice architectures, developers can work in parallel on different parts of an application and release more frequently. There's less risk because there are fewer dependencies between application services, as well as on the computing environment (for example, moving from Dev to Test to Production). The same portability that accelerates releases is also one of the key building blocks for a multicloud strategy.

We recently released a 2019 State of Multicloud report, which found that 83 percent of respondents believed that workloads will someday move freely across clouds and that the primary driver would be to leverage best-of-breed cloud services. Clouds today are not just infrastructure-as-a-service (IaaS), but providers of application and business services. These services are their true differentiation and will increasingly become their competitive advantage.

BN: What is the overall purpose of a container platform?

AH: A container platform helps run applications at the desired level of service for the business.

As people become familiar with these platforms (like Kubernetes), they're realizing that, while the platforms provide software-defined mechanisms to manage containers, they don't make systems automatic by default. You can automate a lot of things in these platforms, but that automation relies on threshold-based policies, which require a person to define what those policies should be -- which is, by definition, reactive. By the way, the 'self-healing' that everyone talks about is based on thresholds that a person set, and it can still impact the end-user experience.

Even in Kubernetes, resource decisions are 'best guess' allocations, based on how a developer expects their application services to behave. It takes work to set them up, whether it's container requests and limits or Horizontal Pod Autoscaling policies, to name two examples. In these highly dynamic environments, services should be continuously checked to verify that they are in fact getting the right amount of CPU and memory to meet their Service Level Objectives (SLOs). That doesn't scale without software making those resource decisions continuously. Now, what if the workload could make its OWN decisions about what resources it needs to perform -- accounting for resource tradeoffs at every layer of the stack (not just a Kubernetes cluster)? That scales -- and it's automatic.
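To make those 'best guess' allocations concrete, here is a minimal sketch of what a developer has to define up front in Kubernetes: static requests and limits on a container, plus a threshold-based Horizontal Pod Autoscaler. The service name, image, and all the numbers below are hypothetical, chosen only to illustrate the point that a person picks them in advance:

```yaml
# Hypothetical example -- values are illustrative, not recommendations.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-service          # hypothetical service name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-service
  template:
    metadata:
      labels:
        app: example-service
    spec:
      containers:
      - name: app
        image: example/app:1.0   # placeholder image
        resources:
          requests:              # what the scheduler reserves up front
            cpu: "250m"
            memory: "256Mi"
          limits:                # hard ceiling the container cannot exceed
            cpu: "500m"
            memory: "512Mi"
---
# Threshold-based autoscaling: a person picks the 70% CPU target,
# and scaling reacts only after utilization crosses that line.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-service-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-service
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```

Every number here -- requests, limits, the utilization target -- is a human guess made before the workload runs, which is exactly the reactive, threshold-driven setup the answer above describes.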

BN: Do you believe containerized workloads are increasing in popularity? If so, why?

AH: Absolutely. More importantly, the popularity of containers is growing much faster than we saw with cloud, or with virtualization before that. According to our report, 49 percent of respondents expect to have more than 50 percent of their environment running containerized applications within 18 months.

Containerized workloads will become the de facto method for building applications. The only question is how quickly that happens. Gartner predicts that by 2022, more than 75 percent of global organizations will be running containerized applications in production, which is a significant increase from fewer than 30 percent today.

BN: What obstacles do companies running containerized applications in production face? On the flip side, what is driving companies to move towards production?

AH: Complexity and culture are the top challenges for IT today. Containers enable speed, agility, and elasticity at scale. This has huge implications for an organization's ability to bring ideas to market quickly. And, because they can run anywhere, organizations are starting to think seriously about leveraging multiple clouds based on what's best for their applications and their business.

With the scale of services to be managed, the frequency of deployments, the highly dynamic elasticity that's now possible, and the globally distributed cloud offerings, organizations face an incredible amount of complexity. While we're all used to talking about how technology is transforming the business, we can't overlook the people and teams doing the work. The frequency of releases relies on highly collaborative and communicative teams. If there was ever a time when teams had to work together towards one goal, adopting containers and cloud native makes that time now. It's a bit ironic: while containers enable the decoupling of application services, for the people they necessitate tightly integrated operations.

BN: What are some common considerations for those who want to expand requirements to run workloads anywhere?

AH: Some common considerations for those who want to expand their requirements to run workloads anywhere include:

  • Architect for elasticity across multiple infrastructure providers, whether different private data centers or cloud service providers. Most organizations are looking to Kubernetes and the different distributions from cloud providers to help them here.
  • Maintain portability of services to avoid lock-in. And don't think about lock-in only in terms of the financial cost of moving your data, but also the operational 'stickiness' of lock-in. Every cloud has a different way of doing things, requiring teams to learn new skills and/or processes.
  • Look at what services your applications will want to leverage from different providers and consider integration with other services across clouds. Look at leveraging service meshes that manage security, routing, and access between services on and off platform, and across clouds.
  • Build your platforms and processes to operate at scale -- from the very beginning. Having a strategy to manage performance across multiple clouds, as well as new and legacy environments, is critical to reaping the business benefits that containers and multicloud offer.
