The battle for control of cloud environments [Q&A]

Lack of control has long been a pain point for developers in cloud computing, especially with the market dominated -- and controlled -- by a few large providers.

What's likely to happen as developers' desire for control meets the hyperscalers' desire to keep companies locked into their platforms? We spoke to Billy Thompson, solutions engineering manager at Akamai, to find out.

BN: How can developers benefit from greater agency over their cloud environments?

BT: Control over cloud architectures gives developers more flexibility and agility in their development processes -- something that's becoming critical for enterprise developers building modern, cloud-native applications. Greater agency allows them to experiment with and implement new technologies more easily, iterate quickly on deployments, and release updates and new features more rapidly. However, third-party cloud vendors, especially legacy cloud providers, tend to restrict this control, leaving developers frustrated.

Think about the agile approach to software development. One of its key characteristics and benefits is adaptability. The ability to implement change rapidly comes with more control and flexibility over outcomes, which in turn results in better-quality code. The same holds true for the underlying architecture that applications run on. A flexible architecture is critical to building resilient, adaptable applications. Workload optimization -- choosing which applications are best suited to which environments -- is a critical function that developers need to control to get the most out of the cloud, both in cost efficiency and in performance.
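To make that placement decision concrete, here is a minimal sketch of the kind of logic workload optimization implies: score candidate environments against a workload's latency and budget constraints, then pick the cheapest one that qualifies. The workload profiles, environment names, and cost figures are all hypothetical, purely for illustration.

```typescript
// Hypothetical workload profiles and environments, for illustration only.
interface Workload {
  name: string;
  maxLatencyMs: number;    // max acceptable round-trip latency
  monthlyBudgetUsd: number;
}

interface Environment {
  name: string;
  typicalLatencyMs: number;  // observed latency to end users
  estMonthlyCostUsd: number; // estimated cost of running the workload here
}

// Pick the cheapest environment that still meets the latency requirement.
function placeWorkload(w: Workload, envs: Environment[]): Environment | undefined {
  return envs
    .filter((e) => e.typicalLatencyMs <= w.maxLatencyMs
                && e.estMonthlyCostUsd <= w.monthlyBudgetUsd)
    .sort((a, b) => a.estMonthlyCostUsd - b.estMonthlyCostUsd)[0];
}

const target = placeWorkload(
  { name: "checkout-api", maxLatencyMs: 50, monthlyBudgetUsd: 2000 },
  [
    { name: "hyperscaler-us-east", typicalLatencyMs: 80, estMonthlyCostUsd: 1500 },
    { name: "distributed-edge-pop", typicalLatencyMs: 30, estMonthlyCostUsd: 1800 },
  ],
);
console.log(target?.name); // "distributed-edge-pop"
```

The point is not the scoring itself but who runs it: when developers control placement, cost and performance trade-offs like this become an explicit, revisable decision rather than a default inherited from one provider.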

Another benefit that developers and organizations gain with greater agency over cloud environments is reduced cloud costs. When developers have control, they have a seat at the table in deciding which clouds are best. They can also advise stakeholders on areas to save costs, which is especially important at a time of rising cloud costs and reduced budgets.

BN: Hyperscale lock-in is not a new problem. Why are companies and developers more concerned about it now than ever before?

BT: The status quo for cloud deployment is no longer acceptable in today's environment of rising cloud costs and economic uncertainty. Macroeconomic variables, from supply chain disruption to a tumultuous global economy, have led to increased scrutiny of the cost of cloud computing. That scrutiny is unlikely to go away anytime soon, and organizations are under pressure to examine cloud sprawl and find ways to reduce bloated budgets.

While going through this reexamination, both developers and ops teams are looking to fix past mistakes and build more efficient cloud environments. Lock-in to a centralized cloud provider can make that difficult, because it gives the provider more control than the developers have.

When organizations are locked in to a single cloud provider, they lose adaptability, leaving them at the mercy of the provider they've hitched their wagon to. They risk falling behind as the industry around them accelerates.

BN: What types of workloads or applications are well suited to distributed cloud environments compared to the legacy, centralized CSP model?

BT: Distributed cloud environments are well suited to workloads and applications that process large amounts of data and cannot tolerate much downtime. They are particularly useful for applications that have high performance requirements while simultaneously spanning multiple geographic locations, devices, and platforms. Some specific use cases for distributed cloud environments include:

  • Big Data Analytics: Distributed cloud environments can be used to process and analyze large data sets in real time, providing better insights and faster decision-making.
  • IoT Applications: With the increasing number of IoT devices, distributed cloud architectures can help process and store the data generated by these devices in a more efficient and scalable way.
  • Gaming and Video Streaming: Centralized cloud environments often fall short of the cost effectiveness and performance that gaming and high-quality streaming require. The distributed cloud can provide low-latency, high-bandwidth access to gaming and video streaming applications (see the latency-routing sketch after this list).
  • Machine Learning and AI: Distributed cloud environments can be used to train and deploy machine learning models, allowing organizations to process large data sets and create more accurate models.
  • Microservices Architecture: Distributed cloud environments are well suited to the microservices architecture, which allows for the development and deployment of independent services that can be scaled up or down as required.
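As a concrete illustration of the gaming and streaming case above, the sketch below probes a set of candidate region endpoints and connects the client to whichever answers fastest -- the kind of latency-aware routing a distributed cloud makes possible. The endpoint URLs are hypothetical placeholders; any HTTPS health-check URLs would work.

```typescript
// Hypothetical region endpoints; any HTTPS health-check URLs would work.
const endpoints = [
  "https://us-east.example.com/health",
  "https://eu-west.example.com/health",
  "https://ap-south.example.com/health",
];

// Time a single round trip to one endpoint.
async function probe(url: string): Promise<{ url: string; ms: number }> {
  const start = performance.now();
  await fetch(url, { method: "HEAD" });
  return { url, ms: performance.now() - start };
}

// Probe all regions in parallel and pick the lowest-latency one.
async function nearestEndpoint(): Promise<string> {
  const results = await Promise.all(endpoints.map(probe));
  results.sort((a, b) => a.ms - b.ms);
  return results[0].url;
}

nearestEndpoint().then((url) => console.log("streaming from", url));
```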

BN: How will the changing nature of cloud and edge computing impact developer priorities?

BT: Ideally, developers will have visibility into cloud and edge environments to ensure that the two are working together to build applications that run seamlessly across multiple devices and platforms with the best performance possible.

In the future, the edge and the cloud are only going to be more connected. Providers need to lay the groundwork now to serve the entire continuum of compute, from the cloud out to the edge. This will require a new level of scale, delivery, and integration between cloud platforms and edge devices. Organizations need to build with a distributed architecture in mind to be ready for the model that cloud and edge are heading toward.

As the cloud and edge become more entwined, developers can capitalize on the benefits of edge computing within their cloud environments. For example, edge computing lets applications run close to end users for reduced latency and improved performance. Developers will need to keep performance optimization in mind when building and deploying at the edge, and also consider how these functions communicate with the cloud. Organizations must also keep security in focus with cloud-to-edge strategies. Existing models and best practices such as zero trust and microsegmentation only grow in importance as workloads span beyond central data centers, and must continue to be integrated through all stages of designing and building distributed applications.
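To illustrate the zero trust point, here is a minimal sketch of an edge gateway that verifies every request before forwarding it to the cloud origin. It assumes Node 18+ for the global fetch; the ORIGIN URL and verifyToken check are hypothetical placeholders, not a real product API -- a production deployment would verify a signed JWT or an mTLS client certificate instead of a static token.

```typescript
// A minimal zero-trust-style gateway sketch: every request must present a
// valid token before the edge node forwards it to the cloud origin.
// ORIGIN and verifyToken are illustrative stand-ins, not a real product API.
import { createServer, IncomingMessage, ServerResponse } from "node:http";

const ORIGIN = "https://origin.example.com"; // hypothetical cloud origin

// Placeholder check; a real deployment would verify a signed JWT or mTLS cert.
function verifyToken(token: string | undefined): boolean {
  return token === "expected-demo-token";
}

createServer(async (req: IncomingMessage, res: ServerResponse) => {
  const token = req.headers.authorization?.replace(/^Bearer /, "");
  if (!verifyToken(token)) {
    res.writeHead(401).end("unauthorized"); // deny by default: never trust
    return;
  }
  // Identity verified at the edge; forward to the cloud origin.
  const upstream = await fetch(ORIGIN + (req.url ?? "/"));
  res.writeHead(upstream.status).end(await upstream.text());
}).listen(8080);
```

The design choice worth noting is that the request is denied by default and verified at the edge, before it ever consumes origin capacity -- the same "never trust, always verify" posture applied at every hop between edge and cloud.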

As cloud and edge computing evolve, the way developers design and build applications will evolve with them. It will be up to cloud service providers to enable developers on this journey by giving them the tools and resources necessary to succeed in building the next generation of applications that blend cloud and edge capabilities.

BN: Developers already have access to multicloud environments. What more needs to be done to ensure they're getting the control they need to best build applications?

BT: Multicloud environments have existed for a while, but on their own they aren't enough anymore. Developers, organizations, and, most importantly, cloud providers must embrace cloud-agnostic mindsets. This means accepting that developers will move applications from one cloud to another based on workloads, costs, and organizational changes.

Even with multicloud strategies, developers may find that they're not getting the adaptability they need. Legacy cloud service providers, with their opinionated, prescriptive guidance, aren't as enthusiastic about adopting multicloud and cloud-agnostic mindsets. It may take exploring some of the modern solutions from cloud providers that emphasize portability and open source to give developers the environment they really need to thrive.
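One common way to put that portability into practice is to keep vendor SDKs behind application-owned interfaces, so moving clouds means swapping one adapter rather than rewriting the application. The sketch below is a minimal illustration under that assumption; the ObjectStore contract and InMemoryStore stand-in are hypothetical, and a real adapter would wrap a provider SDK or an S3-compatible API behind the same two methods.

```typescript
// A cloud-agnostic storage interface: application code depends only on this
// contract, so swapping providers means swapping one adapter, not a rewrite.
interface ObjectStore {
  put(key: string, data: Uint8Array): Promise<void>;
  get(key: string): Promise<Uint8Array | undefined>;
}

// In-memory stand-in; a real adapter would wrap a provider's SDK or an
// S3-compatible API behind the same two methods.
class InMemoryStore implements ObjectStore {
  private objects = new Map<string, Uint8Array>();
  async put(key: string, data: Uint8Array) { this.objects.set(key, data); }
  async get(key: string) { return this.objects.get(key); }
}

// Application code is written against the interface, never a vendor SDK.
async function saveReport(store: ObjectStore, report: string) {
  await store.put("reports/latest", new TextEncoder().encode(report));
}

saveReport(new InMemoryStore(), "q3 numbers").then(() => console.log("saved"));
```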

Photo Credit: Stokkete/Shutterstock
