Enterprises replacing data centers with hybrid clouds


Larry Ellison, founder of Oracle, once summed up the concept of cloud computing very succinctly: "All it is, is a computer attached to a network." Ellison and Oracle have since gone on to embrace both open source and cloud technologies, including OpenStack, but the basic premise that it all starts with a physical server and a network still holds true.

The server industry is going through massive change, driven in large part by advances in open source software, networking, and automation. The days of the monolithic on-site server room, filled with racks, blinking lights, and buzzing air conditioning, are gone. However, Ellison's alluringly simple definition is not quite how things work in the real world.

Organizations that want to run a private cloud on premises, or a hybrid that spans a public cloud, must first master bare metal servers and networking, and that requirement is driving a major transition in the data center.

Instead of a single monolithic facility, large organizations are deploying hybrid clouds that run on many smaller servers distributed across far-flung sites around the globe. These servers are deployed and managed, as demand dictates, by robots rather than by IT administrators.

There is minimal human interaction; the whole whirligig turns on slick IT automation, designed and directed by an IT technician from a dashboard on a laptop, in any physical location.

Suddenly, traditional IT infrastructure is less gargantuan and less costly, if no less important. Servers remain part of a bigger solution, one now defined in software. It is crucial that CIOs also make use of their existing hardware to take full advantage of the opportunities the cloud offers, rather than simply ripping it out and starting again.

It does not make sense to replace that infrastructure wholesale at great expense; such squandering ultimately hinders progress. New architectures and business models are emerging that will streamline the relationship between servers and software, and make cloud environments more affordable to deploy.

What do robots bring to the party?

Automating data centers to do in minutes what a team of IT administrators used to do in days does present a challenge to organizations.

Reducing human interaction may raise fears of job losses, but in practice IT directors and CIOs will find they can redeploy the workforce to focus on higher-value tasks, giving staff more time to work with their infrastructure and enabling them to extract real value from their cloud architectures.

In addition, automation opens the field to new and smaller players. Rather than requiring an organization to spend a great deal of time and money on specialist IT consultancy, automation and modeling allow smaller organizations to take advantage of the opportunities the cloud offers and to deliver their services more effectively.

For example, imagine you are a pharmaceutical company analyzing medical trial data. Building a Hadoop big data cluster to analyze this data set could previously have taken ten working days. Through software modeling on bare metal, that deployment can be reduced to minutes, allowing analysts to get to work more quickly, find the trends or results in a trial, and bring new drugs to market faster.
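As a rough illustration of what that modeling step can look like, here is a minimal Python sketch that drives a service-modeling CLI (Juju is used as a concrete example; the "hadoop-processing" bundle and "slave" worker application names are assumptions for illustration, not details from this article):

```python
import subprocess

def run(cmd):
    """Run one CLI step and fail loudly, so a broken deploy surfaces early."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Deploy a modeled Hadoop cluster in one step instead of hand-building it
# over days. "hadoop-processing" is an assumed bundle name; substitute
# whatever bundle or charm set your modeling tool provides.
run(["juju", "deploy", "hadoop-processing"])

# Scale out the worker tier as the trial data set grows ("slave" is the
# assumed name of the worker application in the bundle).
run(["juju", "add-unit", "-n", "2", "slave"])

# Watch the model converge; analysts can start once units report "active".
run(["juju", "status"])
```

The point is not the specific tool: the cluster is described once as a model, and the robots do the ten days of assembly.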

Deployment and Expansion

The emergence of big data, "big software," and the Internet of Things is changing how data center operators design, deploy, and manage their servers and networks. "Big software" is a term we at Canonical have coined to describe a new era of cloud computing. Where applications were once primarily composed of a couple of components on a few machines, modern workloads like cloud, big data, and IoT are made up of multiple software components and integration points across thousands of physical and virtual machines.

The traditional practice of delivering scale on a limited number of very large machines is being replaced by a new methodology, where scale is achieved via the deployment of many servers across many environments.

This represents a major shift in how data centers are deployed today, and it gives administrators a more flexible way to extract value from cloud deployments while also reducing operational costs. A new era of software (web, Hadoop, MongoDB, ELK, NoSQL) is enabling them to make more of their existing hardware. Indeed, the tools available to CIOs for leveraging bare metal servers are frequently overlooked.

Beyond this, new software and faster networking are starting to allow IT departments to take advantage of the benefits that distributed, heterogeneous architectures bring to their workloads. We are at a tipping point, as much of this new server software and technology is only now taking hold.

OpenStack has established itself as an alternative to public cloud for enterprises wishing to run their IT operations as a cost-effective private or hybrid cloud environment. Containers have brought new efficiencies and functionality over traditional virtual machine (VM) models, and service modeling brings new flexibility and agility to both enterprises and service providers.

Meanwhile, existing hardware infrastructure can be leveraged to deliver application functionality more effectively. What happens from here, over the next three to five years, will determine how end-to-end solutions are architected for the next several decades.

Next Generation Solutions

Presently, each software application has different server demands and resource utilization. Many IT organizations tend to over-build to compensate for peak load, or else over-provision VMs to ensure enough capacity for the future. The next generation of hardware, using automated server provisioning, will ensure today's IT professionals won't still be performing capacity planning in five years' time.

With the right provisioning tools, they can instead develop strategies for creating differently configured hardware and cloud archetypes that cover every class of application across their current environment and existing IT investments.
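To make the archetype idea concrete, here is a minimal sketch assuming an invented catalog format; the archetype names, hardware shapes, and matching rule are illustrative, not any real tool's schema:

```python
from dataclasses import dataclass

@dataclass
class Archetype:
    """An illustrative hardware/cloud archetype for one class of application."""
    name: str
    cpus: int
    ram_gb: int
    disk_gb: int
    tags: frozenset  # capabilities this shape offers, e.g. {"ssd", "10gbe"}

# A small, assumed catalog covering the main application classes.
CATALOG = [
    Archetype("general-web", cpus=4, ram_gb=16, disk_gb=200, tags=frozenset({"10gbe"})),
    Archetype("big-data", cpus=16, ram_gb=128, disk_gb=4000, tags=frozenset({"10gbe", "jbod"})),
    Archetype("in-memory-db", cpus=8, ram_gb=256, disk_gb=500, tags=frozenset({"ssd"})),
]

def pick_archetype(cpus, ram_gb, disk_gb, needs=frozenset()):
    """Return the smallest catalog entry that satisfies a workload's demands."""
    candidates = [a for a in CATALOG
                  if a.cpus >= cpus and a.ram_gb >= ram_gb
                  and a.disk_gb >= disk_gb and needs <= a.tags]
    if not candidates:
        raise LookupError("no archetype covers this workload; extend the catalog")
    return min(candidates, key=lambda a: (a.cpus, a.ram_gb, a.disk_gb))

# e.g. a Hadoop worker needs bulk local storage ("jbod"):
print(pick_archetype(8, 64, 2000, needs=frozenset({"jbod"})).name)  # -> big-data
```

Matching every application class against a small, fixed catalog like this is what lets provisioning be automated rather than negotiated machine by machine.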

In this way, administrators can make the most of their hardware by re-provisioning systems to meet the needs of the data center -- for instance, a server that was transcoding video 20 minutes ago is a Kubernetes worker node now, a Hadoop MapReduce node later, and something else entirely after that.
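One way that re-provisioning cycle can look in practice is a release-and-redeploy loop against a bare metal provisioner. This sketch drives the MAAS CLI from Python; the profile name, machine ID, and Ubuntu series are placeholders, and the command syntax follows MAAS 2.x to the best of my knowledge, so treat it as a sketch rather than a recipe:

```python
import subprocess

PROFILE = "admin"      # assumed MAAS CLI profile name
SYSTEM_ID = "abc123"   # placeholder; a real ID comes from `maas admin machines read`

def maas(*args):
    """Invoke the MAAS CLI (MAAS 2.x-style syntax, assumed here)."""
    cmd = ["maas", PROFILE, *args]
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Release the machine from its old role (say, video transcoding),
#    wiping it and returning it to the pool.
maas("machine", "release", SYSTEM_ID)

# 2. Redeploy it with a fresh OS image for its next role
#    ("xenial" is just an example Ubuntu series).
maas("machine", "deploy", SYSTEM_ID, "distro_series=xenial")

# 3. From here, configuration management or a modeling tool turns the clean
#    machine into a Kubernetes worker, a Hadoop MapReduce node, or whatever
#    the data center needs next.
```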

These next-generation solutions, affording a high degree of automation, bring new tools, efficiencies, and methods for deploying distributed systems in the cloud. The IT sector is in a transition period between the traditional scale-up models of the past and the scale-out architecture of the future, where solutions are delivered on disparate clouds, servers, and environments simultaneously.

Mark Baker, OpenStack product manager, Canonical.

Published under license from ITProPortal.com, a Future plc Publication. All rights reserved.


