Sticker shock: Managing cloud costs for high availability and high performance

Public cloud services can be affordable for many enterprise applications. But matching the levels of high availability and high performance that the enterprise data center delivers for mission-critical applications can be quite costly. The reason is simple: high availability and high performance, especially for database applications, both consume more resources, and more resources cost more money -- sometimes considerably more.

Is there a way to make public cloud services as cost-effective as -- if not more cost-effective than -- a high-availability, high-performance private cloud? Yes, but it requires carefully managing how each application utilizes the public cloud services.

The objective here is not to minimize cost, but to optimize price/performance. In other words, it is appropriate to pay a higher price when provisioning resources for those applications that require higher uptime and throughput. It is also important to note that, for some applications, a hybrid cloud -- with only some resources in a public cloud -- might be the best way to achieve optimal price/performance.

The sticker shock experience demonstrates the need to thoroughly understand how cloud services are priced, along with how -- and by whom -- they are being used. Self-provisioning can be a useful capability to offer users, but without appropriate controls it becomes too easy to over-utilize resources, some of which can be very expensive.

Every cloud service provider (CSP) has its own business and pricing models. But in general, the most expensive resources involve compute and data movement, which together can account for 80 percent or more of the costs. Provisioning virtual machines (VMs) with lots of CPU cores and memory that run constantly with applications that transfer lots of data back and forth is guaranteed to bust the budget.
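
To see how quickly those two line items add up, consider a back-of-the-envelope sketch in Python. The hourly VM rate, egress rate and transfer volume below are illustrative assumptions, not any CSP's actual prices.

```python
# Back-of-the-envelope only: all rates here are assumed for illustration.
HOURS_PER_MONTH = 730

vm_hourly_rate = 0.40   # large multi-core VM, $/hour (assumed)
egress_per_gb = 0.09    # data transferred out of the cloud, $/GB (assumed)
gb_out = 2_000          # 2 TB leaving the cloud each month (assumed)

always_on_compute = vm_hourly_rate * HOURS_PER_MONTH   # ~$292/month
egress = egress_per_gb * gb_out                        # $180/month

print(f"Compute: ${always_on_compute:,.2f}  Egress: ${egress:,.2f}  "
      f"Total: ${always_on_compute + egress:,.2f}")
```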

Provisioning for Optimal Price/Performance

Here are some suggestions for managing resource utilization in the cloud in ways that can lower costs while maintaining appropriate service levels for all applications, including those that require high uptime and throughput.

Compute is the most costly resource in the cloud, so use it wisely. For new applications, especially those designed with a microservices model, start with small configurations, adding CPU cores and/or memory as needed to achieve satisfactory performance. Existing VM configurations should all be right-sized, beginning with those that consume the most resources. And give serious consideration to re-architecting those applications that have simply become resource hogs -- a potentially cost-justifiable effort even in the private cloud.
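
As a concrete starting point for right-sizing, a sketch like the following can flag running instances whose average CPU suggests they are oversized. It assumes AWS with the boto3 library purely as an example, and the 14-day lookback window and 10 percent threshold are arbitrary placeholders that would need tuning.

```python
# A minimal right-sizing sketch; assumes AWS credentials are configured.
import datetime
import boto3

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

# Look at average CPU over the past two weeks (lookback is illustrative).
end = datetime.datetime.utcnow()
start = end - datetime.timedelta(days=14)

reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for res in reservations:
    for inst in res["Instances"]:
        points = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": inst["InstanceId"]}],
            StartTime=start,
            EndTime=end,
            Period=86400,            # one datapoint per day
            Statistics=["Average"],
        )["Datapoints"]
        if points:
            avg_cpu = sum(p["Average"] for p in points) / len(points)
            if avg_cpu < 10.0:       # illustrative threshold
                print(f"{inst['InstanceId']} ({inst['InstanceType']}): "
                      f"avg CPU {avg_cpu:.1f}% -- candidate for downsizing")
```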

Storage, in direct contrast to compute, is relatively cheap in the cloud. But moving data from the public cloud to the enterprise premises, whether to a private cloud in a data center or directly to users, can be quite costly. So keep data in the cloud as much as possible.

Be careful using cheap storage, however, because I/O might be a separate -- and costly -- expense with some services. If so, make use of in-memory databases, solid state storage and/or caching where possible to optimize the use of compute and storage resources.
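
The payoff of caching is easy to illustrate: when repeat reads are served from memory, billable storage I/O happens only on a cache miss. In this toy Python sketch, fetch_from_storage is a hypothetical stand-in for any read that is billed per request or per IOP.

```python
# Toy read-through cache: repeat reads never touch paid storage I/O.
from functools import lru_cache

def fetch_from_storage(key: str) -> bytes:
    # Hypothetical stand-in: e.g., an object-store GET or a database row fetch.
    return b"..."

@lru_cache(maxsize=4096)  # cache size is illustrative; size it to the hot set
def read_record(key: str) -> bytes:
    return fetch_from_storage(key)  # only cache misses reach billable I/O
```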

Public cloud services are often used in a hybrid cloud, and if used prudently, can afford considerable savings over implementing robust high-availability, load-balancing, disaster recovery and other mission-critical provisions purely in a private cloud. Of course, those applications with less demanding uptime and throughput performance needs, which do not require such provisions, might also be good candidates for migration in whole or in part to the public cloud.

Software licenses can be costly in both private and public clouds. For this reason, many organizations are migrating from Windows to Linux, and from SQL Server to other commercial or open source databases. But for those applications that must continue to operate with "premium" software, check various CSPs to see which pricing models might afford some savings.

All CSPs offer discounts, so be sure to capitalize on any that apply. For example, making service commitments, pre-paying for services and relocating applications to a different region can together achieve a savings of up to 50 percent.
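
One subtlety worth a quick worked example: stacked discounts multiply against the remaining price rather than simply adding up. The rates below are assumptions for illustration only.

```python
# Illustrative only: real discount rates vary by CSP, term and region.
commitment = 0.30  # e.g., a one- or three-year service commitment (assumed)
prepay = 0.15      # e.g., paying for services up front (assumed)
region = 0.10      # e.g., relocating to a cheaper region (assumed)

# Each discount applies to the price remaining after the previous one.
combined = 1 - (1 - commitment) * (1 - prepay) * (1 - region)
print(f"Combined discount: {combined:.0%}")  # about 46% with these rates
```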

Finally, take steps to gain control. An increasingly popular way to do so is by using tags. A full treatment of tagging as a means of visibility and control is beyond the scope of this article, but all major CSPs now offer some ability to determine who is using which resources -- and that visibility is essential to controlling costs in the cloud.
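
To give a flavor of what tagging looks like in practice, here is a minimal sketch using AWS and boto3 purely as an example (Azure and GCP offer equivalent labeling APIs). The instance ID and tag values are hypothetical.

```python
# Minimal tagging sketch; resource IDs and tag values are hypothetical.
import boto3

ec2 = boto3.client("ec2")
ec2.create_tags(
    Resources=["i-0123456789abcdef0"],  # hypothetical instance ID
    Tags=[
        {"Key": "owner", "Value": "dba-team"},
        {"Key": "cost-center", "Value": "cc-1042"},
        {"Key": "environment", "Value": "production"},
    ],
)

# Later, answer "who is using what?" by querying on those tags.
tagging = boto3.client("resourcegroupstaggingapi")
page = tagging.get_resources(
    TagFilters=[{"Key": "cost-center", "Values": ["cc-1042"]}]
)
for resource in page["ResourceTagMappingList"]:
    print(resource["ResourceARN"])
```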

The Worst-Case Use Case

Mission-critical database applications are usually the most costly use case in the cloud, for a variety of reasons. They must run 24x7. They require redundancy, which means replicating the data and provisioning standby server instances. Data replication requires data movement. Keeping standby compute capacity reserved and ready can be expensive. There is no built-in Linux equivalent of Windows Server Failover Clustering. And for SQL Server, the more expensive Enterprise Edition is required to get Always On Availability Groups.
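
A back-of-the-envelope sketch makes the stacking effect concrete; every figure below is an assumed placeholder, not a real price or license quote.

```python
# All figures assumed for illustration; none are real prices or license quotes.
vm_monthly = 300.0       # one database-class VM per month (assumed)
license_std = 200.0      # Standard Edition license per month (assumed)
license_premium = 4.0    # Enterprise vs. Standard multiplier (assumed)
replication_gb = 500     # data replicated to the standby monthly (assumed)
transfer_per_gb = 0.09   # inter-site transfer rate, $/GB (assumed)

standalone = vm_monthly + license_std                # $500/month
ha_pair = (2 * vm_monthly                            # two running instances
           + 2 * license_std * license_premium       # Enterprise on both nodes
           + replication_gb * transfer_per_gb)       # replication traffic

print(f"Standalone: ${standalone:,.0f}/month  HA pair: ${ha_pair:,.0f}/month")
```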

The most versatile and affordable way to provide high availability for any mission-critical application in private, public and hybrid clouds is with a SANless failover cluster. These HA solutions are implemented entirely in software that is purpose-built to create, as implied by the name, a shared-nothing cluster of servers and storage with automatic failover across the LAN and/or WAN to assure high availability at the application level. Most of these solutions provide a combination of real-time block-level data replication, continuous application monitoring, and configurable failover/failback recovery policies.
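
For illustration, the monitor-and-failover loop that such products automate might look something like the toy sketch below. Every name and policy value is hypothetical, and real solutions add the block-level replication, quorum and split-brain protections that a sketch necessarily omits.

```python
# Toy sketch of the monitor-and-failover pattern; all names are hypothetical.
import socket
import time

CHECK_INTERVAL = 5   # seconds between probes (an illustrative policy setting)
MAX_FAILURES = 3     # consecutive failures tolerated before failover

def is_healthy(host: str, port: int = 1433) -> bool:
    """Application-level probe; here, a TCP connect to the SQL Server port."""
    try:
        with socket.create_connection((host, port), timeout=2):
            return True
    except OSError:
        return False

def promote(host: str) -> None:
    """Hypothetical: promote the replicated secondary to primary."""
    print(f"failing over: promoting {host}")

def monitor(primary: str, secondary: str) -> None:
    failures = 0
    while True:
        failures = 0 if is_healthy(primary) else failures + 1
        if failures >= MAX_FAILURES:
            promote(secondary)                      # automatic failover
            primary, secondary = secondary, primary
            failures = 0
        time.sleep(CHECK_INTERVAL)
```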

Some of the more robust SANless failover cluster solutions also offer advanced capabilities, such as ease of configuration and operation, support for the less expensive Standard Edition of SQL Server, WAN optimization to maximize performance and minimize bandwidth utilization, manual switchover of primary and secondary server assignments for planned maintenance, and performing regular backups without disruption to the application.

Following these guidelines should help any organization optimize price/performance for all applications in its hybrid cloud configurations, and by doing so, avoid experiencing sticker shock ever again.

About the authors:

David Bermingham is Technical Evangelist at SIOS Technology. He is recognized within the technology community as a high-availability expert and has been honored to be elected a Microsoft MVP for the past 8 years: 6 years as a Cluster MVP and 2 years as a Cloud and Datacenter Management MVP. David holds numerous technical certifications and has more than thirty years of IT experience, including in finance, healthcare and education.

Joey D’Antoni is Principal Consultant at DCAC. With extensive experience in cloud design and architecture, specifically focused on hybrid cloud models, and expert certifications from Microsoft and virtualization software providers, Joey is considered to be a thought leader in cross-platform IT and a frequent speaker at major IT conferences. Joey holds a BS in Computer Information Systems from Louisiana Tech and an MBA from North Carolina State University.
