What's a good, modern backup strategy?
A revolution is in the offing for one of the most time-honored and familiar IT rituals: data backup. As digital businesses continue to multiply the volume of incoming data, the average enterprise backup has reached a petabyte or more in scale. This is pushing conventional storage techniques past their inherent limitations, and certainly beyond sustainable cost.
Backups matter, and CTOs and CIOs know it. Data is central to business success, so losing it -- or even losing access to it for hours or days -- hurts productivity and profitability. As the Gartner 2016 CIO Agenda Report puts it, "Digital business is a reality now, pointing the way to competitive advantage."
Today’s digital businesses need a cost-effective, reliable backup solution that can scale as the business and its data grow. Content is growing exponentially, driven in part by the video and data requirements of serving digital customers. Ensuring that data is always available, no matter what disaster occurs, requires a backup platform that matches a company’s scale, integrity and speed requirements.
Scale. Operational efficiency demands a solution that can grow cost-effectively to meet evolving organizational and business requirements. Two major factors come into play here:
- At petabyte scale, a storage system that is 20-30 percent more efficient translates into a significant cost difference (see the cost sketch below).
- The flexibility to choose among standard hardware options and to expand easily as the business grows, avoiding time-consuming forklift upgrades down the road.
Integrity. Organizations can no longer tolerate lower integrity for backups than for primary systems. Look for high availability and durability.
Speed. Speed matters at both ends of the process: meeting recovery point objectives (RPOs) on the way in and delivering lightning-fast recovery time objectives (RTOs) on the way out -- expect recovery in minutes, not hours or days.
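To make the efficiency point concrete, here is a back-of-the-envelope sketch in Python. The capacity and dollar figures are hypothetical placeholders, not quoted prices; the point is simply that a fixed efficiency percentage compounds into large absolute savings at petabyte scale.

```python
# Back-of-the-envelope look at why 20-30 percent storage efficiency
# matters at petabyte scale. All figures below are assumed placeholders.
capacity_tb = 2000      # a hypothetical 2 PB backup estate, in TB
cost_per_tb = 100.0     # assumed all-in cost per TB, in dollars

baseline = capacity_tb * cost_per_tb
for efficiency in (0.20, 0.30):
    saving = baseline * efficiency
    print(f"{efficiency:.0%} more efficient: saves ${saving:,.0f} "
          f"of a ${baseline:,.0f} baseline")
```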
Backup used to be straightforward
These are just a few of the areas where companies are rethinking backup and recovery:
- Looking for a new solution or new ways to use an existing solution
- Rethinking retention plans
- Examining the intersection of backup and archive strategies
- Leveraging backup as an additional data source for other activities like DevOps and Test/Dev
According to Gartner, by 2018, 50 percent of organizations will augment their current backup application with additional products or replace it with another solution, compared to what they deployed at the beginning of 2015. By 2018, more than 50 percent of enterprise storage customers will consider bids from storage vendors that have been in business for less than five years, up from less than 30 percent today.
By 2020, over 40 percent of organizations will supplant long-term backup with archiving systems -- up from 20 percent in 2015. By 2020, 30 percent of organizations will leverage backup for more than just operational recovery (e.g., disaster recovery, test/development and DevOps), up from less than 10 percent at the beginning of 2016.
The role of the cloud
Digital businesses are increasingly moving to public cloud, private cloud and hosted private cloud models for backup. Gartner also states that by 2019, 30 percent of midsize organizations will leverage public cloud IaaS for backup, up from 5 percent today. By 2018, the number of enterprises using the cloud as a backup destination will double, up from 11 percent at the beginning of 2016.
Evolution of backup
Technology moves fast. Explosive unstructured data growth is creating major new challenges in data availability and recovery for large distributed data centers that need hundreds of terabytes or petabytes of backup storage.
Companies using dedicated backup appliances and NAS as backup targets can be saddled with challenges, including long recovery times, limited flexibility and high cost. At petabyte scale, the TCO of both dedicated appliances and NAS becomes prohibitive. Expansion is constrained by appliance form factors. And traditional storage devices offer fewer data protection options (such as geo-replication and erasure coding), so efficiency and recovery suffer as storage capacity and data distribution grow.
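To illustrate why erasure coding belongs on that list, here is a minimal sketch comparing its raw-capacity overhead with triple replication. The 9+3 layout is an illustrative assumption, not a recommendation for any particular product:

```python
# Raw bytes stored per usable byte: triple replication vs. a k+m
# erasure code (k data fragments plus m parity fragments).
def replication_overhead(copies: int) -> float:
    return float(copies)    # each usable byte is stored 'copies' times

def erasure_overhead(k: int, m: int) -> float:
    return (k + m) / k      # fragments spread across k + m devices

print(f"3x replication: {replication_overhead(3):.2f}x raw capacity, "
      f"survives loss of any 2 copies")
print(f"9+3 erasure code: {erasure_overhead(9, 3):.2f}x raw capacity, "
      f"survives loss of any 3 fragments")
```

Both schemes tolerate multiple failures, but the erasure-coded layout does so at roughly 1.33x raw capacity instead of 3x, which is where much of the 20-30 percent efficiency gain mentioned earlier can come from.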
Large-scale backups are difficult and complex operations. If disk drives fail -- and let’s face it, a small fraction always will -- the entire system slows down. If your backup drives number in the hundreds, failure-induced downtime is frequent enough to compromise availability and make backup windows difficult to meet.
In addition, hard drive recovery can take days due to the lengthy rebuild time of disks in high-density RAID arrays. Another challenge is the regularly scheduled maintenance downtime needed to keep the system up during business hours. With global commerce now taking place 24×7, this facet of traditional storage backup has also become problematic.
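A first-order estimate shows where those multi-day rebuilds come from. The drive size and rebuild throughput below are assumed values; real rebuilds are often slower still, because the array must keep serving I/O while it rebuilds:

```python
# First-order RAID rebuild-time estimate. Both inputs are assumptions.
drive_tb = 10                 # hypothetical high-density drive, in TB
rebuild_mb_per_s = 50         # assumed effective rebuild rate, MB/s

seconds = (drive_tb * 1e12) / (rebuild_mb_per_s * 1e6)
print(f"Estimated rebuild time: {seconds / 3600:.1f} hours "
      f"({seconds / 86400:.1f} days)")
```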
Getting ahead of the curve
Clearly, there’s a need for new strategies, which prompts the question: What would an ideal petabyte-scale backup environment look like? To begin with, it would scale out -- rather than up -- quickly and affordably. It would absorb hardware failures without missing a beat or impacting the business. And it would be completely reliable in all other respects.
Though most enterprise data remains file-based, a growing wave of applications stores unstructured content -- for example, video on demand, exam screen snapshots for hospitals and labs, Internet of Things (IoT) data, long-term archiving of voice recordings for regulatory compliance, and video surveillance.
It is therefore imperative that, in addition to meeting the requirements above, the backup infrastructure be future-proofed for these applications. Object storage in the public, private or hybrid cloud ticks all of these boxes and is leading the backup revolution for the age of petabyte-scale data.
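For a sense of what using object storage as a backup target looks like in practice, here is a minimal sketch using the widely supported S3 API via Python's boto3. The endpoint, bucket and key names are placeholders, not references to any specific product:

```python
# Minimal sketch: pushing a backup artifact to an S3-compatible object
# store. Endpoint and bucket/key names below are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.example.com",  # placeholder endpoint
)

with open("db-backup.tar.gz", "rb") as artifact:
    s3.put_object(
        Bucket="backups",                 # placeholder bucket
        Key="db/2016/07/01/full.tar.gz",  # date-partitioned key layout
        Body=artifact,
    )
```

Because object stores address content by key rather than by file path, the same namespace scales from terabytes to petabytes without the directory and volume limits of traditional NAS targets.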
Paul Speciale, VP of Product Management at Scality.
Published under license from ITProPortal.com, a Future plc Publication. All rights reserved.