Why data mobility matters
Digital transformation strategies may have been accelerated by the pandemic, but as workers return to offices, organizations around the world are beginning to contemplate moving applications back on-premises. And with financial pressures starting to bite, where and how data is stored is becoming even more important. It's not a purely financial decision, however: technical and security requirements also need to be considered to maintain performance and protect against cyber threats like ransomware.
As a result, businesses are increasingly reassessing their cloud strategies and weighing their options: moving workloads to the cloud, exploring different providers, repatriating workloads to their own data centers, or a mix of all three based on their unique needs. But to make the most of these options, it's crucial that they pay close attention to their data mobility.
Carried on the winds of change
To understand why mobility matters, it's worth looking back at how we got here.
The pandemic opened the door wide for people to work remotely. Before, when everything was contained within the data center, you only had to worry about that office and perhaps other offices connecting in. As Covid-19 forced everyone into remote locations and created a need for a constant stream of services and data, some 81 percent of companies reportedly accelerated their migration to the cloud, leading to "The Great Relocation."
But when people came back to the office, questions started to be asked. Companies saw their cloud costs increasing, yet many still had physical servers sitting idle, gathering dust. So why not bring the workloads and data back on-site, to how it was before, using the hardware investment that's already been made? This is "repatriation," and while it's not a route everyone has gone down, it's a path well-trodden in the last year or so, both for its financial benefits and for the greater control it gives IT teams. The Veeam Data Protection Trends Report, for example, found that in 2023, the average percentage of servers within businesses that are on-premises increased for the first time in over three years.
However, the other option at this fork in the road is to "double down" on the cloud. Maybe the workforce isn't coming back to the site, or the company and related services have become so distributed that it still doesn't make sense to run on-premises servers. If a business got rid of its physical infrastructure during the pandemic, purchasing and maintaining it all again may not be worth it. In this case, companies still want to optimize costs, but they need to do so in the cloud, by re-architecting into more cloud-native solutions such as Platform as a Service (PaaS) or a managed database service, where the user does not need to worry about the underlying hardware, operating system, and patches.
Taking back control
While the pandemic meant most companies had to make a hard pivot to the cloud, now that the dust has settled, it's all about having options. IT teams can move and reorganize their infrastructure as they see fit to meet their unique needs and requirements. With many large organizations running hybrid or multi-cloud strategies, it's not one-size-fits-all: they can select the right home for each workload on a case-by-case basis. But while this is possible, it's not necessarily easy. Many businesses will have found migrating to the cloud for the first time -- even if it was a basic "lift and shift" -- a real undertaking. To make the most of the different options on the table, you need to make sure you can easily move workloads when needed.
One of the first things companies need to get right here is avoiding cloud lock-in. This is easier said than done, due to the many ways an organization can become "locked in". These include: integration with proprietary services and APIs that can be difficult to replicate, vendor-specific skills and knowledge meaning teams don't have the expertise to work with a different cloud, and the sheer "data gravity" of being all-in on a single cloud, which makes moving workloads in bulk a huge lift. Alternatively, it is possible for IT teams to lock themselves out of other environments and clouds by building architecture that doesn't work or translate anywhere else. In other words, you can take it out of its current cloud, but it won't fit anywhere else.
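One common way to limit that first kind of lock-in is to have application code depend on a thin, cloud-neutral interface rather than on a provider's SDK directly. Below is a minimal sketch of the idea in Python; the ObjectStore interface and the class names are illustrative, while the boto3 and azure-storage-blob calls are those libraries' real APIs.

```python
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """Cloud-neutral interface the application codes against."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class S3Store(ObjectStore):
    """Adapter for AWS S3 (uses the real boto3 API)."""

    def __init__(self, bucket: str):
        import boto3
        self._s3 = boto3.client("s3")
        self._bucket = bucket

    def put(self, key: str, data: bytes) -> None:
        self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)

    def get(self, key: str) -> bytes:
        return self._s3.get_object(Bucket=self._bucket, Key=key)["Body"].read()

class AzureBlobStore(ObjectStore):
    """Adapter for Azure Blob Storage (uses the real azure-storage-blob API)."""

    def __init__(self, connection_string: str, container: str):
        from azure.storage.blob import BlobServiceClient
        service = BlobServiceClient.from_connection_string(connection_string)
        self._container = service.get_container_client(container)

    def put(self, key: str, data: bytes) -> None:
        self._container.upload_blob(name=key, data=data, overwrite=True)

    def get(self, key: str) -> bytes:
        return self._container.download_blob(key).readall()

# Application code depends only on ObjectStore, so changing provider
# becomes a configuration decision rather than a rewrite.
def archive_report(store: ObjectStore, report_id: str, payload: bytes) -> None:
    store.put(f"reports/{report_id}.json", payload)
```

The abstraction won't make migration free -- data still has to move -- but it keeps the application itself portable.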
Once this challenge is overcome, businesses then need to think about how to move data from one environment to another in a safe way that doesn't result in critical workloads being lost or made temporarily unavailable. The safest way to do this is with application-consistent backups. That way, you're not affecting anything in production: you migrate with a replicated clone, and you can test that everything works in the new environment before taking the old one offline.
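In practice, that "clone, verify, then cut over" flow looks something like the sketch below. The restore_backup_to() and switch_traffic_to() helpers are hypothetical stand-ins for whatever backup and traffic-management tooling you use; only the health check, via the requests library, is a real call.

```python
import requests

def restore_backup_to(environment: str) -> str:
    """Hypothetical: restore the latest application-consistent backup
    into the target environment and return the clone's base URL."""
    raise NotImplementedError("wire up your backup tooling here")

def switch_traffic_to(environment: str) -> None:
    """Hypothetical: repoint DNS or the load balancer at the new home."""
    raise NotImplementedError("wire up your traffic management here")

def clone_is_healthy(base_url: str) -> bool:
    """Probe the restored clone before any cutover decision is made."""
    try:
        return requests.get(f"{base_url}/health", timeout=10).status_code == 200
    except requests.RequestException:
        return False

def migrate(target: str) -> None:
    # Production keeps serving traffic; all the work happens on a clone.
    clone_url = restore_backup_to(target)
    if not clone_is_healthy(clone_url):
        raise RuntimeError("clone failed verification; production untouched")
    # Only cut over once the new environment checks out; the old one
    # stays online as a fallback until you choose to decommission it.
    switch_traffic_to(target)
```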
The missing "R"
While ensuring mobility delivers value simply by supporting easier migrations (you don't plan on moving house constantly, but if you ever need to, you have to be able to get the furniture out of the building), it can be transformative in other ways. For example, being able to replicate and host workloads and applications allows teams to set up separate environments for things such as testing and analytics, without slowing down the day-to-day application. Essentially, it means businesses can better leverage and unlock the value of the often huge amounts of data they hold.
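As one concrete illustration (assuming an AWS RDS database; the instance and snapshot identifiers below are made up), a team could spin up a disposable analytics copy from the latest production snapshot, so heavy queries never touch the live instance:

```python
import boto3

rds = boto3.client("rds")

# Find the most recent automated snapshot of the production database.
snapshots = rds.describe_db_snapshots(
    DBInstanceIdentifier="prod-db",
    SnapshotType="automated",
)["DBSnapshots"]
latest = max(snapshots, key=lambda s: s["SnapshotCreateTime"])

# Restore it as a separate, disposable instance for tests or analytics,
# on a smaller instance class since it isn't serving end users.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="analytics-db",
    DBSnapshotIdentifier=latest["DBSnapshotIdentifier"],
    DBInstanceClass="db.t3.medium",
)
```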
There is one final and non-negotiable reason for IT teams to ensure they have adequate data mobility across their environments. When talking about workload migration, it's hard not to speak in R's. Conventional wisdom just about agrees on the seven R's of cloud migration (rehost, relocate, replatform, refactor, repurchase, retire, and retain), but when looking at the bigger picture beyond a one-way move to the cloud, we can add a few more to the list. The one that gets ignored, often until it's too late? Recoverability.
In the event of an incident, most organizations have backups in place to restore and recover. This can be on a small scale, like a deleted virtual machine or a patch gone wrong. At the other end of the spectrum is a large-scale, site-wide failure: a "fire, flood, and blood" scenario or a ransomware attack.
According to the Veeam Data Protection Trends Report 2023, 85 percent of businesses suffered at least one such incident in the last year. In these scenarios, having a backup only gets you so far. If the old environment is unavailable, contaminated, or even "cordoned off" as a crime scene, you need to recover workloads somewhere new. Where businesses choose to do this is a real mix, with a roughly 50/50 split between recovering on-premises and recovering in the cloud, according to the same report. Regardless, you need to be able to move that data safely and efficiently to keep downtime to a minimum -- a business-critical outage is no time to be learning new lessons, so it pays to be prepared.
In today's world of hybrid cloud, businesses can have more flexibility with where and how they manage workloads, but they need to ensure they have the data mobility to take advantage of this. As organizations reassess and reshuffle their cloud footprint, they need to ensure they can safely move and recover data from one environment to another without any nasty surprises.
Michael Cade is Global Field CTO Cloud Native Product Strategy, Veeam.