Avoiding the big bang data migration

Data migrations are by their very nature high-risk initiatives and, if not planned effectively, they can create a significant headache for both the IT team and the wider organization. That's because modern technology transfers are massively complex undertakings. However, data migration problems can often be traced back to confusion and disorganization surrounding the migration plan (if one is in place at all) and a failure to adequately prepare for the move.

Additionally, every company's data landscape is unique. It may encompass everything from legacy systems to homegrown one-off databases, each with its own level of support. Documentation may be non-existent, institutional knowledge may be limited, and key staff may have left. All of this makes the task far more complex to undertake.

Other challenges IT teams fear, or run up against, include data loss, compatibility issues and hardware constraints. In fact, according to analyst firm Gartner, 83 percent of data migrations either fail outright or exceed their allotted budgets and implementation schedules.

Adopting an alternative approach

But does a data migration need to be such a 'big bang' event? Does all the data need to be transferred at once, especially when migrations carry so much inherent risk? Could a lower-risk, iterative and agile alternative be adopted instead?

There are some key points to consider when moving data. For example, does it make sense to migrate all legacy data, and has the organization considered what data will actually be reused? Likewise, what changes to the data are required to accomplish the migration?

At my company we partner with Atlassian, implementing and supporting applications like Jira, Jira Service Management, and Confluence. Therefore, when I talk about data migrations, I am going to do so with an Atlassian focus.

Understanding what data you need to transfer

The quality of the data that an organization holds within its Atlassian applications significantly influences the risk and effort associated with migrating to another platform. Complex applications with long histories and bespoke hidden scripting can be a minefield. To mitigate this risk, it is vital for the IT team to consider how much history they are going to transfer to the new system. Without a thorough understanding of what data they are moving from the source to the target system, the negative impact of inaccurate, inconsistent and irrelevant data is amplified.

Therefore, it is important to ensure that the data populating the new system is fit for purpose and delivers quantifiable improvements over the previous system.

Starting afresh with the minimum viable data affords the opportunity to focus on data and workflows that offer the most business value and that deliver efficiency and productivity improvements.
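
As a rough illustration of how to identify that minimum viable data, activity levels can act as a proxy for business value. The short Python sketch below is my own assumption of how this could be scripted, not a Clearvision tool: it uses Jira's REST search endpoint to count recently updated issues per project, and the domain, credentials, project keys and 180-day window are all hypothetical placeholders.

import requests
from requests.auth import HTTPBasicAuth

BASE_URL = "https://your-domain.atlassian.net"  # hypothetical instance
AUTH = HTTPBasicAuth("you@example.com", "api-token")  # hypothetical credentials

def recently_updated_count(project_key, days=180):
    """Count issues touched in the last `days` days -- a rough proxy for value."""
    resp = requests.get(
        f"{BASE_URL}/rest/api/2/search",
        params={
            "jql": f"project = {project_key} AND updated >= -{days}d",
            "maxResults": 0,  # we only need the total, not the issue bodies
        },
        auth=AUTH,
    )
    resp.raise_for_status()
    return resp.json()["total"]

# Projects with little or no recent activity are candidates to archive, not migrate.
for key in ["HR", "FIN", "DEV"]:  # hypothetical project keys
    total = recently_updated_count(key)
    print(f"{key}: {total} recently updated issues -> {'migrate' if total else 'archive?'}")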

Only move the valuable data

The big bang data migration approach involves moving the entire dataset from the legacy system to the target system in one go. This is typically carried out over a weekend and, in order to mitigate as much risk as possible, several test migrations are often conducted beforehand, which increases cost and takes significant effort to complete. For example, a Jira migration can take up to 30 days of effort and, when considering the risk, cost and effort involved, a more agile, iterative and pragmatic approach makes a lot of sense.

An iterative data migration means that you only move the valuable data, managed in smaller increments. However, this presents two key challenges: how do you keep the data in both the source and target systems operable until the migration is complete, and how do you coordinate migrating distinct groups of business users and functionality without breaking overall business continuity?

Taking a step-by-step approach

For an iterative data migration to succeed, the two systems need to run in tandem for the transition period without impacting each other. Therefore, IT teams should move business units or departments one by one, starting with new teams and projects on the new system and decommissioning old data on the legacy system.
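
To make the pattern concrete, here is a minimal Python skeleton of that wave-by-wave orchestration. The helper functions are hypothetical stand-ins for whatever actually moves the data (in an Atlassian context, typically a tool such as the Jira Cloud Migration Assistant); here they only log what they would do, so the sequencing can be rehearsed as a dry run.

def export_projects(keys):
    # Hypothetical stand-in: in reality a migration tool exports these projects.
    print(f"exporting {keys} from the legacy system")
    return {"projects": keys}

def import_projects(snapshot):
    print(f"importing {snapshot['projects']} into the target system")

def verify_projects(keys):
    print(f"reconciling issue and attachment counts for {keys}")

def freeze_on_legacy(keys):
    print(f"marking {keys} read-only on the legacy system")  # visible, but no new writes

# One wave per business unit or department: both systems stay live between
# waves, so a failed wave only affects the unit currently being moved.
MIGRATION_WAVES = [
    ["HR"],            # hypothetical keys: start small and low risk
    ["FIN", "LEGAL"],
    ["DEV", "OPS"],    # highest-traffic teams move last
]

for wave in MIGRATION_WAVES:
    import_projects(export_projects(wave))
    verify_projects(wave)
    freeze_on_legacy(wave)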

This iterative strategy, migrating only in-flight or minimum viable data, allows effort that would otherwise be spent on a big bang migration to be put toward delivering tangible business value. Often, we find that 70 to 80 percent of the data that was not migrated can be archived or decommissioned.

I know first-hand that there can be extended downtime and significant complexity and effort associated with migrating large volumes of data, not to mention the time and cost to clean up unwanted legacy data. The iterative migration process not only delivers significant benefits at a lower cost, but it also significantly lowers the impact and risk to the business.

Gary Blower is Solutions Architect at Clearvision. If you are interested in learning more about iterative Atlassian migrations, download the company's guide: Avoiding the Big-Bang.
