In our latest user questions posts, we’ve talked a lot about why enterprise businesses should move to the cloud, but not a lot about how.
This week’s submitted question is a great way to start exploring the “how” of migrating to the cloud, though it deals with a lot of the “why” as well.
This technical professional asks, “What are some best practices to migrate from a physical datacenter to a private, or public cloud offering?”
Bluelock Solutions Architect Jake Robinson tackles this question with three key best practices for ensuring a successful migration of workloads.
“Even before you begin to move your application, a lot of best practice goes into choosing which application to migrate to the cloud,” explains Robinson.
Regardless of whether you are migrating that app to a public cloud or a private cloud, you should assess the application’s data gravity and connectivity.
BEST PRACTICE: Understand the gravity of your data.
Data gravity is a concept first discussed by Dave McCrory in 2010. It’s the idea that data has weight: the bigger the data, the harder it is to move, and the more things stick to it.
McCrory states in his original blog post about Data Gravity, “As data accumulates (builds mass) there is a greater likelihood that additional Services and Applications will be attracted to this data.”
McCrory goes on to explain that large datasets can be virtually impossible to move because of the latency and throughput issues that arise during movement. On his website, datagravity.org, McCrory explains that to increase an application’s portability, it should have lower data gravity.
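The throughput side of that argument is easy to make concrete with back-of-the-envelope arithmetic. Here is a minimal sketch; the data sizes and link speed are illustrative assumptions, not figures from McCrory or Bluelock:

```python
def transfer_time_hours(data_gb, throughput_mbps):
    """Rough time to copy a dataset over a link, ignoring protocol overhead."""
    data_megabits = data_gb * 1024 * 8  # GB -> MB -> megabits
    seconds = data_megabits / throughput_mbps
    return seconds / 3600

# A 20 GB OS image over a 100 Mbps link: under half an hour.
print(round(transfer_time_hours(20, 100), 2))         # ~0.46 hours
# A 10 TB transactional database over the same link: roughly ten days.
print(round(transfer_time_hours(10 * 1024, 100), 1))  # ~233 hours
```

The point isn’t the exact numbers; it’s that transfer time grows linearly with data size, so the heaviest part of the application dictates whether a migration is practical at all.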
Robinson continues this thought: “When moving tier one applications from a physical datacenter to a private or public cloud, we have to take data gravity into account because it will impact the migration.”
Robinson explains that when you are talking about migrating an application, you can think of the full stack of components as a single VM or a group of VMs that form a vApp (see diagram on right).
“Think of a VM with an OS. If we were to migrate that entire VM to the public cloud, we’re copying anywhere from 8-20 GB of data with that OS, and for what purpose?” Robinson describes. “The cloud you’re migrating the app to might already have the OS available to it.”
Rather than transferring the data for the OS, Robinson recommends using metadata wherever possible to describe the OS you want and its configuration, backed by a template or image on the public or private cloud side. The same metadata concept can be applied to middleware instances, too.
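Robinson’s metadata approach can be sketched as a small descriptor: instead of shipping the OS image, you ship a description that the target cloud resolves against its own template catalog. The format below is hypothetical, for illustration only; the field names and template identifiers are assumptions, not a Bluelock or vCloud API:

```python
# Hypothetical VM descriptor: ship this instead of an 8-20 GB OS image.
vm_descriptor = {
    "name": "orders-app-01",
    "os_template": "centos-6-x86_64",  # resolved against the target cloud's catalog
    "cpu": 2,
    "memory_gb": 8,
    "middleware": ["tomcat-7"],        # also instantiated from templates, not copied
    "data_volumes": ["orders-db"],     # the only part that actually crosses the wire
}

# Everything except the data volumes is reconstructed on the target side.
to_transfer = vm_descriptor["data_volumes"]
print(to_transfer)  # ['orders-db']
```

In practice this is the role that packaging formats like OVF play: the descriptor travels, and the heavy OS and middleware bits are instantiated from what the destination cloud already has.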
“What we’re left with is our actual data and the app itself,” he explains. “The app is static, and static information is easy to move because you can copy it once. There’s no need to replicate.”
The data itself, however, is the most difficult part of the migration. There’s no easy way to shrink it down, so you need to evaluate the weight of the data in the app you’re considering migrating. If you’re a high-transaction company, or if it’s a high-transaction application, that can be a lot of data to replicate; the app’s data constitutes 99% of the application’s data gravity.
Part of the best practice of understanding the gravity of your application is to understand the ramifications of moving a tier one application with a large amount of data, and to establish where the best home for that application is.
Watch The Bluelock Blog for Jake's next recommended best practice in Part Two: Understanding the relationship between your applications, coming soon!