
What is Cloud Repatriation?

Published 09/22/2023

Originally published by Sangfor Technologies.

Written by Nicholas Tay Chee Seng, CTO, Sangfor Cloud.

The Cloud Repatriation Trend in 2023

Browse most IT news websites and chances are you will come across stories of enterprise organizations migrating en masse to the public cloud as part of their cloud adoption strategy. But does the promise of the public cloud, a market dominated by hyperscalers like AWS, Azure, and GCP, still hold true? By and large, yes. Organizations of all sizes still rely on public cloud infrastructure to host the applications and data behind their mission-critical workloads. However, the momentum behind going all-in on a public-cloud-only strategy and abandoning on-premises data centers and/or colocation services is slowing. Organizations now recognize the benefits of a hybrid cloud approach, giving rise to more cloud repatriation initiatives. In this article, we look at what cloud repatriation is, what is driving it, and how to approach it.


What is Cloud Repatriation?

Cloud repatriation is the process of moving data, applications, or workloads from a public cloud environment back to on-premises infrastructure. It can also refer to transferring IT resources from a public cloud hosted outside the country of operations back to a local data center, whether that is on-premises, an in-country public cloud, or a colocation facility.


Drivers of Cloud Repatriation

Cloud repatriation has garnered significant attention in recent years. There are several key reasons why organizations are moving workloads back from hyperscaler cloud-based environments to on-premises or local cloud infrastructure. Let’s explore a few of them.

  1. Cost Savings: One of the most compelling reasons for repatriating workloads is the high cost of hyperscaler clouds. Organizations often encounter unexpected financial pitfalls, such as steep data transfer fees (particularly for egress) that result in significant "bill shock". In many cases, migrating the wrong workloads, or hosting them in inappropriate configurations, costs more in the public cloud than on-premises. As enterprises scale, many report that it becomes increasingly cost-effective to run workloads on-premises, with predictable costs and no unforeseen expenses like data transfer fees (a simple back-of-the-envelope comparison is sketched after this list).
  2. Data Security & Sovereignty Requirements: Concerns about data sovereignty, combined with stringent government regulation, demand a more controlled environment for hosting and storing customer data. The preference often leans towards in-country hosting to meet compliance and security requirements.
  3. Better Performance: Public cloud services, while providing extensive geographic reach, can sometimes introduce performance issues. Latency can degrade user experience, especially for edge computing use cases that demand real-time processing. The proximity of a local data center effectively addresses this issue.
  4. Resource Optimization: Repatriation isn't just a reactionary move; it can also be strategic. Companies are continuously looking to optimize IT resources, ensuring that workloads are aptly distributed across various environments and geographies.
  5. Skill Shortage: The diverse nature of hyperscaler cloud platforms demands a specific skill set, and a scarcity of talent to manage these platforms can lead organizations to reconsider their cloud-first approach.
  6. Vendor Lock-in Avoidance: Vendor lock-in is the situation where companies become overly reliant on a single cloud provider's infrastructure and services. This can prompt businesses to reassess and, if necessary, repatriate for long-term flexibility.
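
To make the cost argument concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure in it (compute spend, storage volume, egress volume, per-GB rates, on-premises capex and opex) is a hypothetical placeholder rather than published provider pricing; the point is the comparison method, not the numbers.

```python
# Back-of-the-envelope comparison of monthly public cloud spend versus an
# amortized on-premises estimate. All numbers below are hypothetical
# placeholders for illustration only -- not real provider pricing.

def monthly_cloud_cost(compute_usd, storage_gb, storage_usd_per_gb,
                       egress_gb, egress_usd_per_gb):
    """Sum of compute, storage, and data egress charges for one month."""
    return (compute_usd
            + storage_gb * storage_usd_per_gb
            + egress_gb * egress_usd_per_gb)


def monthly_onprem_cost(capex_usd, amortization_months, opex_usd_per_month):
    """Hardware capex spread over its useful life, plus monthly opex
    (power, cooling, space, staff)."""
    return capex_usd / amortization_months + opex_usd_per_month


if __name__ == "__main__":
    cloud = monthly_cloud_cost(
        compute_usd=20_000,         # committed/reserved compute
        storage_gb=200_000,         # 200 TB of object storage
        storage_usd_per_gb=0.02,
        egress_gb=80_000,           # 80 TB leaving the cloud each month
        egress_usd_per_gb=0.08,     # egress is often the "bill shock" item
    )
    onprem = monthly_onprem_cost(
        capex_usd=800_000,          # servers, storage, network hardware
        amortization_months=48,     # four-year depreciation
        opex_usd_per_month=10_000,  # power, cooling, space, staff
    )
    print(f"Estimated cloud spend:   ${cloud:>10,.0f} / month")
    print(f"Estimated on-prem spend: ${onprem:>10,.0f} / month")
```

With these placeholder inputs, egress alone accounts for several thousand dollars a month, which is exactly the kind of line item that drives the "bill shock" described above; substitute your own rate card and infrastructure costs before drawing any conclusions.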


Cloud Repatriation Examples

Dropbox

In 2015, Dropbox, a cloud-based file storage and synchronization service, undertook a massive migration from AWS to its own private cloud infrastructure called "Magic Pocket." The primary drivers for this move included cost savings, performance optimization, and greater control over hardware and network configurations. As a company storing exabytes of data for over 500 million users, Dropbox faced escalating costs on AWS.

By transitioning to a private cloud, Dropbox reduced operational expenses while tailoring its system for better performance and security. The migration itself was executed over two years, involving meticulous planning to move petabytes of data without disrupting user experience. Financial statements revealed that the move had saved Dropbox almost $75 million in operational expenses over the first two years. The result was a more cost-effective, scalable, and high-performing storage service, validating the company's strategic decision to move away from public cloud services.


The Cloud Repatriation Strategy

Cloud repatriation is a complex and time-consuming process that requires careful assessment, planning, and execution to avoid disruption to business operations and data loss. As such, organizations must critically evaluate the overall benefits and risks before making the critical decision to move away from the cloud. Drawing on successful use cases, the following are steps and best practices to consider as part of a cloud repatriation strategy:

  1. Developing and evaluating the business case: Organizations should rigorously evaluate the business case to determine the motivations, projected outcomes, and benefits before starting the repatriation process. This assessment should include an end-to-end review of the current cloud environment, encompassing aspects like performance, cost, operations, compliance, and other relevant factors. It’s also important to consider the loss of some public cloud benefits, such as scalability and elasticity.
  2. Creating a migration plan: Once the decision has been made, the next step is to formulate a detailed project plan that outlines the specific steps, timelines, resources, and budgets required to complete the migration. The plan must specify the migration tools, network connectivity, data transfer costs and durations (a rough transfer-time estimate is sketched after this list), data backup and recovery options, and infrastructure requirements. It should also incorporate contingency measures to address unexpected problems, issues, or delays.
  3. Choosing the right infrastructure: Determining the optimal infrastructure for repatriation is crucial. Options include on-premises data centers, private clouds, and local cloud providers. Criteria such as compliance, performance, security, and scalability should guide the infrastructure selection process.
  4. Executing a successful migration: The actual repatriation process involves migrating applications, data, and workloads from the existing public cloud environment to the new on-premises or local cloud environment. This can span weeks, months, or even years, depending on the size and complexity of the target landscape and environment. Crucially, organizations need skilled personnel in place, whether in-house or through a professional services provider, to manage and execute the migration.
  5. Validating and optimizing the new environment: After the completion of the migration, organizations need to test and optimize their new environment to ensure it meets current and future performance, security, and governance/compliance requirements. Additionally, they should establish processes for the ongoing management and monitoring of the new environment.
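
As a small planning aid for step 2, the sketch below estimates how long a bulk data transfer would take over a dedicated link; the dataset size, link speeds, and 70% sustained-utilization factor are hypothetical assumptions to be replaced with your own figures.

```python
# Rough bulk data transfer time estimate for migration planning.
# The dataset size, link speeds, and utilization factor are hypothetical;
# replace them with your own figures.

def transfer_days(data_tb: float, link_gbps: float, utilization: float = 0.7) -> float:
    """Days needed to move `data_tb` terabytes over a `link_gbps` link
    that sustains `utilization` of its nominal throughput."""
    data_bits = data_tb * 1e12 * 8                 # TB -> bits (decimal units)
    effective_bps = link_gbps * 1e9 * utilization  # sustained throughput in bits/s
    return (data_bits / effective_bps) / 86_400    # seconds -> days


if __name__ == "__main__":
    for link in (1, 10, 100):  # link speed in Gbps
        print(f"500 TB over a {link:>3} Gbps link: "
              f"~{transfer_days(500, link):5.1f} days at 70% utilization")
```

Estimates like this feed directly into the migration timeline and contingency planning, and help determine whether a network transfer is realistic or whether an offline bulk-transfer option should be budgeted for instead.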


Conclusion

At the outset, the hyperscalers and public clouds promised significant IT cost reductions and improved operations and performance. For many organizations, going all-in on the public cloud did reduce capital expenditure on IT software and infrastructure, as data centers were consolidated and expensive legacy infrastructure was divested. However, recent surveys of CSPs and data center colocation providers found that cloud repatriation is happening on a global scale, albeit to varying degrees depending on the type of organization and its landscape. Several recent global IT surveys note that around 70-80% of organizations are repatriating at least some data from hyperscalers and the public cloud due to the concerns discussed above.

Organizations must now seriously consider whether cloud repatriation is a viable strategy, or whether there are other ways of achieving their objectives beyond relying solely on hyperscaler/public clouds. Notably, many are adopting a moderated “hybrid cloud” approach that keeps certain workloads in the public cloud alongside on-premises infrastructure. Others are pursuing a private cloud, which offers some of the benefits of virtualized infrastructure in a single-tenant environment, either on-premises or in a colocated data center. The pace and extent of cloud repatriation will therefore vary widely depending on the specific needs and circumstances of each organization.



About the Author

Nicholas has an extensive career spanning over 20 years, during which he has played a major role in the ideation, formulation, and co-creation of new cloud solutions for both the enterprise and government sectors. His expertise spans multiple domains, including Cloud, Data Centre, and Network & Connectivity, which he blends to support organizational Digital Transformation & Modernization and Industry 4.0, including 5G. His proficiency is backed by various sales and pre-sales certifications from key global technology firms such as Sangfor, AWS, Microsoft Azure, Dell EMC, Google, IBM, NetApp, Oracle, Red Hat, and VMware, enabling him to keep pace with the latest global technology trends, especially in the cloud domain.
