Why is Data Resilience Important?
Published 10/18/2022
Originally published by ShardSecure.
Written by Marc Blackmer, VP of Marketing, ShardSecure.
What is data resilience?
Data resilience can mean different things to different organizations. As a Carnegie Mellon University literature review notes, the concept of resilience is often used informally, and sometimes inconsistently. For some companies, resilience may refer simply to maintaining data availability; for others, it might mean avoiding or neutralizing unexpected disruptions to data infrastructure.
In its overview of data risk management, ISACA offers a helpful definition: “[A] resilient data system can continue to operate when faced with adversity that could otherwise compromise its availability, capacity, interoperability, performance, reliability, robustness, safety, security, and usability.” Data resilience, in other words, is what allows organizations to maintain their critical operations during major disruptions and quickly return to normal afterwards.
Although no system can be 100% resilient to all threats, systems can be designed to rapidly and effectively protect critical business operations from unexpected events.
Below, we’ll dive deeper into what data resilience entails, what the costs and consequences of weak data resilience are, and how microsharding can help strengthen data resilience in multi-cloud and hybrid-cloud environments.
Data backups vs. data resilience
Without on-demand access to data — and without assurance of that data’s integrity — business comes to a crashing halt. But are backups enough to guarantee data availability and integrity?
In short, no. Maintaining and protecting backups is one important facet of data resilience, but it’s far from the only one. Other aspects of data resilience include cluster storage, data redundancy, regular testing, disaster recovery, and more.
Having data backups means that you can, with time, rebuild your systems and files after a disruption. It may take a while, but you will eventually recover.
Having data resilience, on the other hand, means that your systems and files can continue functioning immediately after a disruption. It’s the difference between walking to a service station to get a new tire and having a spare tire in your trunk.
Data resilience: not just data availability
We mentioned above that data resilience can mean different things to different organizations. Some companies may even conflate data resilience with data availability, as on-demand access to data is a key part of many business operations.
However, another important component to data resilience is data integrity. Data integrity means that a given file is byte-for-byte identical to the file that was written — with no corruption, tampering, or modification by unauthorized users. In other words, data integrity guarantees the accuracy, completeness, consistency, and validity of an organization’s data.
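Byte-for-byte integrity is commonly verified with cryptographic checksums: a digest is recorded when a file is written and compared on every read. A minimal Python sketch (the file contents here are purely illustrative):

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Return the SHA-256 digest of a byte string as hex."""
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, expected_digest: str) -> bool:
    """True only if the data is byte-for-byte identical to what was written."""
    return sha256_of(data) == expected_digest

# Record a checksum at write time...
original = b"quarterly-report-contents"
recorded_digest = sha256_of(original)

# ...and verify on read. Any corruption or tampering changes the digest.
assert verify(original, recorded_digest)               # intact
assert not verify(b"tampered-contents", recorded_digest)  # integrity violated
```

Even a single flipped bit produces a completely different digest, which is what makes this check effective against silent corruption as well as deliberate tampering.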
Encryption is often used to protect data confidentiality, and authenticated encryption schemes can also detect unauthorized modification of data. However, encryption does not protect availability during outages or attacks. To uphold the full CIA triad of data confidentiality, integrity, and availability, other solutions are necessary.
Real-world data resilience costs
The top consequences of weak data resilience are interruptions to business operations and the associated losses of revenue. These losses are often significant; according to Statista, the average cost of a critical server outage in 2019 was between $301,000 and $400,000 per hour.
These losses are also rising: A 2016 study by the Ponemon Institute revealed that the average cost of a data center outage rose from $505,502 in 2010 to $740,357 in 2015, a 38% increase. Similarly, ransomware attacks are on the rise, both in the number of attacks and the cost of ransoms. According to a report by Cybersecurity Ventures, ransomware cost the world $20 billion in 2021, up from $11.5 billion in 2019.
Meanwhile, the Uptime Institute Global Survey of IT and Data Center Managers offers some startling figures about the cost of data outages in the last three years:
- Over 60% of failures in 2022 have resulted in at least $100,000 in total losses, up substantially from 39% in 2019.
- The share of outages that cost upwards of $1 million has increased from 11% to 15% from 2019 to 2022.
- Given the 27 serious or severe outages publicly reported in 2021, outage trends suggest there will be at least 20 serious, high-profile IT outages worldwide this year.
Ultimately, the cost of a disruption depends on many factors: the industry, the size of the business affected, the type of recovery solutions the business has in place, and the specific details of the outage, attack, or data breach. But it can’t be denied that the cost of weak data resilience is high — and only expected to rise.
Other data resilience consequences
Although avoiding financial losses is by far the biggest incentive for companies to pursue strong data resilience, there are several other consequences. Some of the most significant repercussions of major disruptions and weak data resilience can include:
- Loss, damage, or destruction of mission-critical data
- Loss of organizational productivity
- Legal and regulatory impacts
- Reputational damages
- Recovery costs
- Customer dissatisfaction
- And more
From a legal standpoint, some companies’ service level agreements guarantee a certain amount of uptime or availability. Outages and attacks can interfere with delivering that level of uptime and lead to financial penalties.
Weak data resilience and downtime can also lead to reputational damage and lost business opportunities. After high-profile outages and cyberattacks, companies can experience a significant loss of confidence and trust from stakeholders. Even with smaller disruptions, some customer churn may be unavoidable.
How can companies strengthen their data resilience?
To achieve stronger data resilience, organizations should implement controls that can quickly detect, respond to, and recover from adverse events — including cloud provider outages, ransomware attacks, data compromise, and more. These controls will typically involve not only effective technology solutions but also orderly disaster recovery procedures, fast timeframes, and clearly defined ownership of processes.
High availability — the ability of a system to operate continuously without a single point of failure — is also crucial for data resilience. Many organizations strive for 99.999% availability — a target that is difficult to achieve but important to aim for.
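Each additional “nine” translates directly into a shrinking downtime budget: at 99.999% availability, a system may be down for only about 5.26 minutes per year. The arithmetic is simple:

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_budget_minutes(availability: float) -> float:
    """Minutes of allowed downtime per year at a given availability level."""
    return (1 - availability) * MINUTES_PER_YEAR

for nines in ("99.9", "99.99", "99.999"):
    avail = float(nines) / 100
    print(f"{nines}% availability -> {downtime_budget_minutes(avail):.2f} min/year of downtime")
```

Three nines allow roughly 8.8 hours of downtime a year; five nines allow barely five minutes, which is why that last target typically requires redundancy rather than a single hardened system.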
Multi-cloud and hybrid-cloud functionality is an increasingly important part of data resilience. As the Harvard Business Review notes, placing highly important processes with a low risk tolerance (e.g., airline reservation systems) in the cloud requires redundancy across multiple clouds. An ideal solution will work in multi-cloud configurations and allow flexibility in the event of disruptions.
Improve your data resilience with microsharding
A microsharding solution can offer an innovative approach to data resilience.
Microsharding works by shredding data into tiny fragments, or microshards, that are too small to contain so much as a complete birthdate or other piece of sensitive data. The process then removes file metadata and distributes the microshards across multiple logical containers of the user’s choice. The result is that unauthorized users can only access an unintelligible fraction of a complete data set, ensuring data confidentiality.
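To illustrate the idea, the following toy sketch (not ShardSecure’s actual implementation; shard size, container count, and placement logic are all assumptions for illustration) splits a byte string into four-byte shards and scatters them across several “containers.” Reassembly requires the shard map; any single container holds only unordered fragments:

```python
import random

SHARD_SIZE = 4  # bytes per shard; real microshards are similarly tiny

def microshard(data: bytes, num_containers: int, seed: int = 0):
    """Split data into tiny shards and scatter them across containers.

    Returns the containers plus a 'shard map'. Without the map, any one
    container exposes only a meaningless fraction of the original data.
    """
    shards = [data[i:i + SHARD_SIZE] for i in range(0, len(data), SHARD_SIZE)]
    rng = random.Random(seed)
    containers = [[] for _ in range(num_containers)]
    shard_map = []  # (container index, position within container), in order
    for shard in shards:
        c = rng.randrange(num_containers)
        containers[c].append(shard)
        shard_map.append((c, len(containers[c]) - 1))
    return containers, shard_map

def reassemble(containers, shard_map) -> bytes:
    """Rebuild the original data from the containers and the shard map."""
    return b"".join(containers[c][pos] for c, pos in shard_map)

data = b"DOB: 1984-07-12, SSN: 000-00-0000"  # illustrative placeholder values
containers, shard_map = microshard(data, num_containers=3)
assert reassemble(containers, shard_map) == data
```

Note how no shard spans more than four bytes, so even a shard that lands inside the date field cannot contain the complete birthdate, let alone the full record.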
Microsharding also promotes data integrity and availability in multi- and hybrid-cloud environments, leading to stronger resilience. Some applications of microsharding technology offer multiple data integrity checks to detect unauthorized modifications and reconstruct affected data whenever it is tampered with, deleted, or lost in an outage. When the microsharding technology itself runs as a cluster of virtual instances, it can also provide high availability.
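One way such detect-and-reconstruct checks can work (again an illustrative sketch under assumed design choices, not any vendor’s implementation) is to store a checksum and a redundant copy for each fragment, so a tampered or lost fragment is caught on read and restored automatically:

```python
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Each fragment lives in a primary and a mirror location, alongside the
# checksum recorded at write time.
fragment = b"fragment-0017-bytes"
record = {
    "checksum": digest(fragment),
    "primary": fragment,
    "mirror": fragment,  # redundant copy in a second container
}

def read_with_repair(record) -> bytes:
    """Return the fragment, repairing the primary copy if it fails its check."""
    if digest(record["primary"]) == record["checksum"]:
        return record["primary"]
    # Primary is corrupt or missing: verify the mirror, then repair from it.
    assert digest(record["mirror"]) == record["checksum"], "both copies lost"
    record["primary"] = record["mirror"]
    return record["primary"]

record["primary"] = b"tampered!"             # simulate unauthorized modification
assert read_with_repair(record) == fragment  # detected and restored from mirror
```

Because the repair happens inline during the read, the caller never sees the corruption, which is what allows operations to continue through an incident rather than waiting on a restore.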
As a result, microsharding technology can help organizations restore their compromised data, avoid downtime, and maintain business continuity.
Sources
System Resilience: What Exactly Is It? | Carnegie Mellon University SEI Blog
Data Resilience Is Data Risk Management | ISACA
What Is Data Integrity? Types, Risks and How to Ensure | Fortinet
2015 Cost of Data Center Outages | Vertiv
How Much Does Internet Downtime Cost a Business? | ICTSD.org
Global Ransomware Damage Costs Predicted To Exceed $265 Billion By 2031 | Cybercrime Magazine