How to Minimize Downtime from IT Outages

IT outages happen so often these days that it can be hard to keep track. Most recently, a typo at Amazon “didn’t quite break the internet,” but it caused headaches for hundreds of thousands of websites across the US. Another well-reported blunder occurred in late January, when an outage forced Delta Air Lines to cancel roughly 280 flights. That followed an August outage that cost Delta $150 million and a July outage at Southwest Airlines that cost that airline $177 million.

Data center downtime is always expensive. Last year, a study by the Ponemon Institute found that unplanned data center outages cost $8,851 per minute; at that rate, a four-hour outage runs well over $2 million. Unfortunately, current backup methods for minimizing downtime have shortcomings across the board. Backup hardware can be prohibitively expensive to purchase and run, and cloud backups are often so slow to restore that, in the event of data loss, the cost of downtime is crippling.

Locally pooled storage has emerged as the best alternative for getting your data back when you need it.

More Hardware vs. the Cloud

One logical safeguard to minimize your risk is simply to double up on your existing hardware. That way, if one of the systems goes down, there’s still a backup. This makes intuitive sense and is the same reason we all have more than one house key.

The issue is that computer hardware is a lot pricier and more complex than a key. In addition to the upfront capital cost of purchase and setup, there are requirements for additional physical space, ongoing maintenance, and hardware updates every three to five years. Figuring out capacity can also be a challenge. Buy too little and you impact your company’s ability to function. Buy too much and you’re wasting money.

Being on-premise also exposes the hardware to all kinds of vulnerabilities. Employee accidents pose a constant hazard: it has been speculated, for instance, that a lone worker’s spilled soda took down Bloomberg’s terminals across the country for two hours in 2015.

The common alternative to on-premise hardware is the cloud. While the cloud spares you those hardware-related issues, you will have a tough time getting your data back in a timely manner. Cloud providers reassure users that the cloud is now an option for tier 1 backup, but relying on third-party telecoms to transmit high volumes of data on demand is a high-risk strategy. Even in urban areas with high-speed connections, recovery times for cloud-based systems are often so long that it can be quicker to physically mail hard drives than to restore data over the network. If the cloud is unreliable in the best-connected environments, it is a non-starter as a tier 1 recovery method in many rural areas of the US and in most of the rest of the world.
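The back-of-envelope math behind that claim is easy to sketch. The figures below are illustrative assumptions, not measurements: a 10 TB backup, a 100 Mbps business line, and a 70% effective-throughput factor for protocol overhead and contention.

```python
def restore_time_days(data_tb: float, link_mbps: float, efficiency: float = 0.7) -> float:
    """Rough time to pull a backup over a WAN link.

    `efficiency` discounts protocol overhead and link contention
    (0.7 is an assumed figure for illustration).
    """
    bits = data_tb * 1e12 * 8                     # decimal TB -> bits
    seconds = bits / (link_mbps * 1e6 * efficiency)
    return seconds / 86_400                       # seconds -> days

# Restoring 10 TB over a 100 Mbps line: roughly two weeks.
print(f"{restore_time_days(10, 100):.1f} days")
```

At about 13 days for a full restore under these assumptions, overnight-shipping a drive really is faster.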

The hidden charges of the cloud are also easy to forget. Basic cloud storage might look cheap, but be aware of your access patterns and the recoverability you need: reads, edits, deletes, and geographic redundancy all come at additional cost.
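A minimal cost sketch makes the point. Every rate below is a hypothetical placeholder, not any provider’s actual price list; the structure — storage plus egress plus per-request fees — is what matters.

```python
# Hypothetical rates for illustration only -- check your provider's rate card.
STORAGE_PER_GB_MONTH = 0.023   # headline "cheap" storage price (assumed)
EGRESS_PER_GB = 0.09           # retrieval/egress charge (assumed)
REQUEST_PER_10K = 0.004        # per-request charge (assumed)

def monthly_cost(stored_gb: float, restored_gb: float, requests: int) -> float:
    """Estimate a month's bill, including the often-forgotten access fees."""
    return (stored_gb * STORAGE_PER_GB_MONTH
            + restored_gb * EGRESS_PER_GB
            + requests / 10_000 * REQUEST_PER_10K)

# Storing 5 TB looks cheap -- until you restore it once after an outage.
print(monthly_cost(5_000, 0, 100_000))       # storage only: ~$115
print(monthly_cost(5_000, 5_000, 100_000))   # one full restore adds ~$450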

Given these two options, the leading strategy is a hybrid approach that aims to reduce the risk of failure posed by the weaknesses and pitfalls each presents individually.

However, piecing two problematic solutions together to combat each other’s risks is a suboptimal solution at best.

So what is optimal?

Locally Pooled Storage in Hybrid

A smarter alternative to on-premise hardware, cloud backup, or a hybrid of the two is a solution that combines the best of each while eliminating the disadvantages: locally pooled storage with a cloud tier.

Locally pooled storage takes advantage of spare hard drive space in an office environment, combining it to create secure drives that are distributed across many existing workstations and servers. Thus, your available storage grows with your primary infrastructure. It enables users to create a “private cloud” for onsite backup, without the costs of purchasing and maintaining additional hardware. Many businesses don’t realize that they have terabytes of space already paid for and never used.
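The “terabytes already paid for” claim is simple to check for your own fleet. A minimal sketch, assuming each host reports its free space and a made-up 20% headroom policy so no machine is drained dry (both the host names and the reserve figure are hypothetical):

```python
# Hypothetical free-space figures (in GB) reported by each machine.
workstations = {"ws-01": 420, "ws-02": 130, "ws-03": 610, "file-srv": 950}

RESERVE = 0.20  # leave 20% headroom on every machine (assumed policy)

def poolable_capacity(free_gb_by_host: dict, reserve: float = RESERVE) -> float:
    """Sum the spare space each host can safely contribute to the pool."""
    return sum(free * (1 - reserve) for free in free_gb_by_host.values())

print(f"{poolable_capacity(workstations):.0f} GB available")  # 1688 GB
```

Even this tiny four-machine office has well over a terabyte of backup capacity sitting idle.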

Locally pooled storage is replicated across many machines, so there’s no single point of failure. The data is encrypted and chunked before being distributed across the network, so if a computer in the system is lost or stolen, there’s no discoverable data on any one machine, and the backup automatically recovers to your desired replication level. Similarly, if a chunk is corrupted for any reason, the system identifies it and re-replicates a non-corrupt copy.
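The chunk-and-heal idea can be sketched in a few lines. This is a toy illustration of the general technique — content-addressed chunks verified by hash — not any particular product’s implementation, and it omits the encryption and network layers entirely (the chunk size and replication level are assumed values):

```python
import hashlib

REPLICAS = 3  # desired replication level (assumed)

def split_into_chunks(data: bytes, size: int = 8) -> dict:
    """Split data into fixed-size chunks, keyed by SHA-256 digest so
    corruption can later be detected by re-hashing."""
    return {hashlib.sha256(data[i:i + size]).hexdigest(): data[i:i + size]
            for i in range(0, len(data), size)}

def verify_and_heal(chunk_store: dict, replicas_needed: int = REPLICAS) -> dict:
    """Given digest -> list of stored copies, drop copies whose hash no
    longer matches (mutating chunk_store) and report how many fresh
    copies each chunk still needs."""
    repairs = {}
    for digest, copies in chunk_store.items():
        good = [c for c in copies
                if hashlib.sha256(c).hexdigest() == digest]
        chunk_store[digest] = good
        if len(good) < replicas_needed:
            repairs[digest] = replicas_needed - len(good)
    return repairs
```

Because a chunk’s name is its hash, any machine can verify its own copies independently, and the pool only has to re-replicate the few chunks that fail the check.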

Best of all, installation is simple: It only takes a few minutes to get a locally pooled storage system up and running.

Tiered to a cloud layer for offsite backup, this model makes for a smarter core infrastructure than either approach alone. The local storage pool offers high availability, fast local recovery times, and intelligent self-healing, while reserving the cloud for rare disaster-recovery events.

Dodging a Bullet

We are in a dynamic environment in which game-changing tech solutions come around quickly. As downtime continues to threaten vibrant businesses and onsite hardware continues to require too much TLC, it’s time to consider an alternative to the status quo. A solid backup infrastructure and an executable, tested plan are a form of insurance that can save companies millions of dollars. But make sure your system actually works. Will yours be ready when you need it?
