Elasticity for the Enterprise -- Ensuring Continuous High Availability in a Disaster Failure Scenario

Many enterprises' high-availability architecture is based on the assumption that you can prevent failure by putting all your critical data in a centralized database, backing it up with expensive storage, and somehow replicating it between sites. As I argued in one of my previous posts (Why Existing Databases (RAC) are So Breakable!), many of those assumptions are broken at their core: storage is doomed to fail just like any other device, expensive hardware doesn't make things any better, and database replication is often not enough.

One of the main lessons we can take from the likes of Amazon and Google is that the right way to ensure continuous high availability is to design our system to cope with failure. We need to assume that what we tend to think of as unthinkable will probably happen; that is the nature of failure. So rather than trying to prevent failures, we need to build a system that will tolerate them.

As we can learn from a recent outage in one of Amazon's cloud data centers, we can't rely on the data center alone to solve this type of failure. The knowledge of how to manage failure must be built into our application:

"By launching instances in separate Availability Zones, you can protect your applications from failure of a single location," Amazon notes in a FAQ on its Elastic Compute Cloud service.
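The idea behind that advice can be sketched in a few lines. The snippet below is a minimal illustration, not Amazon's API: the zone names and the `place_instances` and `survivors` helpers are my own assumptions. It spreads instance replicas round-robin across zones, so losing any single zone still leaves the application with capacity elsewhere.

```python
from collections import defaultdict

def place_instances(num_instances, zones):
    """Distribute instances round-robin across Availability Zones
    so that no single zone holds all of the capacity."""
    placement = defaultdict(list)
    for i in range(num_instances):
        zone = zones[i % len(zones)]
        placement[zone].append(f"instance-{i}")
    return dict(placement)

def survivors(placement, failed_zone):
    """Return the instances still running after an entire zone fails."""
    return [inst for zone, insts in placement.items()
            if zone != failed_zone for inst in insts]

# Hypothetical zone names, modeled loosely on EC2's naming scheme.
zones = ["us-east-1a", "us-east-1b", "us-east-1c"]
placement = place_instances(6, zones)

# Losing one whole zone still leaves two thirds of the instances running.
remaining = survivors(placement, "us-east-1a")
```

The same placement logic applies whether the "instances" are virtual machines, application partitions, or data replicas; the point is that the application, not the data center, decides how capacity is spread.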

In this post, I discuss how a leading Wall Street firm managed to apply those same principles to handle such a disaster failure scenario while ensuring continuous availability of its real-time web application.

You can read the full details here.