
Building a remote database availability site

The AWS East Region outage showed all of us the importance of running our apps and databases across multiple Amazon regions (or multiple cloud providers). In this post, I'll explain how to build a redundant site for MySQL (or Amazon RDS).

For simplicity, we'll build a passive redundant site: one that is not used during normal operation and comes into action only when the primary site crashes. There are many reasons for choosing such an architecture – it's easy to configure, simple to understand, and minimizes the risk of data collisions. The downside is that you have hardware sitting around doing nothing.

Still, it’s a common enough scenario. So what do we need to do to make it work?


We need to synchronize the database, which is done by means of database replication. There are two options for database replication: synchronous and asynchronous. Synchronous replication is great; it ensures that the backup database is identical to the primary database. However, it also means that each DML operation on the primary database must be executed on the backup, and the result is returned to the client only after the backup database has completed the operation (at least at commit time). This is very slow when the backup database is located in a remote geographical location (as a good friend of mine once said – sometimes the speed of light is just too slow).

Asynchronous replication is better for remote sites, but can result in data loss, depending on the replication lag between the primary database and the backup database.

We MySQL customers (RDS customers included) have only asynchronous replication (and, starting with 5.5, semi-synchronous replication, which in most deployments behaves much like asynchronous replication).
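For reference, a standard MySQL asynchronous replication setup looks roughly like this on a plain MySQL replica in the backup region (host name, credentials, and binlog coordinates are placeholders; on RDS you would create a read replica through the AWS console or API instead of running these statements yourself):

```sql
-- On the replica in the backup region: point it at the primary.
CHANGE MASTER TO
  MASTER_HOST     = 'primary.example.com',
  MASTER_USER     = 'repl',
  MASTER_PASSWORD = 'repl_password',
  MASTER_LOG_FILE = 'mysql-bin.000001',
  MASTER_LOG_POS  = 4;
START SLAVE;

-- Check replication health; Seconds_Behind_Master is a rough
-- measure of the lag (and therefore of potential data loss).
SHOW SLAVE STATUS\G
```

The `Seconds_Behind_Master` value is what the "lag" above refers to: any transactions committed on the primary within that window may be lost if you fail over.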

After the replication is configured, the application needs to be updated to make sure that it recognizes database crashes and fails over to the backup database.
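A minimal sketch of that application-side logic might look like the following (the function names are illustrative, not from the post; the connection factories stand in for real driver calls such as `MySQLdb.connect(host=...)`):

```python
def connect_with_failover(connect_primary, connect_backup):
    """Return a connection from the primary site, falling back to the
    backup site if the primary raises a connection error.

    connect_primary / connect_backup are zero-argument callables that
    open a connection to the respective database, e.g.
    lambda: MySQLdb.connect(host="primary.example.com", ...).
    """
    try:
        return connect_primary()
    except Exception:
        # Primary is down or unreachable -- fail over to the backup.
        return connect_backup()
```

In practice you would catch the driver's specific connection-error class rather than bare `Exception`, and you would want some hysteresis (retry counts, timeouts) so a transient blip doesn't trigger a full failover.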


There are several possibilities for crashes and their detection.

The first, and simplest, is if the entire region is down, which crashes both the application servers and the databases. In this case, the CDN, or a DNS load-balancing technique, fails over to the backup region. This requires sys-ops work but is transparent to the application.

To read more, check out our original post here.

If this interests you – just register for our beta here.

Reader Comments (6)

Is it a joke?

May 15, 2011 | Unregistered Commenterzerkms

This is little more than an advertisement for the ScaleBase product. Please label it as such. One is tricked into thinking this is an article providing guidance and inspiration for how one might implement such a system, but the link just leads to a beta registration link.

May 15, 2011 | Unregistered CommenterHans

Hans and zerkms, that's on the linked-to post, not the post on this site, so I let it slide. There is some content there, but obviously a close call.

May 15, 2011 | Registered CommenterTodd Hoff

Todd: Thank you for taking the time to reply. I would suggest that you prefix the title with "ScaleBase: ", thereby clearly labeling this blog post as a product related article. I rather like the approach to advertisement this blog takes in general, especially with sponsored posts.

Thank you for a great blog.

May 15, 2011 | Unregistered CommenterHans

Hi guys, I'm sorry if you feel this way. This was definitely not my intention. I think that besides the last paragraph there is nothing here that describes ScaleBase...

May 15, 2011 | Unregistered CommenterLiran Zelkha

Database failover like this is one of those painful decisions. If you're like most "normal" scaling customers, or at least like us, you've got some significant redundancy within your primary datacenter. For MySQL that at least means two database servers running in master-master (possibly with MMM, whatever). Failing completely over to your backup datacenter DB means that pulling everything back "locally" to most of your production appservers is just going to suck. Not an unmanageable amount of suck, but suck nonetheless.

We look at disaster recovery as being "okay" if it needs manual intervention for just this reason. Losing a machine, or a few machines, is not a disaster and production should continue unabated. Losing a primary datacenter completely may happen for many reasons. Detection of such a failure may itself flake out more often than the failure actually happens. Sometimes having to have someone physically push a button, or a couple of buttons, before a one-way process is initiated, is okay.

NOTE: That assumes that even "a little" data loss is less acceptable than a short downtime. For many shops this isn't true. YMMV.

May 16, 2011 | Unregistered CommenterRichard
