Cassandra Migration to EC2

This is a guest post by Tommaso Barbugli, the CTO of getstream.io, a web service for building scalable newsfeeds and activity streams.

In January we migrated our entire infrastructure from dedicated servers in Germany to EC2 in the US. The migration included a wide variety of components: web workers, background task workers, RabbitMQ, PostgreSQL, Redis, Memcached, and our Cassandra cluster. Our main requirement was to execute this migration without downtime.

This article covers the migration of our Cassandra cluster. If you’ve never run a Cassandra migration before, you’ll be surprised to see how easy this is. We were able to migrate Cassandra with zero downtime using its awesome multi-data center support. Cassandra allows you to distribute your data in such a way that a complete set of data is guaranteed to be placed on every logical group of nodes (e.g. nodes in the same data center, rack, or EC2 region). This feature is a perfect fit for migrating data from one data center to another. Let’s start by introducing the basics of a Cassandra multi-datacenter deployment.

Cassandra, Snitches and Replication strategies

Going multi datacenter

In most scenarios, Cassandra comes configured with data center awareness turned off. Cassandra multi-datacenter deployments require an appropriate configuration of the data replication strategy (per keyspace) and the snitch. By default, Cassandra uses the SimpleSnitch and the SimpleStrategy replication strategy. The following sections explain what these terms actually mean.

The Snitch

Snitches are used by Cassandra to determine the topology of the network; the snitch makes Cassandra aware of which data center and rack each node is in. The default snitch (SimpleSnitch) gives no information about the data center and rack a node resides in, so it only works for single data center deployments. The Ec2MultiRegionSnitch is required for deployments on EC2 that span multiple regions. This snitch maps the EC2 region name to the data center name and the availability zone to the rack name. On top of this, it also makes sure that nodes use their private and public IPs correctly.

Replication strategies

Replication strategies determine how data is replicated across the nodes. The default replication strategy is the SimpleStrategy. With the SimpleStrategy, replicas are placed on the next nodes along the ring without taking the network topology into account, so it gives you no control over which data center replicas end up in. For multi data center deployments, the right strategy to use is the NetworkTopologyStrategy. This strategy allows you to specify how many replicas you want in each data center. For example:

{ 'class' : 'NetworkTopologyStrategy', 'dc1' : 3, 'dc2' : 2 };

defines a replication policy where data is replicated to 3 nodes in data center ‘dc1’ and to 2 nodes in data center ‘dc2’.
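As a complete statement (a CQL sketch, borrowing the keyspace name used later in this article), this would look like:

CREATE KEYSPACE stream_data
  WITH REPLICATION = { 'class' : 'NetworkTopologyStrategy', 'dc1' : 3, 'dc2' : 2 };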

Changing snitch and replication strategy

As you can see, the snitch and replication strategy work closely together. The snitch is used to group nodes per datacenter and rack. The replication strategy defines how data should be replicated amongst these groups. If you run with the default snitch and strategy, you will need to change these two settings in order to get to a functional multi datacenter deployment.

Changing the snitch alters the topology of your Cassandra network. After making this change you need to run a full repair. If you do not fully understand the consequences of changing these two settings, you should really take some time to read more about it. Making the wrong changes can lead to serious issues. In our case, we started a clone of our production Cassandra cluster from snapshots and tested every single step to make sure we got the whole procedure right.

The 10 migration steps

The following list describes our migration in detail. You should make sure you understand these steps before trying to run your own migration. The steps below assume that you’re moving from a non-EC2 data center to EC2; if you understand them, you can use a similar approach for other scenarios.

Phase 1 - Cassandra Multi DC support

Step 1 - Configure the PropertyFileSnitch

The first step is to change from the SimpleSnitch to the PropertyFileSnitch. The PropertyFileSnitch reads a property file to determine which data center and rack each node is in.
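The snitch is configured in cassandra.yaml; with the PropertyFileSnitch, the topology itself lives in cassandra-topology.properties in the same configuration directory. A minimal sketch of the cassandra.yaml change:

# cassandra.yaml (on every node of the existing cluster)
endpoint_snitch: PropertyFileSnitch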

After changing this setting, you should run a rolling restart of the cluster. Make sure that you have the same property file on every node. Here is an example of what a property file for two data centers looks like:

# Data Center One
175.56.12.105=DC1:RAC1
175.50.13.200=DC1:RAC1
175.54.35.197=DC1:RAC1
# Data Center Two
45.56.12.105=DC2:RAC1
45.50.13.200=DC2:RAC1
45.54.35.197=DC2:RAC1

Step 2 - Update the replication strategy

Next, update your keyspaces to use the NetworkTopologyStrategy:

ALTER KEYSPACE "stream_data" WITH REPLICATION = { 'class' : 'NetworkTopologyStrategy', 'DC1' : 3 };

Step 3 - Client connection setting

Update your clients’ connection policy to DCAwareRoundRobinPolicy and set the local data center to ‘DC1’. This ensures your client will only read/write from the local data center and not from the EC2 cluster we’re going to create in the next step.
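The exact change depends on the driver you use. As an illustration, here is a minimal sketch with the DataStax Python driver (the contact point IP and keyspace name are placeholders taken from the examples in this article):

from cassandra.cluster import Cluster
from cassandra.policies import DCAwareRoundRobinPolicy

# Pin reads/writes to the original data center (DC1)
cluster = Cluster(
    contact_points=['175.56.12.105'],  # a node in DC1
    load_balancing_policy=DCAwareRoundRobinPolicy(local_dc='DC1'),
)
session = cluster.connect('stream_data')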

These three steps don’t have an impact on how replicas are placed in DC1 or how your clients connect to your cluster. The purpose of these three steps is to make sure we can add the second Cassandra data center safely.

Phase 2 - Setup Cassandra on EC2

Step 4 - Start the nodes

The next step is to start your cluster on EC2. DataStax provides a great AMI with easy instructions to get up and running quickly.

Step 5 - Stop the EC2 nodes and cleanup

By default your Cassandra instances on EC2 will be configured as a new cluster. Since we want them to join the existing cluster instead, we have to stop Cassandra and remove the system data directory and commit log on the new EC2 nodes.

$ sudo /etc/init.d/cassandra stop
# only if you have opscenter running
$ sudo /etc/init.d/opscenterd stop
$ sudo rm -rf /var/lib/cassandra/data/system/*
$ sudo rm -rf /var/lib/cassandra/commitlog/*

After that, adjust the following four settings in cassandra.yaml on each EC2 node:

  1. broadcast_address: <public_ip>
  2. listen_address: <private_ip>
  3. endpoint_snitch: Ec2MultiRegionSnitch
  4. auto_bootstrap: false

As you can see, we are using a different snitch on the EC2 cluster than on the previous cluster. Ec2MultiRegionSnitch is a special snitch that infers the data center from the EC2 region and the rack name from the availability zone (e.g. a node running in us-east-1c would have data center: us-east and rack: 1c).
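Put together, the cassandra.yaml on an EC2 node would contain something like the following sketch (the addresses are placeholders; also make sure cluster_name matches the existing cluster):

# cassandra.yaml on an EC2 node
broadcast_address: 54.85.12.10      # this node's public IP
listen_address: 10.0.1.12           # this node's private IP
endpoint_snitch: Ec2MultiRegionSnitch
auto_bootstrap: false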

Step 6 - Start the nodes

Start your Cassandra nodes on EC2 and wait until all nodes show up in nodetool status.

$ sudo /etc/init.d/cassandra start
$ nodetool status

Step 7 - Place data replicas in the cluster on EC2

Update your keyspace to replicate to the new data center:

ALTER KEYSPACE "stream_data" WITH REPLICATION = { 'class' : 'NetworkTopologyStrategy', 'DC1' : 3, 'us-east' : 3 };

Then run nodetool rebuild on every EC2 node, passing the name of the original data center as a parameter:

$ nodetool rebuild DC1

Depending on the amount of data in your keyspace, this will take minutes or hours. Once this is done, you should promote at least one node in the EC2 cluster to a seed node.
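Promoting a node to seed is, again, a cassandra.yaml change: add that node's address to the seed list on every node and do a rolling restart. A sketch with placeholder addresses (in the next step the old DC1 seeds get dropped from this list):

# cassandra.yaml: seed list now including an EC2 node's public IP
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "175.56.12.105,54.85.12.10"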

Phase 3 - Decommission the old DC and cleanup

Step 8 - Remove the old seed node(s) from the seed list

Remove the IPs of the DC1 nodes from the seed list and run a rolling restart.

Step 9 - Update your client settings

Update your clients on EC2 to connect to the new data center. Update your clients’ connection policy to DCAwareRoundRobinPolicy and set the local data center to us-east. This makes sure your client will only read/write from the EC2 data center.
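With the hypothetical Python driver setup sketched in step 3, only the contact points and the local data center change:

from cassandra.cluster import Cluster
from cassandra.policies import DCAwareRoundRobinPolicy

# Pin reads/writes to the EC2 data center (us-east)
cluster = Cluster(
    contact_points=['54.85.12.10'],  # an EC2 node, placeholder address
    load_balancing_policy=DCAwareRoundRobinPolicy(local_dc='us-east'),
)
session = cluster.connect('stream_data')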

Step 10 - Decommission the old data center

Now you can decommission the DC1 data center following the steps in the DataStax documentation.

Make sure that you update your keyspace to stop replicating to the old datacenter after you run the full repair.

ALTER KEYSPACE "stream_data" WITH REPLICATION = { 'class' : 'NetworkTopologyStrategy', 'us-east' : 3 };
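For reference, the DataStax procedure boils down to roughly this sequence (a sketch; run the full repair first, then the ALTER KEYSPACE above, then decommission each old node):

# On every node, before altering the keyspace (this can take a while)
$ nodetool repair

# After the keyspace no longer replicates to DC1, on every DC1 node
$ nodetool decommission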

This 10-step procedure seems long, but if you understand the basic concepts, it’s pretty straightforward. The best part is that your Cassandra cluster stays up and fast during the entire procedure.

Alternative migration paths

1.) Dump and load the data

One alternative is to simply dump the data and load it up in the new Cassandra cluster. Unfortunately restoring node snapshots of a distributed database is not as simple as it would be with a traditional database. Another downside of this approach is that the cluster becomes temporarily unavailable. Stream already had production users, so we could not afford to take down the app.

2.) Rotate the nodes

Another approach is to simply add new nodes on EC2 and then decommission the old nodes. Adding and decommissioning nodes is a very simple operation. The problem with this option is that there often is a high latency between data centers. Running your migration by adding and removing nodes will increase the read and write latency. Depending on your requirements this may or may not be a viable alternative.

Conclusion

Migrating a Cassandra cluster across data centers takes a bit of time, but it works amazingly well. It’s possible to move from one datacenter to another without any downtime.

About Stream

Stream is a web service for building scalable newsfeeds and activity streams. Be sure to check out our tutorial; it explains our API in a few clicks.
