Paper: Don’t Settle for Eventual: Scalable Causal Consistency for Wide-Area Storage with COPS

Teams from Princeton and CMU are working together on one of the most difficult problems in distributed systems: scalable geo-distributed data stores. Major companies like Google and Facebook have been building multi-datacenter database functionality for some time, but there's still a general lack of available systems that handle complex data scenarios.

The ideas in this paper--Don’t Settle for Eventual: Scalable Causal Consistency for Wide-Area Storage with COPS--are different. It's not another eventually consistent system, or a traditional transaction-oriented system, or a replication-based system, or a system that punts on the issue. It's something new: a causally consistent system that achieves the ALPS properties. Move over CAP, NoSQL, etc., we have another acronym: ALPS - Available (operations always complete successfully), Low-latency (operations complete quickly, in single-digit milliseconds), Partition-tolerant (keeps operating under a partition), and Scalable (just add more servers to add more capacity). ALPS is the recipe for an always-on data store: operations always complete, they are always successful, and they are always fast.

ALPS sounds great, but we want more, we want consistency guarantees as well. Fast and wrong is no way to go through life. Most current systems achieve low latency by avoiding synchronous operation across the WAN, directing reads and writes to a local datacenter, and then using eventual consistency to maintain order. Causal consistency promises another way.

Intrigued? Let's learn more about causal consistency and how it might help us build bigger and better distributed systems.

In a talk on COPS, Wyatt Lloyd defines consistency as a restriction on the ordering and timing of operations. We want the strongest consistency guarantees possible because they make the programmer's life a lot easier. Strong consistency defines a total ordering on all operations, and what you write is what you read, regardless of location. This is called linearizability, and it is impossible to achieve alongside the ALPS properties. Remember your CAP. Sequential consistency still guarantees a total ordering on operations, but that ordering is not required to match real-time. Sequential consistency and low latency are also impossible to achieve together on a WAN. Eventual consistency gets you an ALPS system (Cassandra, for example), but it is a weak property that doesn't give any ordering guarantees at all.

There's a general belief that if you want an always-on scalable datastore you have to sacrifice consistency and settle for eventual consistency. There's another form of consistency, causal consistency, that sits between eventual consistency and the stronger forms. Causal consistency gives a partial order over operations, so clients see operations in an order governed by causality. Causal consistency is thus a stronger consistency guarantee that is also scalable and maintains the ALPS properties. It's a sweet spot for providing ALPS features and strongish consistency guarantees.

A key property of causal consistency to keep in mind is that it guarantees you will be working on consistent values, but it doesn't guarantee you will be working on the most recent values. That's a property of strong consistency. So under a network partition your values won't match those in other datacenters until the partition heals and the datacenters converge.

The driver for causal consistency is low latency: they want operations to always be fast. Other approaches emphasize avoiding write-write conflicts via transactions, and there latency isn't as important. With COPS you'll never do a slow 2PC across a WAN.

Here's a money quote describing causal consistency in more detail:

The central approach in COPS involves explicitly tracking and enforcing causal dependencies between updates.  For instance, if you upload a photo and add it to an album, the album update “depends on” the photo addition, and should only be applied after it.  Writes in COPS are accepted by a local datacenter that then propagates them to other, remote, datacenters.  These remote datacenters check that all dependencies are satisfied by querying other nodes in the cluster before applying writes.  This approach differs from traditional causal systems that exchange update logs between replicas.  In particular, the COPS approach avoids any single serialization point to collect, transmit, merge, or apply logs.  Avoiding single serialization points is a major factor in enabling COPS to scale to large cluster sizes.

Even though COPS provides a causal+ consistent data store, it is impossible for clients to obtain a consistent view of multiple keys by issuing single-key gets.  (This problem exists even in linearizable systems.)  In COPS-GT, we enable clients to issue get transactions that return a set of consistent values.  Our get transaction algorithm is non-blocking, lock-free, and takes at most two rounds of inter-datacenter queries.  It does, however, require COPS-GT to store and propagate more metadata than normal COPS.

Our evaluation shows that COPS completes operations in less than a millisecond, provides throughput similar to previous systems when using one server per cluster, and scales well as we increase the number of servers in each cluster. It also shows that COPS-GT provides similar latency, throughput, and scaling to COPS for common workloads.
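The photo/album flow described in the quote can be sketched with a toy client library. The class names, the `put_after` method, and the in-memory datacenter stub below are illustrative assumptions, not the actual COPS API:

```python
# Toy sketch of client-side dependency tracking: a write succeeds locally at
# once and is queued for async geo-replication along with the versions it
# causally depends on. LocalDC and put_after are illustrative, not real COPS.

class LocalDC:
    """In-memory stand-in for a local datacenter cluster."""
    def __init__(self):
        self.store = {}              # key -> (value, version)
        self.clock = 0
        self.replication_queue = []  # writes queued for async geo-replication

    def put_after(self, key, value, deps):
        # the local write succeeds immediately; deps travel with the write
        self.clock += 1
        version = (self.clock, key)
        self.store[key] = (value, version)
        self.replication_queue.append((key, value, version, deps))
        return version

    def get(self, key):
        return self.store[key]

class CopsClient:
    def __init__(self, dc):
        self.dc = dc
        self.deps = []               # versions this client's context depends on

    def put(self, key, value):
        version = self.dc.put_after(key, value, list(self.deps))
        self.deps = [version]        # the newest write subsumes earlier deps
        return version

    def get(self, key):
        value, version = self.dc.get(key)
        self.deps.append(version)    # reads also create causal dependencies
        return value

dc = LocalDC()
client = CopsClient(dc)
v_photo = client.put("photo:123", "beach.jpg")
v_album = client.put("album:vacation", ["photo:123"])

# The album write is queued carrying the photo write as a dependency, so a
# remote datacenter will only apply it after the photo has arrived.
print(dc.replication_queue[1][3])   # [(1, 'photo:123')]
```

A remote datacenter receiving the album write can then check its dependency list before exposing the value, which is exactly the verification step the quote describes.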

Michael Freedman gives an example involving three operations on a social networking site:

  1. Remove boss from friends group.
  2. Post looking for a new job.
  3. A friend reads the post.

Causality is given by the following rules:

  1. Thread-of-execution rule. Operations done by the same thread of execution are ordered by causality: the second operation happens after the first.
  2. Gets-From rule. An operation that reads a value comes after the write that produced it: the friend's read happens after the post.
  3. Transitive closure rule. Causality carries through chains of operations: removing the boss is causally before the friend's read of the post.
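The three rules above can be sketched as a tiny happens-before relation over the example's operations; the operation names and edge structure are illustrative:

```python
# A toy happens-before relation built from the three causality rules, using
# the three operations from the social-network example.
from itertools import product

# Rule 1 gives the same-thread edge (remove boss -> post job); rule 2 gives
# the gets-from edge (post job -> friend reads the post).
direct_edges = {("remove_boss", "post_job"),
                ("post_job", "read_post")}

def happens_before(edges):
    """Rule 3: take the transitive closure of the direct causal edges."""
    closure = set(edges)
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in product(list(closure), repeat=2):
            if b == c and (a, d) not in closure:
                closure.add((a, d))
                changed = True
    return closure

order = happens_before(direct_edges)
print(("remove_boss", "read_post") in order)   # True: rule 3 in action
```

Any pair not in the closure, such as two writes from unrelated clients, is causally concurrent, which is where conflict handling comes in.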

The result is that operations happen in the order you expect. The post for a new job happens after the boss is removed from the friends group. In another example, a photo upload followed by adding a reference to the photo in an album will always happen in that order, so you don't have to worry about dangling references. This makes the job of the programmer a lot easier, which is why we like transactional systems so much: the expected happens.

How does causality handle conflicting updates? Say two writes in different datacenters happen to the same key at the same time. They are unordered by causality because the operations do not occur in the same thread of execution. What we want is for all datacenters to agree on a value. By default the rule is last writer wins. You can also plug in application-specific handlers so that all datacenters converge on the same value. This sounds a lot like eventual consistency to me. They call this combination of causal consistency plus convergent conflict handling causal+ consistency.
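Last-writer-wins can be made deterministic across datacenters by comparing something like a (Lamport timestamp, datacenter id) pair on each write. The tiebreak scheme below is an illustrative assumption about one reasonable implementation, not COPS internals:

```python
# Minimal sketch of convergent conflict handling via last-writer-wins.
# The (lamport_time, dc_id) tiebreak is an illustrative choice: Python's
# tuple comparison makes it total and deterministic.

def last_writer_wins(a, b):
    """Each write is (value, (lamport_time, dc_id)); the higher pair wins."""
    return a if a[1] > b[1] else b

# Two causally concurrent writes to the same key from different datacenters:
w_east = ("smiley avatar", (17, "us-east"))
w_west = ("frowny avatar", (17, "us-west"))

# Every datacenter applies the same deterministic rule, so they all converge
# on the same value no matter which write arrived first.
print(last_writer_wins(w_east, w_west) == last_writer_wins(w_west, w_east))  # True
```

An application-specific handler would replace `last_writer_wins` with a merge function, as long as it stays commutative and associative so all datacenters still converge.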

Their innovation is to create a causal+ consistent system that is also scalable. Previous systems used log shipping, which serializes at a centralized point. Instead of logs they use dependency metadata to capture causality, replacing the single serialization point with distributed verification. They don't expose the value of a replicated put operation until they confirm all the causally previous operations have shown up in the datacenter.
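The don't-expose-until-dependencies-arrive rule can be sketched as a remote datacenter that buffers incoming replicated puts; the class and field names below are illustrative assumptions, not COPS internals:

```python
# Sketch of distributed dependency verification: a replicated put stays
# buffered until every version it depends on is already visible locally.

class RemoteDC:
    def __init__(self):
        self.visible = {}     # key -> (value, version), what local reads see
        self.applied = set()  # versions already exposed in this datacenter
        self.pending = []     # replicated puts waiting on their dependencies

    def receive_put(self, key, value, version, deps):
        self.pending.append((key, value, version, deps))
        self._apply_ready()

    def _apply_ready(self):
        # keep applying puts whose dependencies are all satisfied; applying
        # one put may unblock others, so loop until a fixed point
        progress = True
        while progress:
            progress = False
            for item in list(self.pending):
                key, value, version, deps = item
                if all(d in self.applied for d in deps):
                    self.visible[key] = (value, version)
                    self.applied.add(version)
                    self.pending.remove(item)
                    progress = True

dc = RemoteDC()
# The album update arrives before the photo write it depends on:
dc.receive_put("album", ["photo:1"], "v2", deps=["v1"])
print("album" in dc.visible)   # False: dependency v1 has not arrived yet
dc.receive_put("photo:1", "beach.jpg", "v1", deps=[])
print("album" in dc.visible)   # True: both writes now exposed, in order
```

Because each node only checks the dependencies of the writes it receives, there is no central log to serialize through, which is the scalability point the paper is making.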

COPS is their system implementing causal+ consistency:

  • Organized as a geo-replicated system with a cluster of nodes in each datacenter.
  • Each cluster stores all data.
  • Scale-out architecture with many nodes inside each cluster.
  • Consistent hashing to partition keys across nodes.
  • Assumes partitions do not occur within a datacenter so strongly consistent replication is used within a datacenter. Use chain replication, though could use Paxos.
  • Between datacenters where latency is high, data is replicated in a causal+ consistent manner.
  • They use a thick client library. It tracks causality and mediates local cluster access. 
  • Value is written immediately to the local datacenter. Immediately queued up for asynchronous replication.
  • Clients maintain dependency information, which includes a version number uniquely identifying a value. This information is inserted into a dependency list; any future operations are causally after the current operation. The system uses this information to resolve dependencies.
    • Why not just use vector clocks? Because they've targeted very large distributed systems where the vector clock state would get out of control.
  • Get transactions give a consistent view of multiple keys with low latency. They only have read transactions. Write conflicts are handled by last writer wins or application specific reconciliation.
  • They've found their system gives high throughput and near linear scalability while providing causal+ consistency.
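The get-transaction bullet above can be sketched in two rounds, as the paper describes: read optimistically, then re-fetch any key whose returned version is older than what another result causally depends on. The store layout, integer versions, and the stale-read simulation below are illustrative assumptions:

```python
# Sketch of a two-round read-only get transaction over a versioned store.

class VersionedStore:
    def __init__(self):
        self.versions = {}   # key -> {version: (value, deps)}

    def put(self, key, version, value, deps=()):
        self.versions.setdefault(key, {})[version] = (value, list(deps))

    def latest(self, key):
        v = max(self.versions[key])
        value, deps = self.versions[key][v]
        return (value, v, deps)

    def get_version(self, key, version):
        value, deps = self.versions[key][version]
        return (value, version, deps)

class StaleOnceStore(VersionedStore):
    """Simulates a round-one read answered before a newer version arrived."""
    def __init__(self, stale_keys):
        super().__init__()
        self.stale = set(stale_keys)

    def latest(self, key):
        if key in self.stale:
            self.stale.discard(key)
            v = min(self.versions[key])   # serve the old version once
            value, deps = self.versions[key][v]
            return (value, v, deps)
        return super().latest(key)

def get_transaction(store, keys):
    # Round 1: optimistically read value, version, and deps for each key
    results = {k: store.latest(k) for k in keys}

    # Find keys whose returned version is older than a version that another
    # result causally depends on
    needed = {}
    for value, version, deps in results.values():
        for dep_key, dep_version in deps:
            if dep_key in results and dep_version > results[dep_key][1]:
                needed[dep_key] = max(dep_version, needed.get(dep_key, dep_version))

    # Round 2 (at most one): fetch those specific newer versions
    for k, v in needed.items():
        results[k] = store.get_version(k, v)

    return {k: results[k][0] for k in keys}

store = StaleOnceStore(stale_keys={"photo"})
store.put("photo", 1, "old.jpg")
store.put("photo", 2, "beach.jpg")
store.put("album", 1, ["photo"], deps=[("photo", 2)])

# Round 1 reads photo v1 (stale) but the album depends on photo v2, so the
# transaction re-fetches photo v2 and returns a causally consistent view.
result = get_transaction(store, ["photo", "album"])
print(result["photo"])   # beach.jpg
```

Note the algorithm never blocks or takes locks: a second round only happens when round-one results reveal a missing dependency, matching the at-most-two-rounds claim in the quote.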

The details of how all this works quickly spiral out of control. Best to watch the video and read the paper for the details. The questioning at the end of the video is contentious and entertaining. I'd like to see that part go on longer as everyone seems to have their own take on what works best. It's pretty clear from the questions that there's no one best way to build these systems. You pick what's important to you and create a solution that gives you that. You can't have it all, it seems, but what you can have is the question.

Will we see a lot of COPS clones immediately spring up like we saw when the Dynamo paper was published? I don't know. Eventually consistent systems like Cassandra get you most of what COPS has without the risk. Though COPS has a lot of good features. Causal ordering is a beautiful property for a programmer, as are the ALPS properties in general. The emphasis on low latency is a winner too. Thick client libraries are a minus as they reduce adoption rates; complex client libraries are very difficult to port to other languages. Not being able to deal with write-write conflicts in an equally programmer-friendly manner while maintaining scalability for large systems is unfortunate, but that's just part of the reality of a CAP world. You could say using a strongly consistent model in each datacenter could limit the potential size of your system. But all together it's interesting and different. Low latency and geo-distribution, combined with a more intuitive consistency model, could be big drivers of adoption for developers, and it's developers that matter in these sorts of things.