Monday, January 26, 2015

Paper: Immutability Changes Everything by Pat Helland

I was excited to see that Pat Helland has published another thought-provoking paper: Immutability Changes Everything. If video is more your style, Pat gave a wonderful talk on the same subject at RICON 2012 (video, slides).

It's fun to see how Pat's thinking is evolving over time as he's worked at Tandem Computers (Transaction Monitoring Facility), Amazon, Microsoft (Microsoft Transaction Server and SQL Service Broker), and now Salesforce.

You might have enjoyed some of Pat's other visionary papers: Life beyond Distributed Transactions: an Apostate’s Opinion; The end of an architectural era: (it's time for a complete rewrite); and Idempotence Is Not a Medical Condition.

This new paper is a high-level overview of why immutability, the idea that destructive updates are not allowed, is a huge architectural win, and why, thanks to cheaper disk, RAM, and compute, it's now financially feasible to keep all the things. The key insight is that without data updates, coordination in a distributed system becomes a much simpler problem to solve. Red Hat is linking microservices and containers with immutability.
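To make the coordination point concrete, here is a minimal Python sketch (not from the paper; the SnapshotStore name and structure are purely illustrative): readers share frozen snapshots without taking any locks, and an "update" simply publishes a new snapshot instead of mutating shared state.

```python
from types import MappingProxyType
import threading

class SnapshotStore:
    """Toy store in which every write publishes a brand-new immutable
    snapshot instead of mutating shared state in place."""

    def __init__(self):
        self._snapshot = MappingProxyType({})   # read-only view of the data
        self._write_lock = threading.Lock()     # writers still serialize...

    def read(self):
        # ...but readers never take a lock: any snapshot they grab is
        # frozen forever, so there is nothing to coordinate with.
        return self._snapshot

    def put(self, key, value):
        with self._write_lock:
            new_data = dict(self._snapshot)              # copy the old version
            new_data[key] = value
            self._snapshot = MappingProxyType(new_data)  # publish the new one

store = SnapshotStore()
store.put("user:1", {"name": "Pat"})
view = store.read()                        # safe to hand to any thread
store.put("user:1", {"name": "Pat Helland"})
print(view["user:1"])                      # old snapshot still reads {'name': 'Pat'}
```

Because a snapshot can never change after it is handed out, readers in other threads never need to coordinate with writers; only writers coordinate with each other.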

Immutability is an architectural concept that's been gaining steam on several fronts. Facebook is using a declarative immutable programming model in both the model and the view. We are seeing the idea of immutable infrastructure rise in DevOps. Aeron is a new messaging system that uses a persistent log to good advantage. The Lambda Architecture makes use of immutability. Datomic is a database that treats data as a time-ordered series of immutable objects.

If that's of interest, then you'll like the paper.

Overview:

There is an inexorable trend towards storing and sending immutable data. We need immutability to coordinate at a distance and we can afford immutability, as storage gets cheaper. This paper is simply an amuse-bouche on the repeated patterns of computing that leverage immutability. Climbing up and down the compute stack really does yield a sense of déjà vu all over again.

 

It wasn’t that long ago that computation was expensive, disk storage was expensive, DRAM was expensive, but coordination with latches was cheap. Now, all these have changed using cheap computation (with many-core), cheap commodity disks, and cheap DRAM and SSD, while coordination with latches gets harder because latch latency loses lots of instruction opportunities. We can now afford to keep immutable copies of lots of data, and one payoff is reduced coordination challenges.

Designs are driving towards immutability. We need immutability to coordinate at ever increasing distances. We can afford immutability given room to store data for a long time. Versioning gives us a changing view of things while the underlying data is expressed with new contents bound to a unique identifier.
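As a rough illustration of that last idea, new contents bound to a unique identifier, here is a small Python sketch (the VersionedValue class is my own illustration, not something from the paper):

```python
import uuid

class VersionedValue:
    """Sketch of versioning over immutable data: every update stores new
    contents under a fresh unique identifier; 'latest' is just a pointer."""

    def __init__(self):
        self._versions = {}   # version_id -> immutable contents
        self._history = []    # ordered version ids, oldest first

    def update(self, contents):
        version_id = uuid.uuid4().hex        # unique identity for this content
        self._versions[version_id] = contents
        self._history.append(version_id)
        return version_id

    def latest(self):
        return self._versions[self._history[-1]]

    def at(self, version_id):
        # Old versions are never destroyed, so any reader holding an old
        # identifier still sees exactly what it saw before.
        return self._versions[version_id]

doc = VersionedValue()
v1 = doc.update("draft")
v2 = doc.update("final")
assert doc.at(v1) == "draft" and doc.latest() == "final"
```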


Reader Comments (2)

I really don't buy the immutability hype, as I see a number of misconceptions:

- Immutability can increase/enable/ease concurrent processing when applied at asynchronous boundaries (e.g. message passing between threads using memory/network); a minimal sketch of this pattern appears after this list.
- "Preemptive immutability", aka copying all data even when it is subject to serial, single-threaded processing, imposes a massive performance hit. Selling "copy all your data in order to scale" is a hoax. I have not come across a real-world benchmark/application where this actually led to anything but consuming more CPU to achieve mediocre if not ridiculous results.
- Machines have gotten faster and memory bigger, but requirements also rose: more users, more data, more real time. Investing hardware gains into immutability will let you solve yesterday's requirements with today's hardware. Not competitive.
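A minimal sketch of the asynchronous-boundary pattern from the first point above, assuming a simple producer/consumer setup (the Tick message type is purely illustrative): frozen messages are handed across a queue, so the consuming thread can read them without further locking or defensive copying.

```python
import queue
import threading
from dataclasses import dataclass

@dataclass(frozen=True)        # frozen => fields cannot be reassigned
class Tick:
    symbol: str
    price: float

inbox = queue.Queue()          # the asynchronous boundary between threads

def consumer():
    while True:
        tick = inbox.get()
        if tick is None:       # poison pill ends the consumer
            break
        print(f"{tick.symbol} @ {tick.price}")   # safe: the message is immutable

t = threading.Thread(target=consumer)
t.start()
inbox.put(Tick("ACME", 101.5))
inbox.put(None)
t.join()
```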

The underlying theoretical assumption is flawed because:

Yes, latches are expensive, but so is moving data.
Immutable systems have to move *a lot* of data, so that becomes the bottleneck (unfortunately the cost of moving data increases with the number of cores/processing units).

Practice tells us one has to carefully reduce both factors to get the best value out of many-core hardware: reduce/avoid locks and reduce/avoid movement of data. First scale up, *then* scale out.

January 27, 2015 | Unregistered Commenter Ruediger Moeller

@Ruediger Moeller: SSDs do the same thing: they write almost-immutable data. It is not a "misconception".

SSDs do it so well that they look like a normal fast disk. Data are written only to erased pages, with sizes between 2 KB and 16 KB. The logical address of the data is continuously remapped by the Flash Translation Layer (FTL) to the last physical address where that data was written. Pages are erased together in contiguous aligned blocks of 128 or 256 pages. Nothing can be rewritten without a complete erase of the block. The block erase is the slowest operation and takes several milliseconds. Some types of flash can survive 5k program/erase cycles, other types 100k cycles. Therefore wear leveling is used and data are never written to the same place repeatedly. The FTL itself is also written to flash. Random writes of small or unaligned data (smaller than a few megabytes) lead to fragmentation of the free space and of the FTL mapping. The data are copied and remapped by the SSD garbage collector in spare time in order to prepare contiguous blocks that can then be erased; otherwise the disk gets slow, with long latency. (See Coding for SSDs, part 1; the consequences are mostly in part 6.)
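A toy model of the remapping described above, greatly simplified (the page count, the dict-based mapping, and the ToyFTL name are illustrative, not how a real controller is built):

```python
class ToyFTL:
    """Very simplified Flash Translation Layer: a logical page is never
    overwritten in place; each write goes to the next erased physical page
    and the logical->physical map is updated, leaving the old page as
    garbage for a later block erase."""

    def __init__(self, total_pages=1024):
        self.flash = [None] * total_pages   # physical pages (None = erased)
        self.mapping = {}                   # logical page -> physical page
        self.next_free = 0                  # next erased physical page
        self.garbage = set()                # stale pages awaiting block erase

    def write(self, logical_page, data):
        if self.next_free >= len(self.flash):
            raise RuntimeError("no erased pages left; GC must erase a block")
        old = self.mapping.get(logical_page)
        if old is not None:
            self.garbage.add(old)           # old copy goes stale, never rewritten
        self.flash[self.next_free] = data
        self.mapping[logical_page] = self.next_free
        self.next_free += 1                 # writes are append-only
        return self.mapping[logical_page]

    def read(self, logical_page):
        return self.flash[self.mapping[logical_page]]

ftl = ToyFTL()
ftl.write(7, b"hello")       # lands on physical page 0
ftl.write(7, b"hello v2")    # remapped to physical page 1; page 0 is garbage
print(ftl.read(7), ftl.garbage)   # b'hello v2' {0}
```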

Some data centers also prefer Log-Structured Merge-Tree (LSM tree) databases in order to improve write performance on SSDs, even though more read operations are then necessary. Old data need not be moved and merged until a high percentage of rows has been updated or deleted. The merge operation can run incrementally in steps of a few megabytes, with pauses and without filling the cache. The lowest merge levels can preferably run during periods of moderate traffic. I can imagine that this runs better than the internal SSD garbage collector does after many small random write requests.
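A bare-bones sketch of the LSM idea, assuming a tiny memtable that is flushed to immutable sorted runs, with reads checking the newest data first (the TinyLSM class and its thresholds are purely illustrative):

```python
import bisect

class TinyLSM:
    """Bare-bones LSM sketch: writes go to an in-memory memtable; when it
    fills up it is flushed as an immutable sorted run. Reads check the
    memtable first, then runs from newest to oldest, so updates and
    deletes never rewrite old data in place."""

    TOMBSTONE = object()                 # marker for deleted keys

    def __init__(self, memtable_limit=4):
        self.memtable = {}
        self.memtable_limit = memtable_limit
        self.runs = []                   # sorted (key, value) lists, oldest first

    def put(self, key, value):
        self.memtable[key] = value
        if len(self.memtable) >= self.memtable_limit:
            self._flush()

    def delete(self, key):
        self.put(key, self.TOMBSTONE)    # a delete is just another write

    def get(self, key, default=None):
        if key in self.memtable:
            value = self.memtable[key]
            return default if value is self.TOMBSTONE else value
        for run in reversed(self.runs):  # the newest run wins
            keys = [k for k, _ in run]
            i = bisect.bisect_left(keys, key)
            if i < len(run) and run[i][0] == key:
                value = run[i][1]
                return default if value is self.TOMBSTONE else value
        return default

    def _flush(self):
        self.runs.append(sorted(self.memtable.items()))
        self.memtable = {}

db = TinyLSM(memtable_limit=2)
db.put("a", 1); db.put("b", 2)    # flushed as run 1
db.put("a", 10); db.delete("b")   # flushed as run 2; newer values shadow older
print(db.get("a"), db.get("b"))   # 10 None
```

Real LSM engines add background compaction of the runs, which is exactly the incremental merge described in the comment above.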

Pat Helland said it expressively and in simplified form, to inspire people to start thinking another way. Everyone knows that nothing is usually forever, and not all merge levels are necessarily the same. Users can profit from this architecture: the last merge operation can be combined inexpensively with a full backup, and users can enable a trash bin, a full history of important fields, etc. All these "decorations" can be postponed a little, to a time of normal load, if the recent information can be found in a good LSM tree.

October 8, 2016 | Registered Commenter Hynek Černoch
