Stuff The Internet Says On Scalability For January 24th, 2014

Hey, it's HighScalability time:

Gorgeous image from Scientific American's Your Brain by the Numbers

  • Quotable Quotes: 
    • @jezhumble: Google does everything off trunk despite 10k devs across 40 offices. 
    • @KentLangley: "in 2016. When it goes online, the SKA is expected to produce 700 terabytes of data each day" 
    • Jonathan Marks: It's actually a talk about how NOT to be creative. And what he [John Cleese] describes is the way most international broadcasters operated for most of their existence. They were content factories, slaves to an artificial transmission schedule. Because they didn't take time to be creative, they ended up sounding like a tape machine. They were run by a computer algorithm. Not a human soul. There was never room for a creative pause. Routine was the solution. And that's creativity's biggest enemy.

  • 40% better single-threaded performance in MariaDB. Using perf, the cache misses were tracked down, and the fix was using the right gcc flags. But the big hairy key idea is: on modern high-performance CPUs, it is necessary to do detailed measurements using the built-in performance counters in order to get any kind of understanding of how an application performs and what the bottlenecks are. Forget about looking at the code and counting instructions or cycles as we did in the old days. It no longer works, not even to within an order of magnitude.

  • Herb Sutter dilates on Thread Safety and Synchronization, Oversharing, and other things peculiar. When Socrates asks you a question, it's probably a question worth thinking about.

  • Google Compute Engine: What are the advantages of Google Compute Engine over Amazon's cloud offering? Great answer on Quora: Cheaper pricing and sub-hour pricing; Load Balancer needs no pre-warming; Persistent Disks that can be connected to multiple VMs; Better Block Storage; Integrated Networking; Better network throughput; Multi Region Images; Persistent IP Addresses; Faster Boot Times & Auto Restart of VMs; Live Migration.

  • In China everything is bigger. There are 34.88 million trips by air, 235 million trips by rail, and 2.85 billion road trips during the peak of China’s Spring Festival holiday period. The GemFire in-memory data grid was tasked to handle the load: seventy-two UNIX systems and a relational database were replaced with 10 primary and 10 backup x86 servers, a much more cost-effective model that holds 2 terabytes, or one month, of ticket data in memory. Holiday travel periods create peaks of 10 million tickets sold per day, 20 million passengers visiting the web site, 1.4 billion page views per day, and 40,000 visits per second driving up to 5 million tickets sold online per day.

  • Five Things About Scaling MongoDB: create the right indexes for your queries; on Linux, choose ext4 or xfs; since MongoDB is constantly accessing its files, you can get a significant performance boost by telling Linux not to track files' access times; calculate working set size correctly; use SSDs when storing data larger than RAM; shard. 
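The access-time tip above usually comes down to mounting the data volume with the noatime option. A minimal sketch of what that might look like in /etc/fstab, with a hypothetical device and mount point:

```
# /etc/fstab -- hypothetical entry for a MongoDB data volume
# noatime stops the kernel from writing an access-time update on every read
/dev/sdb1  /var/lib/mongodb  ext4  defaults,noatime  0  2
```

The same effect can be applied to an already-mounted filesystem with `mount -o remount,noatime /var/lib/mongodb`.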

  • Backblaze, who know a lot about disks, found Hitachi drives more reliable than Seagate and Western Digital. 

  • Home of the free. Home of the brave. Home of the disconnected. Reining in the Cost of Connectivity: It is abundantly clear that the United States’ collective broadband experience does not measure up to that of other countries. Americans often face higher prices, slower speeds, and a frequently frustrating consumer experience overall. A number of these problems persist because of failures to act by policymakers at all levels.

  • The Dropbox outage post-mortem. It happened as it often does, on an upgrade. Active machines had a new OS installed, which impacted some master-replica pairs, and the site went down. The fix is more checks, which are of course themselves a future source of potential error. Also, recovery was slow because recovering data from MySQL backups is slow. The fix there is a tool that parallelizes the replay of binary logs.

  • Speaking of what happens after death, here are 51 Startup Failure Post-Mortems. A lot of the usual, which doesn't mean it's not worth reading, or it wouldn't be the usual. I liked the poetic finality of this advice from Sonar: "We focused on engagement, which we improved by orders of magnitude. No one cared."

  • Henrik Joreteg makes a case for how web apps can compete in the off-line world of mobile apps. The web has outgrown the browser. (A web-lover's web-loving rant.): Call me an optimist, but I think the capabilities that ServiceWorkers promise us will shine a light on the bizarre awkwardness of the concept of opening a browser to access offline apps. The web platform's capabilities have outgrown the browser.

  • The Lambda architecture: principles for architecting realtime Big Data systems: you should be able to run ad-hoc queries against all of your data to get results, but doing so is unreasonably expensive in terms of resources. The idea is to precompute the results as a set of views, and you query the views. I tend to call these Question Focused Datasets (e.g. pageviews QFD).
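The precomputed-view idea fits in a few lines of Python (a toy illustration, not the article's implementation): a batch layer folds the immutable master dataset of raw events into a pageviews QFD, and queries read only the view.

```python
from collections import Counter

# Toy "master dataset": raw, immutable pageview events.
events = [
    {"url": "/home", "ts": 1}, {"url": "/about", "ts": 2},
    {"url": "/home", "ts": 3}, {"url": "/home", "ts": 4},
]

def build_pageviews_view(events):
    """Batch layer: precompute a pageviews-per-URL view from all raw events."""
    return dict(Counter(e["url"] for e in events))

# Serving layer: queries hit the cheap precomputed view,
# not an expensive scan over every raw event.
view = build_pageviews_view(events)
print(view["/home"])  # 3
```

In a real Lambda deployment the batch layer would be a periodic Hadoop-style job and a speed layer would patch in events that arrived since the last batch run; the query path stays the same.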

  • Nice step-by-step on Architecting Highly Available ElastiCache Redis replication cluster in AWS VPC from Harish Ganesan. Not exactly simple, but the GUI makes it tractable. 

  • How Misaligning Data Can Increase Performance 12x by Reducing Cache Misses: The main difference is that there isn’t just one garage-cache, but a hierarchy of them, starting with the L1. If we use the cache optimally, we can store 32,768 items. But since we’re accessing things that are page (4k) aligned, we effectively lose the bottom log₂(4k) = 12 bits, which means that every access falls into the same set, and we can only loop through 8 things before our working set is too large to fit in the L1! But if we’d misaligned our data to different cache lines, we’d be able to use 8 * 64 = 512 locations effectively. 
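The set-index arithmetic above is easy to check with a toy model, assuming a typical 32 KiB, 8-way L1 with 64-byte lines (a sketch, not the article's code):

```python
# L1 data cache geometry (typical x86): 32 KiB, 8-way, 64-byte lines.
LINE = 64
WAYS = 8
SETS = 32 * 1024 // (LINE * WAYS)   # 64 sets

def set_index(addr):
    """Which L1 set a byte address maps to: line number modulo set count."""
    return (addr // LINE) % SETS

# 4 KiB-aligned objects: the low 12 bits are zero, so every object lands
# in the same set -- only WAYS (8) of them can be cached at once.
aligned = {set_index(i * 4096) for i in range(100)}

# Offsetting each object by one extra cache line spreads them over all sets.
misaligned = {set_index(i * 4096 + i * LINE) for i in range(100)}

print(len(aligned), len(misaligned))  # 1 64
```

With all sets usable, the effective capacity is WAYS * SETS = 8 * 64 = 512 locations, matching the article's figure.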

  • PayPal with a good article on Building infinite scrolling in the PayPal mobile web app: The core of the infinite scroll is the ability to reuse cells that are no longer visible.
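The cell-reuse idea can be sketched as a fixed pool of cells that gets re-bound to new data as rows scroll into view, so no new nodes are ever created (hypothetical names in Python for illustration; PayPal's actual code is JavaScript operating on DOM nodes):

```python
class CellPool:
    """Minimal sketch of infinite-scroll cell recycling: a fixed pool of
    cells is re-bound to new rows instead of allocating fresh cells."""

    def __init__(self, size):
        # Create the pool once; these objects stand in for DOM nodes.
        self.cells = [{"row": None} for _ in range(size)]

    def bind(self, first_visible, rows):
        # Each on-screen row reuses the cell at (row index mod pool size).
        for r in range(first_visible, first_visible + len(self.cells)):
            self.cells[r % len(self.cells)]["row"] = rows[r]

pool = CellPool(3)
rows = list(range(1000))
pool.bind(0, rows)    # cells show rows 0, 1, 2
pool.bind(50, rows)   # the SAME three cell objects now show rows 50, 51, 52
print(sorted(c["row"] for c in pool.cells))  # [50, 51, 52]
```

The pool only needs to be slightly larger than the number of cells visible at once, so memory stays constant no matter how far the user scrolls.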

  • Brendan Gregg analyzed the virtualization performance of Zones, Xen, and KVM. Excellent description of their respective I/O paths in detail: Zones add no overhead, whereas Xen and KVM do, which could limit network throughput to a quarter of what it could be. Lots of love for DTrace as an ace investigative tool. 

  • Still impressive. Scaling the Super Cloud: His company has made big waves and proven that the combination of Amazon servers and their own innovations can open new infrastructure options for users with HPC applications. For instance, they recently spun up a 156,000-core Amazon Web Services (AWS) cluster for Schrödinger to power a quantum chemistry application across 8 geographical regions. While many of you can project what a supercomputer of that magnitude might cost, the duration of their run to sort compounds cost them around $33,000—and ran in less than a day distributed across 16,788 instances.

  • Akka ran a large test on Google Compute Engine and was able to reach 2,400 nodes, as well as starting up a 1,000-node cluster in just over four minutes. It took 15 to 30 seconds to add 20 nodes.

  • Nice How To: Hosting on Amazon S3 with CloudFront. Lots of details on how to get your static site running on S3. 

  • Free book Scaling Big Data with Hadoop and Solr by Hrishikesh Karambelkar. I have not read it but the overview makes it look like it may be a good source of information.