Friday, July 20, 2012

Stuff The Internet Says On Scalability For July 20, 2012

It's HighScalability Time:

  • 4 Trillion Objects: Windows Azure Storage
  • Quotable Quotes:
    • @benjchristensen: “What if we could make the data dense and cheap instead of sparse and expensive?” James Gosling @liquidrinc
    • @sinetpd360: People trying new things and sharing is what helps create scalability. Jim Rickabaugh #siis2012
    • @rbranson: This h1.4xlarge running 160GB PostgreSQL database pushing ~17,200 index scan rows/sec. r_await is 0.79ms, box is 92% idle.
    • @sturadnidge: faster net and disk greatly reduces repair time and impact so we can load up the instances with far more data
  • With Amazon announcing 2TB SSD instances, the age of SSD has almost arrived. Netflix has already published a very thorough post on the wonderfulness of SSDs for both performance and taming the long latency tail. They see 100K IOPS or 1GByte/sec on an untuned system. Netflix projects: the hi1.4xlarge configuration is about half the system cost for the same throughput; mean read request latency was reduced from 10ms to 2.2ms; 99th percentile request latency was reduced from 65ms to 10ms. Vertical scaling gets a huge boost as the bottleneck will likely move from IO back to CPU. Software will need a rewrite to be SSD optimized. Think about removing caching layers. Think reserved instances to bring down the cost. Think putting hot data on SSDs. We'll also see pressure to fix TCP interrupt bottlenecks and IRQ affinity problems.
  • The deeper we peer into the secret heart of the universe, the more data we generate and the bigger the computer systems we need to make sense of it all: Query 32TB of a trillion particle dataset in 3 seconds
  • If like me you are wondering how we'll survive the heat-death of the universe, well, there's an app for that: a superconductive ring could serve as a time crystal. First you need an ion trap. In this state, the electric and magnetic fields are no longer needed to maintain the shape of the crystal and the spin of its constituent ions. The result is a time crystal – or indeed a space-time crystal, because the ion ring repeats in both space and time. You could be reborn a God in the next cycle of the Big Bang.
  • Riddle me this asks James Hamilton: Why there are Datacenters in NY, Hong Kong, and Tokyo? It's all about latency, which is why you just can't move to Iceland and make it all better: Because of the cruel realities of the speed of light, companies must site data centers where their customers are. That's why companies selling world-wide often need to have datacenters all over the world.
  • A Look at the Network - Searching for Truth in Distributed Applications. C. Scott Andreas finds truth in thinking of applications as dynamically fallible graphs that can be understood with deep insight into their nature and the data provided by probing tools.
  • Cassandra is getting virtual nodes. Here's the RFC: Cassandra Virtual Nodes, with a very interesting thread of conversation on the pros and cons of the design. The main concern seemed to be that it's a bad fit with the OrderedPartitioner. Also, Cassandra connections are costly because of expensive per-connection state.
  • The missing switch: High-performance monolithic graphene transistors created. ChuckMcM with insight on what this really means: One of the more interesting things here is the production process. Now if instead you can 'print' the circuitry, you can do mixed signal stuff. Then you can make your entire system as one board-sized integrated circuit. Think an LCD television (transistors on glass) where instead of each transistor being the same you put a CPU and various other peripherals there. Size is not as important but yield still is.
  • DBToaster-generated code: typically 3-4 orders of magnitude faster than existing state-of-the-art data-management systems.
  • Curt Monash with a good look at Metamarkets’ back-end technology for BI SaaS: data lands on Amazon S3; Hadoop and Pig summarize and denormalize it and put it back into S3; Hadoop loads the data into Druid, a distributed analytic DBMS. Also: Introduction to MemSQL; Disk, flash, and RAM.
  • AlwaysOn Availability Groups in AWS Revisited. Jeremiah Peschka with a good explanation of using "AlwaysOn Availability Groups and set up an asynchronous replica of your data into Amazon Web Services." 
  • Here's the video of Neil Conway on Bloom and CALM. I can tell we are still in a highly academic phase of the development cycle because I understood very little of what's going on. Though when someone asks what kind of data your system works on, you need a serious answer.
  • A Twist On Scalability. Karoly Negyesi explains how Drupal can use the SSI capabilities of nginx to "stitch together" logic: a request comes in; nginx issues a request to Webdis which reads the pieces out of Redis and serves them back to nginx; SSI processing happens recursively until we arrive at the final page. Adding in a proxy_cache directive to handle cached content supports serving 10,000 pages per second on modest hardware.
  • Facebook and Akamai are going SPDY.
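The Netflix SSD numbers above hinge on the difference between mean and 99th-percentile latency: a small fraction of slow requests dominates the tail even when the average looks fine. A minimal sketch of how the two diverge, using synthetic latency samples (not Netflix's data):

```python
# Mean vs p99 on a synthetic workload: mostly-fast reads plus a
# few slow outliers (the "long tail"). Numbers are made up.
import random

random.seed(1)
samples = [random.uniform(1, 3) for _ in range(980)] + \
          [random.uniform(40, 80) for _ in range(20)]

def percentile(data, p):
    ranked = sorted(data)
    return ranked[int(p / 100 * (len(ranked) - 1))]

mean = sum(samples) / len(samples)
# The mean looks fine; the p99 exposes the slow outliers.
print(f"mean ~{mean:.1f} ms, p99 ~{percentile(samples, 99):.1f} ms")
```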
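James Hamilton's speed-of-light point can be put in numbers: signals in fiber propagate at roughly 200,000 km/s (about two-thirds of c), so intercontinental round trips cost on the order of 100ms before any server does anything. A back-of-envelope sketch; the distances are approximate great-circle figures, not from the article:

```python
# Best-case round-trip time over fiber, ignoring routing detours,
# switching delay, and TCP handshakes -- reality is worse.
FIBER_KM_PER_MS = 200.0  # ~200,000 km/s, i.e. ~c / 1.5

routes = {  # approximate great-circle distances (assumptions)
    "New York -> London": 5_570,
    "New York -> Tokyo": 10_850,
    "New York -> Hong Kong": 12_960,
}

for route, km in routes.items():
    rtt_ms = 2 * km / FIBER_KM_PER_MS
    print(f"{route}: best-case RTT ~{rtt_ms:.0f} ms")
```

Even the physics-limited best case is far above the ~10ms budget interactive apps want, which is why datacenters go where the customers are.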
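The virtual-node idea behind the Cassandra RFC: instead of one token per physical node, each node claims many small token ranges, so keys spread evenly and a dead node's data can be rebuilt by the whole cluster rather than one neighbor. An illustrative consistent-hash-ring sketch (not Cassandra's actual code; the vnode count is arbitrary):

```python
# Consistent hash ring with virtual nodes: each physical node
# appears VNODES_PER_NODE times at hashed positions on the ring.
import bisect
import hashlib

VNODES_PER_NODE = 8  # illustrative; real deployments use more

def token(value: str) -> int:
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes):
        # One ring entry per (node, vnode index) pair.
        self.ring = sorted(
            (token(f"{node}-{i}"), node)
            for node in nodes
            for i in range(VNODES_PER_NODE)
        )
        self.tokens = [t for t, _ in self.ring]

    def owner(self, key: str) -> str:
        # The first vnode clockwise from the key's token owns it.
        i = bisect.bisect(self.tokens, token(key)) % len(self.ring)
        return self.ring[i][1]

ring = Ring(["node-a", "node-b", "node-c"])
print(ring.owner("user:42"))
```

Because each node's ranges are scattered around the ring, adding or losing a node moves many small slices of data instead of one huge contiguous range.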
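The DBToaster speedups come from incremental view maintenance: rather than re-running an aggregate query after every update, the generated code keeps the result materialized and applies only the delta. A toy sketch of the principle (nothing like DBToaster's actual generated code):

```python
# Incrementally maintained SUM view: each insert/delete applies an
# O(1) delta instead of an O(n) rescan of the base table.
class RunningSum:
    def __init__(self):
        self.total = 0.0

    def insert(self, v):
        self.total += v  # delta on insert

    def delete(self, v):
        self.total -= v  # delta on delete

view = RunningSum()
for v in [3, 5, 7]:
    view.insert(v)
view.delete(5)
print(view.total)  # 10.0
```

DBToaster does this for far richer queries (joins, nested aggregates), but the win is the same: per-update work proportional to the change, not the data.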
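The SSI "stitching" in the Drupal/nginx item can be modeled in a few lines: a page template contains include directives, each fetched fragment may itself contain directives, and expansion recurses until the final page is assembled. A toy sketch, with a dict standing in for Webdis/Redis and made-up fragment names:

```python
# Recursive SSI expansion: substitute include directives until
# none remain, mimicking nginx pulling fragments from Webdis/Redis.
import re

INCLUDE = re.compile(r'<!--#include virtual="([^"]+)" -->')

fragments = {  # hypothetical fragment store
    "/page": '<html><!--#include virtual="/header" --><p>body</p></html>',
    "/header": '<header><!--#include virtual="/nav" --></header>',
    "/nav": "<nav>links</nav>",
}

def render(path: str) -> str:
    body = fragments[path]
    while INCLUDE.search(body):  # recurse until fully expanded
        body = INCLUDE.sub(lambda m: fragments[m.group(1)], body)
    return body

print(render("/page"))
```

With a proxy cache in front, most requests never reach this assembly step at all, which is where the 10,000 pages/sec figure comes from.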

This week's selection:

 
