Stuff The Internet Says On Scalability For November 30, 2012

We're back and it's HighScalability Time:

  1. 1B Tweets Every 2.5 Days: Twitter. 1 billion transactions/day: Salesforce. 
  2. Storing 700 terabytes of data in a single gram of DNA. Downside: reading is very slow. And any data might conflict with the messages aliens have already inserted.
  3. Assuming my infonome is 1 TB, it would cost $1,338,333 to store my existence in Amazon Glacier for a long nowish 10,000 years. #notbad
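    A back-of-the-envelope check on that number, assuming Glacier's 2012 list price of about $0.01/GB-month (the exact rate and rounding behind the quoted $1,338,333 are unknown, so treat this as ballpark math only):

    ```java
    // Ballpark sketch of the Glacier storage-cost claim. The $0.01/GB-month
    // rate is an assumed 2012 list price, not the author's exact input, so
    // the result lands in the same neighborhood, not on the same dollar.
    public class GlacierCost {
        public static void main(String[] args) {
            double pricePerGbMonth = 0.01;  // USD, assumed 2012 Glacier rate
            double gigabytes = 1024;        // one 1 TB "infonome"
            int years = 10_000;
            double total = pricePerGbMonth * gigabytes * 12 * years;
            System.out.printf("~$%,.0f to store 1 TB for %,d years%n", total, years);
            // Prints ~$1,228,800 -- the same order of magnitude as the quoted figure.
        }
    }
    ```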
  4. Quotable Quotes:
    • @cloudpundit: @Werner: "I've hugged a lot of servers in my life, and believe me, they do not hug you back. They hate you." #reinvent
    • @jinman: Werner #reinvent The commandments of 21st century architectures 1) Controllable, 2) Resilient, 3) Adaptive and 4) Data Driven #cloud
    • @dandonovan78: Wow. Netflix video streaming has grown from 1M hours to 1 BILLION hours a month in less than 4 years. Insane. #scalability #aws #reinvent
    • @sandfoxuk: Linear scalability - the spherical cow of cloud systems…. #PlanningFail
    • @rbranson: the year is 2020. Atomic clocks now embedded in ARM SoCs. Spanner is commodity. Java still limited to 20GB heaps.
    • @stu: "high margins covers for a lot of sins; you don't have to be efficient unless you are low margin" Bezos #reinvent
  5. Why I Have Given Up on Coding Standards. As someone with some experience with coding standards, I'll just note that master craftsmen operate to a detailed contract where the patron specifies virtually everything of interest. It's very far from a results-only mindset. All those people we think of as great artists before Michelangelo were considered craftsmen in their day: someone with a skill, employed for money, like a baker. As a programmer on a team you are operating on behalf of a patron, and it's their rules over an artistic/star/look-at-me temperament. To argue for a design standard and not a coding standard is like saying an architect shouldn't care how their bricks are made, when it's really the quality of the bricks that dictates structure under stress. If you are building something that isn't under stress you don't need an architect, you don't need to worry about structure, you can just let your heart move you. But when you are building something that matters, that someone is paying for, that is subject to extreme stress, then structure dictates both form and function, and a key part of structure is code; that's why you have a standard. Unfortunately with code we are still stuck in a highly empirical age, so standards are experience-based and highly arguable, which leaves a lot of room for displeasure. But Gothic cathedrals were built to the sky using lessons learned from experience, and that's essentially what we are doing with software systems.
  6. Netflix has taken an interesting approach towards resiliency engineering with Hystrix. Typically endpoints are in charge of dealing with failure. Often poorly. Netflix has introduced a proxy architecture that implements various strategies like Circuit Breaker. Looks well thought out and well documented. Except the Thread Pools. Thread Pools are evil.
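    The Circuit Breaker idea at Hystrix's core is simple enough to sketch. This is a minimal hand-rolled version showing the state machine; the class, method, and threshold names are invented here, and this is not Hystrix's actual API:

    ```java
    import java.util.function.Supplier;

    // Minimal circuit breaker sketch: after enough consecutive failures the
    // breaker "opens" and fails fast to a fallback instead of hammering a
    // sick dependency, then lets a trial call through once a cooldown
    // elapses. Hystrix layers isolation (those thread pools), metrics, and
    // dashboards on top of this core idea.
    class CircuitBreaker {
        private final int failureThreshold;
        private final long cooldownMillis;
        private int consecutiveFailures = 0;
        private long openedAt = 0;

        CircuitBreaker(int failureThreshold, long cooldownMillis) {
            this.failureThreshold = failureThreshold;
            this.cooldownMillis = cooldownMillis;
        }

        synchronized <T> T call(Supplier<T> remote, Supplier<T> fallback) {
            boolean open = consecutiveFailures >= failureThreshold;
            if (open && System.currentTimeMillis() - openedAt < cooldownMillis) {
                return fallback.get();       // open: fail fast, don't touch the endpoint
            }
            try {
                T result = remote.get();     // closed, or a half-open trial call
                consecutiveFailures = 0;     // success closes the breaker again
                return result;
            } catch (RuntimeException e) {
                consecutiveFailures++;
                openedAt = System.currentTimeMillis();
                return fallback.get();
            }
        }
    }
    ```

    Wrapping each remote call in something like breaker.call(() -> client.fetch(), () -> cachedDefault), with client and cachedDefault being whatever your app provides, is the whole pattern; everything else is operational polish.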
  7. Your Objects, the Unix Way. With ls having about fifty command-line options, complexity creeps in everywhere over time. The way is the way that is no way.
  8. Why we like competition, even between our ecosystem masters:
  9. CERN generates over 100 petabytes of data every year and will soon be pushing all that data around with a new terabit network.
  10. Counting is always surprisingly difficult at scale. counters + replication = awful performance? Great discussion on making counters work in Cassandra.
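    The hard part is that a counter increment is inherently read-modify-write, and racing writers (or replicas) can each read N and write N+1. A toy single-process illustration of the lost-update problem, using threads as stand-ins for replicas:

    ```java
    import java.util.concurrent.atomic.AtomicLong;

    // Toy illustration of why distributed counting is hard: a naive
    // read-modify-write increment loses updates under concurrency, the same
    // hazard replicas face without coordination. Threads stand in for
    // replicas here; this is an analogy, not Cassandra's implementation.
    public class LostUpdateDemo {
        static long naive = 0;                        // racy read-modify-write
        static final AtomicLong atomic = new AtomicLong();

        public static void main(String[] args) throws InterruptedException {
            Runnable work = () -> {
                for (int i = 0; i < 100_000; i++) {
                    naive = naive + 1;                // two writers can both read N, both write N+1
                    atomic.incrementAndGet();         // coordinated increment loses nothing
                }
            };
            Thread a = new Thread(work), b = new Thread(work);
            a.start(); b.start(); a.join(); b.join();
            System.out.println("naive:  " + naive);        // almost certainly < 200000
            System.out.println("atomic: " + atomic.get()); // exactly 200000
        }
    }
    ```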
  11. Ben Stopford's Where does Big Data meet Big Database offers a very useful way of looking at the recent accelerated evolution of databases: the market is converging from both sides towards a middle ground and integrated suites of complementary tools.
  12. Facebook with fascinating details on the techniques (JIT, side exits, HipHop bytecode, type prediction, parallel tracelet linking) they used to make their dynamic PHP compiler as efficient as their static compiler at serving traffic at scale. Impressive work.
  13. Wired on how Facebook Tackles (Really) Big Data With 'Project Prism':  In short, it automatically replicates and moves data wherever it’s needed across a vast network of computing facilities. “It allows us to physically separate this massive warehouse of data but still maintain a single logical view of all of it,” Parikh says. “We can move the warehouses around, depending on cost or performance or technology…. We’re not bound by the maximum amount of power we wire up to a single data center.”
  14. What changed when nothing seemed to change? Boundary uses their tools to uncover that not all Amazon instances are created equal.
  15. Why spend all your time on BigData trying to make others do things when you have all this SoloData that could reveal the path to self knowledge? Quantified Self.
  16. Bufferbloat – What Can You Do Today to Suffer Less. Great description and summary of the bufferbloat problem. You can never get latency back. You have to saturate a link before you see problems. Performance cliffs all over the internet.
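    The arithmetic behind the problem is worth seeing once. At saturation, queue delay is just buffered bytes divided by drain rate; the buffer and link sizes below are invented for illustration:

    ```java
    // Why oversized buffers hurt: at saturation, added latency equals the
    // bytes sitting in the buffer divided by the link's drain rate. The
    // numbers here (1 MB buffer, 1 Mbps uplink) are illustrative only.
    public class BufferbloatMath {
        public static void main(String[] args) {
            double bufferBytes = 1_000_000;       // 1 MB of queued packets
            double linkBitsPerSec = 1_000_000;    // 1 Mbps uplink
            double delaySeconds = (bufferBytes * 8) / linkBitsPerSec;
            System.out.printf("Queue delay at saturation: %.0f seconds%n", delaySeconds);
            // 8 seconds of added latency -- invisible until the link
            // saturates, which is why the problem hides so well.
        }
    }
    ```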
  17. Deep Value calculates Amazon’s EC2 service is 380% more expensive than running on their own hardware. Maybe they need a discount? Sources say that Amazon now offers special deals including discounts to enterprise companies doing as little as $250,000 a year in AWS business.
  18. Understanding when things go right is just as important as understanding when they stink says Raymond Chen: The non-obvious thing is that the performance metrics also look for sudden improvements in performance. Maybe the memory usage plummeted, or the throughput doubled. Generally speaking, a sudden improvement in performance has one of two sources.
  19. Two cons against NoSQL. Part II: 1) It’s very hard to move data out of one NoSQL store into any other system, even another NoSQL store. 2) There is no standard way to access a NoSQL data store.
  20. GraphChi: Runs very large graph computations on just a single machine, by using a novel algorithm for processing the graph from disk (SSD or hard drive). Programs for GraphChi are written in a similar vertex-centric model as GraphLab. GraphChi runs vertex-centric programs asynchronously (i.e., changes written to edges are immediately visible to subsequent computation) and in parallel. GraphChi also supports streaming graph updates and changing the graph structure while computing.
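    For a feel of the vertex-centric model, here's a hedged sketch of a PageRank-style update: each vertex reads its in-edges, computes, and writes its out-edges. The interfaces below are generic and illustrative, not GraphChi's actual API:

    ```java
    import java.util.List;

    // Generic vertex-centric update in the GraphChi/GraphLab style. In
    // GraphChi's asynchronous model, values written to edges here are
    // visible to later vertex updates in the same pass. These interfaces
    // are illustrative stand-ins, not GraphChi's real classes.
    interface Edge {
        double getValue();
        void setValue(double v);
    }

    interface Vertex {
        List<Edge> inEdges();
        List<Edge> outEdges();
    }

    class PageRankUpdate {
        static final double DAMPING = 0.85;

        void update(Vertex v) {
            double sum = 0;
            for (Edge in : v.inEdges()) {
                sum += in.getValue();            // gather: neighbor ranks from in-edges
            }
            double rank = (1 - DAMPING) + DAMPING * sum;
            int degree = v.outEdges().size();
            for (Edge out : v.outEdges()) {
                out.setValue(rank / degree);     // scatter: immediately visible downstream
            }
        }
    }
    ```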
  21. Full table scan vs full index scan performance: For a non covering index, the difference between a full table scan and an execution plan based on a full index scan is basically the difference between sequential reads and random reads: it can be close if you have fast storage or it can be very different if you have slow storage. < Surprising and interesting discussion on the MySQL Performance Blog.
  22. Improving block protocol scalability with persistent grants: When running more than 6 concurrent guests, there’s a notable speed improvement of the persistent grants implementation, and at 15 guests we are able to perform about 1.24 million IOPS (compared to the previous 340,000 IOPS)
  23. Nation States are still good for a grand gesture: China moves to beat U.S. in exascale computing.
  24. Lessons from Groupon: release like you eat, make it local.