
Stuff The Internet Says On Scalability For May 4, 2012

It's HighScalability Time:

  • Quotable quotes:
    • Richard Feynman: Suppose that little things behave very differently than anything big
    • @orgnet: "Data, data everywhere, but not a thought to think" -- John Allen Paulos, Mathematician
    • @bcarlso: just throw out the word "scalability". That'll bring em out
    • @codypo: Here are the steps to the Scalability Shuffle. 1: log everything. 2: analyze logs. 3: profile. 4: refactor. 5: repeat.
    • @FoggSeastack: If math had been taught in a relevant way I might have been a  person today.
    • @secboffin: I know a programming joke about 10,000 mutexes, but it's a bit contentious.
  • Twitter gets personal with Improved personalization algorithms and real-time indexing, a tale of a real-time tool chain. Earlybird is Twitter's real-time search system. Every Tweet has its URLs extracted and expanded. URL contents are fetched via SpiderDuck. Cassovary, a graph processing library, is used to find important connections. Then Twitter's search engine is used to find URLs shared in the circle of important connections. Those links are converted into stories and the stories are ranked by tweet frequency. A lot of stuff going on in near-real time.
  • Inktomi's Wild Ride - A Personal View of the Internet Bubble. When the bubble bursts, it hurts. Here's Dr. Eric Brewer's telling of "the fascinating history of Inktomi," which gives "an up close and personal view of what the Internet bubble meant -- both on the way up and on the way down."
  • Under the hood: Data diving with Scuba. Facebook creates a system for the real-time, ad-hoc analysis of arbitrary datasets.
  • Replicated/Fault-tolerant atomic storage. Murat Demirbas makes an unusual promise for a technical paper: There is this elegant algorithm for replicated/fault-tolerant atomic storage that I think every distributed systems researcher/developer should know about. It is simple and powerful. And, it is fun; I promise your brain will feel better about itself after you learn this majority replication algorithm. The algorithm employs majority replication to provide atomic read/write operations in the presence of crash failures under an asynchronous execution model.
  • The Database As Queue Anti-Pattern. Mike Hadlow really doesn't like using the database as a queue. Really. First, code has to poll for new work, then there's all the inefficiencies, then you have to archive completed work, and lastly programmers will stuff all sorts of highly coupled state crap in the "queue." He's right, but it's not as bad as all that. Create a service API and when the database stops working for you you can move to something else. 
  • Hewitt, Meijer and Szyperski: The Actor Model (everything you wanted to know, but were afraid to ask). Excellent discussion, but I fear Actors require a lot of programmer rigour and discipline, and we all know how well that goes over.
  • Good Google Groups thread on the Best NoSQL for multiplayer android games.
  • Why Postgres. Good ammo in case you need help justifying your true love against hipster haterade. On Hacker News. Also, Sharding Postgres with Instagram.
  • When I first read that a Transatlantic ping is faster than sending a pixel to the screen, my BS detector went to DEFCON 1. But, it's plausible, as the MythBusters would say. The reason why is rooted in bad software: Some TV features, like motion interpolation, require buffering at least one frame, and may benefit from more. Other features, like floating menus, format conversions, content protection, and so on, could be implemented in a streaming manner, but the easy way out is to just buffer between each subsystem, which can pile up to a half dozen frames in some systems.
  • It's not obvious how to be insanely simple. Adrian Cockcroft reflects on three books he has read and finds they support his ideas on architecture: A key part of what I have been doing for Netflix is looking out into the future of cloud and related technologies and developing a portfolio of fuzzy strategies and options. They don't all work out, but by having a well instrumented but loosely coordinated architecture that doesn't have central control and strict processes we can iterate rapidly, adopt (and discard) interesting new technologies as they come along. We can all have more fun and less frustration making Netflix Insanely Simple, and ignore all the bad common sense advice and analyst opinions that swirl around everything we do.
  • Interview with Mike Stonebraker: In my opinion the best way to organize data management is to run a specialized OLTP engine on current data. Then, send transaction history data, perhaps including an ETL component, to a companion data warehouse. A “two system” solution also avoids resource management issues and lock contention, and is very widely used as a DBMS architecture.
  • In designing a network you might think every switch talking to every other switch, a full mesh, would be the best design. Getting rid of the middleman is usually a good thing. Not so it seems. Ivan Pepelnjak in Full mesh is the worst possible fabric architecture has a great explanation why that would be:  unless you have a perfectly symmetrical traffic pattern, you waste most of the intra-fabric bandwidth. For example, if you’re doing a lot of vMotion between servers attached to switches A and B, the maximum throughput you can get is 20 Gbps (even though you have 140 Gbps of uplink bandwidth on every single switch). Also, The Future of TRILL and Spanning Tree – Part 2.
  • Random number generators for massively parallel simulations on GPU: We provide a broad review of existing CUDA variants of random-number generators and present the CUDA implementation of a new massively parallel high-quality, high-performance generator with a small memory load overhead.
  • Most performance problems are on the front end. In Front End Performance Case Study: GitHub,  JP Castro does a deep dive on improving GitHub's UI performance. JP shows, with detailed explanations, how to test performance and find solutions. 
  • An Early Evaluation of the Scalability of Graph Algorithms on the Intel MIC Architecture: Our results on a prototype board show that the multi-threaded architecture of Intel MIC can be effectively used for hiding latencies.
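The majority replication algorithm Murat recommends can be sketched in a few lines. This is a single-process toy, assuming an in-memory list of replica objects stands in for the network and skipping the timeouts and retries a real implementation needs; the two-phase structure (query a majority, then write to a majority) is the part worth internalizing:

```python
# Toy sketch of majority (ABD-style) replication for atomic reads/writes.
# Replica objects and the random-majority helper are illustrative stand-ins
# for real RPCs to remote nodes.

import random

class Replica:
    def __init__(self):
        self.ts = 0        # logical timestamp of the stored value
        self.value = None

def majority(replicas):
    # Contact any majority; tolerating a crashed minority is exactly
    # why a majority, rather than every replica, is enough.
    return random.sample(replicas, len(replicas) // 2 + 1)

def write(replicas, value):
    # Phase 1: learn the highest timestamp from a majority.
    ts = max(r.ts for r in majority(replicas))
    # Phase 2: store the value with a higher timestamp at a majority.
    new_ts = ts + 1
    for r in majority(replicas):
        if new_ts > r.ts:
            r.ts, r.value = new_ts, value

def read(replicas):
    # Phase 1: read from a majority and pick the freshest value.
    freshest = max(majority(replicas), key=lambda r: r.ts)
    ts, value = freshest.ts, freshest.value
    # Phase 2, the write-back: propagate the freshest value to a majority
    # so no later read can observe an older value. This second phase is
    # what makes reads atomic, not merely eventually consistent.
    for r in majority(replicas):
        if ts > r.ts:
            r.ts, r.value = ts, value
    return value

replicas = [Replica() for _ in range(5)]
write(replicas, "hello")
print(read(replicas))  # hello
```

Because any two majorities of five replicas share at least one node, a read's phase 1 always intersects the most recent write's phase 2, even if the writer crashed partway through.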
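The database-as-queue pattern Mike Hadlow criticizes looks roughly like this. The table and column names are made up, and sqlite3 is used only so the sketch runs anywhere; the polling loop, the status column, and the rows that pile up after completion are precisely the costs the post complains about:

```python
# Sketch of the database-as-queue anti-pattern: a jobs table polled by
# workers. Schema and names are illustrative, not from the post.

import sqlite3

db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE jobs ("
    " id INTEGER PRIMARY KEY,"
    " payload TEXT,"
    " status TEXT DEFAULT 'pending')"
)

def enqueue(payload):
    db.execute("INSERT INTO jobs (payload) VALUES (?)", (payload,))

def poll_one():
    # Workers must keep asking "any work yet?" -- wasted queries when
    # the queue is empty, added latency when it isn't.
    row = db.execute(
        "SELECT id, payload FROM jobs"
        " WHERE status = 'pending' ORDER BY id LIMIT 1"
    ).fetchone()
    if row is None:
        return None
    job_id, payload = row
    # Claim the row so another worker doesn't grab it. Completed rows
    # now accumulate until someone archives them.
    db.execute("UPDATE jobs SET status = 'done' WHERE id = ?", (job_id,))
    return payload

enqueue("resize-image-42")
print(poll_one())  # resize-image-42
print(poll_one())  # None
```

Hiding the table behind `enqueue`/`poll_one` is the "service API" escape hatch mentioned above: callers never see the schema, so when the database stops working for you, a real broker can be swapped in behind the same two functions.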
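For readers new to the Actor Model discussion, the core idea fits in a toy: each actor owns its state, shares nothing, and processes one message at a time from a mailbox. This sketch uses a thread and a queue as the mailbox; real actor systems add supervision, distribution, and more. The "discipline" worry is visible here too: nothing stops another thread from touching `actor.count` directly, the model only works if you don't.

```python
# Minimal actor: private state, a mailbox, one message at a time.

import queue
import threading

class Counter:
    """An actor whose only state is a private count."""
    def __init__(self):
        self.mailbox = queue.Queue()
        self.count = 0
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, msg):
        # The only sanctioned way to interact with the actor.
        self.mailbox.put(msg)

    def _run(self):
        # Messages are handled sequentially, so no locks are needed
        # around self.count.
        while True:
            msg, reply = self.mailbox.get()
            if msg == "incr":
                self.count += 1
            elif msg == "get":
                reply.put(self.count)

actor = Counter()
for _ in range(3):
    actor.send(("incr", None))
reply = queue.Queue()
actor.send(("get", reply))
print(reply.get())  # 3
```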
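The pixel-vs-ping claim is easy to sanity check with arithmetic: at 60 Hz each buffered frame adds one frame time of latency, so the "half dozen frames" of buffering cited above already exceeds a transatlantic round trip. The ~80 ms ping figure is a rough assumption for illustration, not a number from the original article.

```python
# Back-of-the-envelope check: buffered display frames vs a transatlantic ping.

frame_time_ms = 1000 / 60          # ~16.7 ms per frame at 60 Hz
buffered_frames = 6                # "a half dozen frames in some systems"
display_latency_ms = buffered_frames * frame_time_ms

transatlantic_ping_ms = 80         # rough round-trip assumption

print(round(display_latency_ms))                    # 100
print(display_latency_ms > transatlantic_ping_ms)   # True
```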
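Ivan's 20 Gbps full-mesh figure falls out of simple division: each switch's uplink bandwidth is sliced into equal point-to-point links, one per peer, and traffic between any single pair of switches can only ride its one link. The 8-switch count below is an assumption chosen to reproduce his numbers (140 Gbps across 7 peers gives 20 Gbps):

```python
# Why a full mesh wastes intra-fabric bandwidth: the A<->B pair only
# gets one slice of A's uplinks, however many uplinks A has in total.

def per_peer_bandwidth(uplink_gbps, num_switches):
    # In a full mesh, each switch has (num_switches - 1) direct links.
    return uplink_gbps / (num_switches - 1)

# Assumed: 8 switches, 140 Gbps of uplinks each.
print(per_peer_bandwidth(140, 8))  # 20.0
```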

Reader Comments (1)

Replicated/Fault-tolerant atomic storage seems incomplete. It does nothing to address partially replicated writes that reached a majority but never the total replica set.

I am talking about functionality like read repair, hinted handoff, and anti-entropy.

May 4, 2012 | Unregistered Commenter Ariel Weisberg
