
Stuff The Internet Says On Scalability For April 3rd, 2015

Hey, it's HighScalability time:

Luscious SpaceX photos have been launched under Creative Commons.
  • 1,000: age of superbug treatment; 18 million: number of laws in the US
  • Quotable Quotes:
    • @greenberg: Only in the Bay Area would you find a greeting card for closing a funding round.
    • @RichardWarburto: "Do Not Learn Frameworks. Learn the Architecture"
    • Alex Dzyoba: Know your data and develop a simple algorithm for it.
    • @BenedictEvans: Akamai: 17% of US mobile connections are >4 Mbps. Most of the rest of the developed world is over 50%
    • Linus: Linux is just a hobby, won’t be big and professional like GNU
    • jhugg: This just lines up with what we've seen in the KV space over the last 5 years. Mutating data and key-lookup are all well and good, but without a powerful query language and real index support, it's much less interesting.
    • Facebook: Whatever the scale of your engineering organization, developer efficiency is the key thing that your infrastructure teams should be striving for. This is why at Facebook we have some of our top engineers working on developer infrastructure.
    • mysticreddit: Micro-optimization is a complete waste of time when you haven't spent time focusing on the meta & macro optimization
    • @adriancolyer: If you think cross-partition transactions can't scale, it's well worth taking a look at the RAMP model: 
    • @jasongorman: Microservices are a great solution to a problem you probably don't have
    • @dbrady: If 1 service dies and your whole system breaks, you don't have SOA. You have a monolith whose brain has been chopped up and stuck in jars.

  • Fascinating realization. We live in a world in which every tech interaction is subject to a man-in-the-middle attack. Future Crimes: All of this is possible because the screens on our phones show us not reality but a technological approximation of it. Because of this, not only can the caller ID and operating system on a mobile device be hacked, but so too can its other features, including its GPS modules. That’s right, even your location can be spoofed.

  • That's every interaction. Pin-pointing China's attack against GitHub: The way the attack worked is that some man-in-the-middle device intercepted web requests coming into China from elsewhere in the world, and then replaced the content with JavaScript code that would attack GitHub. 

  • Messaging and mobile platforms: If you take all of this together, it looks like Facebook is trying not to compete with other messaging apps but to relocate itself within the landscape of both messaging and the broader smartphone interaction model. 

  • Martin Thompson: Love the point that the compiler can only solve problems in the 1-10% problem space. The 90% problem space is our data access which is all about data structures and algorithms. The summary is he shows how instruction processing can be dwarfed by cache misses. This resonates for me with what I've seen in the field with customers in the high-performance space. Obvious caveat is applications where time is dominated by IO.

  • Not everything works well in the cloud. OnLive shuts down streaming games. Just a little ahead of its time.

  • A classic Ivan Pepelnjak answer to a question that often comes up when designing what seem like overly complex ways of connecting things together: What is Layer 2 and Why Do We Need It?: Do we still need layer-2? In many cases, the answer is no. Every device that uses software-based forwarding can act as a layer-3 forwarding device...Why are we still using layer-2? Because every vendor (apart from Amazon and initial heroic attempts by Hyper-V Network Virtualization team) thinks they need to support really bad practices that originated from the thick yellow coax cable environment.

  • Time to revamp all those neural networks. Memories May Not Live in Neurons’ Synapses: If memory is not located in the synapse, then where is it? When the neuroscientists took a closer look at the brain cells, they found that even when the synapse was erased, molecular and chemical changes persisted after the initial firing within the cell itself. The engram, or memory trace, could be preserved by these permanent changes. Alternatively, it could be encoded in modifications to the cell's DNA that alter how particular genes are expressed. 

  • Nicely said. JViz: Redis is an in-memory storage system; everything is in RAM. Riak is distributed disk storage system; everything is on hard drives. With Riak, depending on what storage backend you're using, the keys might have to fit into memory, but not the values. With Redis everything has to fit into memory.

  • Scaling Redis and Memcached at Wayfair. Great discussion of creating a minimum viable caching system. It goes through some of the possible alternatives and why they did what they did.

  • Wait, I can't trust John Oliver? Comcast didn't slow down Netflix? Why Your Netflix Traffic is Slow, and Why the Open Internet Order Won’t (Necessarily) Make It Faster: If you read the FCC filings I linked above, you will find that not even Netflix claims that any intentional throttling is taking place. This is an issue of congestion (and who should pay to relieve that congestion), not one of throttling (intentional or otherwise). < Isn't sufficiently advanced congestion the same as throttling?

  • Here's a collection of links for streaming algorithms and data structures.
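One staple technique from that genre is reservoir sampling: keeping a uniform random sample of k items from a stream whose length you don't know in advance. A minimal sketch (an illustration of the general technique, not code from the linked collection):

```python
import random

def reservoir_sample(stream, k, rng=random):
    """Uniform random sample of k items from a stream of unknown length,
    using O(k) memory and a single pass."""
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)
        else:
            # Item i replaces a reservoir slot with probability k/(i+1),
            # which keeps every item's inclusion probability uniform.
            j = rng.randrange(i + 1)
            if j < k:
                reservoir[j] = item
    return reservoir
```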

  • High performance services using coroutines: It’s all about minimizing context switches and lock contention...the solution I propose and will implement is what I call ‘optimistic sequential execution’, based on my implementation of stackless co-routines...Because it really comes down to that. Avoiding stalls...This gives us almost perfect throughput and thread utilization. Coroutines are very cheap (a few bytes overhead) and very fast to schedule (also, thanks to free lists, there is no memory allocation required to create them). 
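The post describes a C++ stackless-coroutine scheduler; as a toy illustration of the core idea only (cooperative tasks switched by plain function calls, with no OS thread context switch or lock involved), here is a hedged sketch using Python generators. The scheduler and `worker` names are inventions for the sketch, not the author's code:

```python
from collections import deque

def run(tasks):
    """Round-robin scheduler for generator-based coroutines.

    Each task yields to give up control; a "context switch" is just a
    function return, which is why coroutines are so cheap to schedule.
    """
    ready = deque(tasks)
    while ready:
        task = ready.popleft()
        try:
            next(task)          # resume the task until its next yield point
            ready.append(task)  # still alive: put it back in the ready queue
        except StopIteration:
            pass                # task finished; drop it

def worker(name, steps, log):
    for i in range(steps):
        log.append((name, i))
        yield  # cooperative yield point (e.g. while waiting on I/O)

log = []
run([worker("a", 2, log), worker("b", 2, log)])
# the two tasks interleave: [('a', 0), ('b', 0), ('a', 1), ('b', 1)]
```

Real implementations add I/O readiness tracking and free lists for task objects, but the interleaving above is the essence of avoiding stalls without threads.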

  • A Web Scale Case Study: Facebook as a File System: So here’s a fun way to think about Facebook: The whole application, all of Facebook, is really just a file system. It’s graph- rather than directory-structured. It is browser-facing rather than presenting a command prompt interface (although if you really want a command line…). Also, it’s not optimized for things like deleting or overwriting data, but otherwise it is really just a very large conventional storage system: a giant closet that you and your contacts put data into and sometimes take data back out of.

  • An interesting experiment in voluntary simplicity. When memory is not enough...: The simple problem of sorting data in small memory revealed a whole class of peculiarities we don’t usually think of: Commonly used algorithms are not suitable for every problem; Dynamic debugging and profiling are extremely useful and demonstrative; I/O is a bitch, unless you fully rely on the kernel; Multithreading is not a silver bullet for performance; Know your data, know your environment.
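The workhorse for sorting data that won't fit in memory is the classic external merge sort: sort chunks that do fit, spill each sorted run to disk, then k-way merge the runs. A minimal Python sketch of that shape (an illustration of the general technique, not the author's code):

```python
import heapq
import tempfile

def external_sort(items, chunk_size):
    """Sort integers that nominally exceed memory: sort fixed-size chunks,
    spill each sorted run to a temp file, then k-way merge the runs."""
    runs = []
    chunk = []

    def spill():
        f = tempfile.TemporaryFile(mode="w+")
        f.writelines(f"{x}\n" for x in sorted(chunk))  # in-memory sort of one run
        f.seek(0)
        runs.append(f)
        chunk.clear()

    for x in items:
        chunk.append(x)
        if len(chunk) >= chunk_size:
            spill()
    if chunk:
        spill()
    # heapq.merge streams the runs lazily, so only one line per run
    # is in memory at a time during the merge.
    return [int(line) for line in heapq.merge(*runs, key=int)]
```

The I/O pattern (sequential writes per run, streaming merge) is exactly why the article's "know your data, know your environment" advice matters more than the choice of in-memory sort.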

  • It turns out the blink tag was pretty much the joke we always thought it was.

  • Something we do all the time in software. Consciousness and the Social Brain: Instead of proposing an explanation of consciousness, he attributed consciousness to a magic fluid. By what mechanism a fluid substance can cause the experience of consciousness, or where the fluid itself comes from, Descartes left unexplained—truly a case of pointing to a magician instead of explaining the trick.

  • Interested in learning how LTE works? Here ya go: Deep dive: What is LTE? It's a few years old, but there's a lot of still-relevant info.

  • Damn, Mystery Machine, what a great name. What does it mean? Murat explains all in Facebook's Mystery Machine: End-to-end Performance Analysis of Large-scale Internet Services: The goal of this paper is very similar to that of Google Dapper (you can read my summary of Google Dapper here). Both works try to figure out bottlenecks in performance in high-fanout, large-scale Internet services. Both use similar methods; however, this work (the Mystery Machine) tries to accomplish the task relying on less instrumentation than Google Dapper. The novelty of the Mystery Machine work is that it infers the component call graph implicitly by mining the logs, whereas Google Dapper instrumented each call in a meticulous manner and explicitly obtained the entire call graph.

  • Medium explains their on-call processes. Some details: our new rotation is simply three engineers working together for three weeks...We work to achieve a few specific goals: Deploy the site and supporting services; Keep the site up and respond to pages; Work with QA to improve the site’s quality; Tackle a secondary project related to site wellness...We call this rotation The Watch...Being coordinated is at the crux of running smoothly...The Watch uses Slack channels as our primary means of communication.

  • A peek behind the curtain. Germanwings flight 4U9525: what’s it like to listen to a black box recording?: The black box recorder is actually two separate components: a flight data recorder, which stores technical information – some 2,500 different measurements on a modern device – and a cockpit voice recorder, which keeps a tape of every word the pilots say. They are stored in the back of the aircraft, which has the best chance of surviving a crash, and are wrapped in titanium or stainless steel. They can survive an hour of 1,100-degree Celsius fire, or a weight of 227kg.

  • Your Developers Aren’t Bricklayers, They’re Writers. My guess is the best bricklayers or carpenters are X times better than the worst. That's been my experience anyway. It's a human-distribution thing rather than a task-specific thing. And have you priced carpenters lately? Not so junior. Also, Glenn Vanderburg of LivingSocial on why software development is an engineering discipline.

  • Now that's impressive. How Amazon Web Services Uses Formal Methods: When using formal specification we begin by stating precisely "what needs to go right." We first specify what the system should do by defining correctness properties, which come in two varieties: Safety. What the system is allowed to do. For example, at all times, all committed data is present and correct, or equivalently; at no time can the system have lost or corrupted any committed data; and Liveness. What the system must eventually do. For example, whenever the system receives a request, it must eventually respond to that request.
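AWS writes these specifications in TLA+ and checks them with a model checker. As a loose Python illustration of what checking a safety property means, here's a brute-force search over every reachable state of a toy "committed data" model; the model, the `step` function, and the tiny value domain are all made up for the sketch:

```python
from collections import deque

# Toy model: a register accepts writes into a pending slot; a commit moves
# pending into committed. State = (committed_value, pending_value_or_None).
def step(state):
    committed, pending = state
    succs = []
    for v in (1, 2):                    # tiny value domain keeps the space finite
        succs.append((committed, v))    # action: write(v)
    if pending is not None:
        succs.append((pending, None))   # action: commit
    return succs

def check_safety(init, invariant):
    """BFS over every reachable state; the safety invariant ("what the
    system is allowed to do") must hold in all of them."""
    seen, frontier = {init}, deque([init])
    while frontier:
        s = frontier.popleft()
        assert invariant(s), f"safety violated in {s}"
        for n in step(s):
            if n not in seen:
                seen.add(n)
                frontier.append(n)
    return len(seen)

# Safety: committed data is always a value we actually wrote (never corrupted).
states_explored = check_safety((0, None), lambda s: s[0] in (0, 1, 2))
```

Liveness ("whenever the system receives a request, it must eventually respond") needs fairness assumptions over infinite behaviors, which is where a real checker like TLC earns its keep over a sketch like this.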

  • Icicle: We are open-sourcing our distributed k-sortable ID generation project called “Icicle”, which generates IDs using Lua scripting within distributed Redis hosts. 
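Icicle's actual implementation is Lua scripts running inside Redis; as a rough illustration of what "k-sortable" means, here is a hypothetical Snowflake-style generator in Python. The 41/10/12-bit layout is an assumption for the sketch, not Icicle's documented format:

```python
import time

class KSortableIds:
    """Snowflake-style IDs: 41-bit ms timestamp | 10-bit node | 12-bit sequence.
    Because the timestamp occupies the high bits, IDs generated later
    sort numerically later (k-sortable)."""

    def __init__(self, node_id, clock=lambda: int(time.time() * 1000)):
        assert 0 <= node_id < 1024          # must fit in 10 bits
        self.node_id, self.clock = node_id, clock
        self.last_ms, self.seq = -1, 0

    def next_id(self):
        now = self.clock()
        if now == self.last_ms:
            # Same millisecond: bump the sequence. A real generator would
            # block until the next tick on wraparound instead of wrapping.
            self.seq = (self.seq + 1) & 0xFFF
        else:
            self.last_ms, self.seq = now, 0
        return (now << 22) | (self.node_id << 12) | self.seq
```

Running the generation inside Redis, as Icicle does, is one way to make the per-node sequence atomic without client-side locking.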

  • hyflow-go: a geo-replicated, main-memory, highly consistent datastore

  • Riposte: An Anonymous Messaging System Handling Millions of Users: This paper presents Riposte, a new system for anonymous broadcast messaging. Riposte is the first such system, to our knowledge, that simultaneously protects against traffic-analysis attacks, prevents anonymous denial-of-service by malicious clients, and scales to million-user anonymity sets. To achieve these properties, Riposte makes novel use of techniques used in systems for private information retrieval and secure multi-party computation. For latency-tolerant workloads with many more readers than writers (e.g. Twitter, Wikileaks), we demonstrate that a three-server Riposte cluster can build an anonymity set of 2,895,216 users in 32 hours.

  • Jitsu: Just-In-Time Summoning of Unikernels: Using fast shared memory channels, Jitsu provides a directory service that launches unikernels in response to network traffic and masks boot latency. Our evaluation shows Jitsu to be a power-efficient and responsive platform for hosting cloud services in the edge network while preserving the strong isolation guarantees of a type-1 hypervisor.

  • Greg has some more Quick Links for you.

Reader Comments (1)

Far-out take on data-driven design post: with so much of performance coming down to locality, predictability, and compactness (what you can fit in L2/RAM/SSD), we might see more activity focused on layout and fast compression.

That could mean stuff like hardware accel for already-"fast" compressors like Snappy or WKdm (AMD's ARM-based A1100 is supposed to have compression accel., but it seems like a special case), or branchless instructions to load variable-length ints. Or libraries, tools, and compilers (and coder education?) could work harder to encourage things like prefetching where performance matters. Or languages could even, say, make array-of-structures and structure-of-arrays look less drastically different--though I'm not going to hold my breath for languages to change.

I'm somewhat blind to where the biggest gains are and which tricks cost the most, so don't know what the future really looks like there. It just seems like access-time-as-speed is only becoming truer over time, and we're going to see more adaptation to it as a result.

April 5, 2015 | Unregistered CommenterR
