Stuff The Internet Says On Scalability For August 10th, 2018

Hey, it's HighScalability time (out Thur-Fri, so we're going early):

London Maker Faire 1851—The Great Exhibition—100,000 objects, displayed along more than 10 miles, by over 15,000 contributors.

Do you like this sort of Stuff? Please lend me your support on Patreon. It would mean a great deal to me. And if you know anyone looking for a simple book that uses lots of pictures and lots of examples to explain the cloud, then please recommend my new book: Explain the Cloud Like I'm 10. They'll love you even more.

  • 90%: accuracy predicting gender from a retinal image; $1 billion: eBay sales per quarter from AI; $78 billion: global AI software market by 2025; $75 million: penalty for a botched SAP upgrade; 35 million: m^3 of mud dredged out of the Dutch waterways; 138 terabytes: memory per square inch; 500 million: Uber metrics per second; 22x: faster JSON parsing with Sparser

  • Quotable Quotes:
    • @IanColdwater: The JIRA tickets will continue until morale improves
    • @david_perell: Three crazy stats from @mikedariano’s newsletter. 1. People watch more Minecraft hours than the NBA, NHL, NFL, and MLB combined.  2. Only 26 countries have more people than PewDiePie has subscribers.  3. Only 20% of YouTube’s traffic is from the United States. 
    • Charlie Demerjian: Why does SemiAccurate say that Intel knows? We have seen their internal documents that show exactly how frightened the company is. The documents go into specifics we don’t feel are appropriate to discuss publicly but there is one thing we can say, Intel knows their position. One of the documents says in no uncertain terms that the company understands they will not be competitive in the server market until AFTER Sapphire Rapids, the 2022 server part. AMD has a clear run in Intel’s core market for at least 4 years.
    • @ScottMcGready: Can we just take a moment to remember that one company I worked for backed up their stuff on tapes religiously: all tapes sent to a warehouse nightly. Years later someone tested a tape... turns out nothing had been written... ever. We had a (paid) warehouse full of empty tapes
    • Uber: Since 2016, Uber has added several new lines of business to its platform, including Uber Eats, Uber Freight, and Jump Bikes. Now, we complete over 15 million trips a day, with over 75 million monthly active riders. In the last eight years, the company has grown from a small startup to 18,000 employees across the globe.
    • @mims: Here is a super important thing that we don't talk about enough: Almost all of the increase in income inequality from 1978 to the present can be accounted for by the difference in wages between top performing firms and everyone else. And now we have some idea what's driving unequal growth in productivity of top-performing firms -- it's how they build and use their own proprietary software and other IT/technology
    • @taotetek: Distributed systems tip: Write your system without any queues first. You might find you don't need queues. If you end up needing queues, the retry and reliability code you wrote in order to function without queues will still make your system more reliable.
    • @theburningmonk: I think the visual flow is sometimes under-appreciated - our app support team can easily look at it and figure out what went wrong without knowing ins & outs of implementation details. I can also show the diagram to a product person and he/she would get it as well
    • John Mark: It’s time to understand something about open source software development: it is not going to save us. Using or developing more open source software is not going to improve anyone’s lives. Developing open source software is not a public good. It’s not going to result in a fairer or more equitable society. In fact, as currently structured, open source development is part of the problem. 
    • Charlie Demerjian: What Intel is not telling you, or the analysts, is that the 10nm you may get in late 2019 is not the 10nm they had intended to come out in 2015. More importantly this new process is a significant step backward from the 10nm they promised, as touted in their manufacturing day. How much of a step backwards? Several of SemiAccurate’s moles are saying it is effectively a 12nm process rather than a 10nm process, and the technical changes more than back that claim up. Don’t expect this to ever be publicly admitted to, it is still ’10nm’ and always will be even if the tech doesn’t back that name up.
    • @steipete: Tried the GDPR data export from Spotify. By default, you get like 6 JSON files with almost nothing. After many emails and complaining and a month of waiting, I got a 250MB archive with basically EVERY INTERACTION I ever did with any Spotify client, all my searches. Everything.
    • @swardley: which is why I recommend that before creating a taxi firm, you should at least build your own oil refinery, rig and pipeline or your own nuclear power plant, mine or solar power plant, rubber plantation (you don't want to get locked into tyres), automotive industry etc etc.
    • JPL: Several times per week, the DSN [Deep Space Network] antennas capture signals from the two Voyager spacecraft, which are exploring the edge of interstellar space. Their signal has a received power 20 billion times weaker than that of a digital wristwatch. 
    • Anonymous: As far as what they're buying—yes, they're avoiding paying more for a potential competitor later. But the inherent value in a talent acquisition comes from acknowledging that most projects in software fail. Finding a team that can actually ship something that gets out the door is rare. Even at big companies, most projects will not see the light of day. So to find a group of people that have managed to build something—even if it's small, even if it's humble—means they're probably a team that works well together. So they're worth a premium. That's the theory behind it, at least.
    • @pmddomingos:
      A tale of two AI summers:

        1980s            Now
        Expert systems   Deep learning
        More rules!      More data!
        LISP machines    GPUs
        Cyc              DeepMind
        Brittleness      Brittleness
    • richardtallent: Largely, in 5-10 years, we'll still be maintaining the code we write today. I'm still maintaining and enhancing two web apps I created over 15 years ago. Frameworks and UI toolkits come and go, but the core line-of-business needs are remarkably stable.
    • @jbeda: There are a lot of smart people working with queuing theory that applies here. There are non-obvious results like LIFO works better than FIFO for good median latency and user experience. (A toy simulation after the quotes shows the effect.)
    • lukasmericle: What differentiates dynamic programming from other recursion-based search methods is that we cache the results of the solutions to subproblems so that the computation need only occur once instead of O(2^m) times, where m is the level of the subproblem. We also exploit a recursive definition of the problem so that we end up with an elegant and compact solution method. Dynamic programming is thus the happiest marriage of induction, recursion, and greedy optimization. The "dynamic" part of this approach is that we only have to apply one function repeatedly to the problem, and this function will return optimal values of the full problem as well as any sub- or superproblem. (See the memoization sketch after the quotes.)
    • Mikael Ronstrom: As we can see we can recover [using MySQL Cluster 7.6] a TByte sized data node within an hour and this is with 8 LDM threads. If we instead use 24 LDM threads the restore and rebuild index phase will go about 3 times faster and thus we would cut restart time by another 25 minutes and thus restart time would be about 20 minutes in this case and we would even be able to restart more than 2 TBytes within an hour.
    • Touche: I don't use SAM but I do use Lambda and I disagree with everything you say here. FaaS allows me to completely eliminate web server junk from my brain. I only have to think about individual functions (routes) that take some input and produce an output. Essentially I can focus completely on my business domain.
    • yebyen: I agree that developer experience in Lambda is crap. I am a Ruby person, though, and my language of choice is not on their supported list. You can do it, but the runtime I found[1] looks like a toy and hasn't been updated since 2015. Google Cloud Functions, on the other hand, has this[2] (yay! Ruby support), but I can't get excited about it, because we are "all-in" on AWS and it's politically impossible to suggest more diversity in this environment. I am however very excited about Knative, as I've been following the Riff project for a minute and it was looking really good (and the same people that built Riff, e.g. Pivotal, are in the driver's seat of building Knative now, as I understand it). Scale to zero! Bring your own runtime... lots of nice features. But number one is, I can run it on my existing Kubernetes environment, and use it to make my deployments leaner and my base cluster smaller. Hand-in-hand with that, I can carry it with me to another cloud provider if need be.
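      A toy single-server simulation of @jbeda's LIFO-beats-FIFO point above (my own sketch with made-up arrival and service rates, not anything from the tweet or the underlying research). Under overload, FIFO makes every request wait behind the whole backlog, while LIFO serves fresh requests almost immediately and lets a few old stragglers absorb the delay:

        import random
        import statistics
        from collections import deque

        def simulate(discipline, arrival_rate=1.2, service_time=1.0, n=20000):
            """One server, fixed service time, Poisson arrivals.
            arrival_rate * service_time > 1, so a backlog builds up."""
            rng = random.Random(42)
            arrivals, t = [], 0.0
            for _ in range(n):
                t += rng.expovariate(arrival_rate)
                arrivals.append(t)
            waiting, latencies = deque(), []
            clock, next_arrival = 0.0, 0
            while len(latencies) < n:
                # admit every request that has arrived by now
                while next_arrival < n and arrivals[next_arrival] <= clock:
                    waiting.append(arrivals[next_arrival])
                    next_arrival += 1
                if not waiting:
                    clock = arrivals[next_arrival]  # idle: jump to next arrival
                    continue
                # FIFO serves the oldest waiting request, LIFO the newest
                arrived = waiting.pop() if discipline == "LIFO" else waiting.popleft()
                clock += service_time
                latencies.append(clock - arrived)
            return statistics.median(latencies)

        for d in ("FIFO", "LIFO"):
            print(d, "median latency:", round(simulate(d), 2))
        # FIFO's median tracks the growing backlog; LIFO's stays near 1.0.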
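      And to make lukasmericle's point concrete, the canonical toy example: naive recursive Fibonacci revisits subproblems exponentially often, while caching each subproblem's answer (memoization, the heart of dynamic programming) evaluates each distinct subproblem exactly once:

        from functools import lru_cache

        calls = {"naive": 0, "memo": 0}

        def fib_naive(n):
            calls["naive"] += 1
            return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

        @lru_cache(maxsize=None)          # the subproblem cache
        def fib_memo(n):
            calls["memo"] += 1            # body runs once per distinct n
            return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

        fib_naive(25), fib_memo(25)
        print(calls)  # {'naive': 242785, 'memo': 26}: exponential vs. linear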

  • A thought experiment: would Digg have survived if it had been in the cloud? The Epic Fail of Digg V.4 With Will Larson and Digg's v4 launch: an optimism born of necessity. The proximate cause of Digg's demise was Google. A shift in Google's search algorithm drove a lot less traffic to Digg, which meant Digg went from having a lot of revenue to not having a lot of revenue. That pushed Digg to change its monetization strategy. In response, Digg v4 added a social network layer to Digg. The idea was that by becoming a social network like Facebook and Twitter, they wouldn't need to rely on poorly performing SEO-driven ad units. The problem was Digg v4 was 2 years late and Digg was running out of money. They gambled. The decision was made to release Digg v4 early, before the money ran out, to see if it would bring Digg back from the grave. The problem was it wasn't close to ready. It fail-whaled from the start. A month went by before it was stable-ish. And users hated it. Why not just revert to the old version? Here starts our thought experiment. Digg ran on physical servers and didn't have the capacity to run both versions simultaneously. One would presume they didn't have the time or money to buy new capacity. You can see this coming, can't you? They installed Digg v4 over the old version of Digg. The old version of Digg was gone. They couldn't just redirect to the old version on failure. This is not how you are supposed to do it, on a number of fronts. Iterating new features into Digg over time would have been better than a big bang release after two years of development. And you want the old version on standby so you can always fall back to it. Better yet, you want the new and old versions running side by side, with some users on the new version, so you can cut over slowly at a controlled pace, always ready to revert to the old version if necessary. But Digg didn't have the resources for any of that. In the cloud they would have. Provisioning new servers would have taken minutes. The cash outlay would have been small compared to buying a fleet of new metal servers. They could have afforded to keep the old version up and running while the new version ran on a completely different set of VMs. See also Saying Yes To NoSQL; Going Steady With Cassandra At Digg.

  • We are just at the beginning of discovery at scale using big data. Keynote Fireside Chat: Cloud and AI in Healthcare and Biomedical Research (Cloud Next '18). If your dobber's down, this has some really inspiring bits. Even with nearly every force running counter to progress, it looks like we might actually be able to solve a few problems.
    • There's as much information in one mammogram as there is in the entire New York City phone book (they still have phone books?). By 2020 the total amount of knowledge in health care will double every 73 days. Ten thousand people a day turn 65. 
    • It cost $3 billion to sequence the first human genome. Now it costs $800. There are now about 500,000 genomes fully sequenced. That's a lot of data. You can't be a biologist today without data. 
    • There's more than beauty in the eye of the beholder. Once you have labeled data you may be able to discover things you never imagined. Just from retinal data you can predict easy things like retinal myopathy (I can see that looking at my own scans), but you can predict surprising things like gender and cardiovascular health to the same level as blood tests.
    • Seven years ago we knew none of the genes that played a role in schizophrenia. Today the number is over 200 genes. It came from collecting data from 120,000 people with and without schizophrenia. The data was analyzed looking for millions of genetic variants, and models were built and verified. We only know what some of the genes do; for example, some have to do with pruning synaptic connections in the brain. Asking the data is the best way to go. Same for cancer.
    • We need to start thinking about truly massive data in health care. With the cloud you can do things you could never do before. Imagine reading out from every cell its pattern of which genes are turned on or off. That's 20,000 genes. Every cell in your body is a point in a 20,000 dimensional space.
    • There's a worldwide project called the Human Cell Atlas that has the goal of creating comprehensive reference maps of all human cells.
    • With all that cell data we can ask questions like: what is a cell type? It's a cluster in a 20K dimensional space. What is development? It's a trajectory in 20K dimensional space. What are pathways and what programs are cells running? We're going to have to cluster and organize clusters of genes. Now you can take a cancer and disaggregate it into a collection of cells and ask what all those cells are doing. Combined with visualization you can ask which cells are thinking what when they're near whom?
    • A lot of health data will be built on an open source architecture. The hope is to get ahead of the problem of incompatible systems, so everything can work together. It's described in A Data Biosphere for Biomedical Research. Code is on GitHub at Data Biosphere. There's also an NIH Data Commons.
    • Wait, you don't have a staff of AI experts handy? Google has an app for that: AutoML. AutoML performs a neural architecture search (this is Google), which is a method for finding good neural architectures for solving problems. A model generates architectures. You sample from that model, generating say 10 of them. All the models are thrown into the Thunderdome. The ones that worked well are used to generate new models. The process iterates until only one survives. Or something like that. (A toy sketch of the loop follows.)
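    Not Google's actual AutoML internals, just a toy rendering of the loop described above: propose candidate architectures, score them, and let the winners seed the next generation. In a real neural architecture search the fitness function is the expensive part (train the candidate, measure validation accuracy); here it's a made-up stand-in:

        import random

        rng = random.Random(0)
        WIDTHS = [16, 32, 64, 128]

        def random_arch():
            # a candidate "architecture": 1-4 layers with a width each
            return [rng.choice(WIDTHS) for _ in range(rng.randint(1, 4))]

        def fitness(arch):
            # stand-in for "train it and measure validation accuracy"
            return sum(arch) / (1 + 10 * len(arch))

        def mutate(arch):
            child = list(arch)
            child[rng.randrange(len(child))] = rng.choice(WIDTHS)
            return child

        population = [random_arch() for _ in range(10)]
        for generation in range(20):
            population.sort(key=fitness, reverse=True)
            winners = population[:3]          # survivors of the Thunderdome
            population = winners + [mutate(rng.choice(winners)) for _ in range(7)]

        print("best architecture found:", max(population, key=fitness))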

  • The John Henry of tech—serverless versus Kubernetes. A Tale of Two Teams: [Jane and Javier] now had over 2,500 restaurant customers and were generating north of $70,000 per month. They also had over 2 million registered users, with more than 40 million monthly page views...They discussed the business for a bit and agreed that there was still a lot more work to do. They reviewed their latest AWS bill, $740 for the month seemed a bit high, but it was still reasonable. They also agreed that the last 14 months of hard work had certainly paid off. They shook hands and both left very excited.

  • Many think Azure's Cosmos DB is today's leading world-spanning distributed database offering. Not Google. Not Amazon. Microsoft. Want to learn more? So does Murat. That's why he's taking a year-long sabbatical with Microsoft's Cosmos DB team. He's written an intro article, Azure Cosmos DB: Cosmos DB is Azure's cloud-native database service. It is a database that offers frictionless global distribution across any number of Azure regions—50+ of them! It enables you to elastically scale throughput and storage worldwide on demand quickly, and you pay only for what you provision. It guarantees single-digit-millisecond latencies at the 99th percentile, supports multiple consistency models, and is backed by comprehensive service level agreements (SLAs).

  • Do you want to understand how serverless works across the big three at a detailed level? Here you go. Peeking Behind the Curtains of Serverless Platforms:
    • AWS Lambda achieved the best scalability and the lowest cold-start latency (the time to provision a new function instance), followed by GCF. But the lack of performance isolation in AWS between function instances from the same account caused up to a 19x decrease in I/O, networking, or cold-start performance. (A crude way to observe cold starts yourself is sketched after this list.)
    • Azure Functions used different types of VMs as hosts: 55% of the time a function instance runs on a VM with debased performance.
    • Azure had exploitable placement vulnerabilities [36]: a tenant can arrange for function instances to run on the same VM as another tenant’s, which is a stepping stone towards cross-function side-channel attacks.
    • An accounting issue in GCF enabled one to use a function instance to achieve the same computing resources as a small VM instance at almost no cost.
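    If you just want to eyeball the cold-start effect the study measures, here is a crude sketch (it assumes boto3 credentials are configured and that a Lambda function named "hello" already exists; the names and the approach are mine, and far less rigorous than the paper's methodology):

        import time
        import boto3

        client = boto3.client("lambda")

        def timed_invoke(name="hello"):
            start = time.perf_counter()
            client.invoke(FunctionName=name, Payload=b"{}")
            return time.perf_counter() - start

        # the first call is only cold if the function has sat idle long enough
        print("first call (likely cold):", timed_invoke())
        print("second call (warm):      ", timed_invoke())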

  • Extending microservices all the way up through the UI. Experiences Using Micro Frontends at IKEA. A micro frontend architecture breaks the frontend into smaller parts so teams can deploy autonomously, enabling continuous delivery for web frontends. Split a system vertically to create self-contained systems with both backend and frontend built by the same team. A technique called self-contained fragments allows everything a fragment needs, like CSS and JavaScript, to be included with it. You shouldn't have to think about the dependencies of a fragment: just include one ESI for styles and one ESI for scripts, and you should be done. To get pages and fragments to work together, set up a few rules and fix errors fast.

  • Triangulation 358: General Magic. Wonderful interview on the new documentary about the rise, fall, and fecund diaspora of General Magic—"the most important dead company in Silicon Valley." 
    • When you've created the Macintosh, what's important enough to work on next? The next iPhone. Only what if you tried to make it way before the iPhone was made, and at a different company called General Magic? That you're not using a General Magic phone now lets you know how the story ended. They didn't make it. But the story of how they didn't make it, despite having the best and the brightest, is a cautionary tale for the ages.
    • Great team. Great engineering. It produced USB, the touch screen, and the software modem. They were too early. They missed the internet; they went with a special network from AT&T. It was at too high a price point. The project worked on the Mac model: a talented team in a room for years to produce something great. The problem was they needed an adult in the room to make it ship. Every project needs a hammer. Iterate. It's important to let a product evolve over time.
    • What does it take to bring big ideas to life? Really it's the hero's journey. It's a path. It's never easy. Persist. Keep going, that's how things are made. It's not enough to love the thing that you're making, you have to love the people you're making it with. Yet sometimes the hero loses.

  • At a certain scale you want to keep that extra 30% instead of giving it to an aggregator. 'Fortnite' will skip the Play Store for its Android release. Apple doesn't have this problem. The App Store is their chokepoint.

  • Optimizing for throughput is not the same thing as optimizing for tail latency. Amdahl's Law for Tail Latency
    • Brawny Versus Wimpy Cores: for services with extremely low-latency requirements (such as in-memory caching and in-memory distributed storage),[21] architects must focus on improving single-thread performance even at high cost. At the same time, some core parallelism is needed. A single 100-BCE core performs significantly worse than four 25-BCE cores...The need for high single-thread performance also motivates application- or domain-specific accelerators as a more economical way of improving performance than incremental out-of-order core optimizations...in a homogeneous system where throughput is the only performance metric of interest and parallelism is plentiful, the smallest cores achieve the best performance; see the 1-BCE cores in Figure 4a. In comparison, when optimizing for throughput under a tail latency constraint, the optimal design point shifts toward larger cores, unless the latency constraint relaxes significantly...computer scientists should strive to remove serialization across the system stack...for heterogeneous architectures to make sense the system must closely track the input load and adjust to its changes. (A back-of-the-envelope sketch of the core-sizing tradeoff follows this list.)
    • Caching: Existing server chips dedicate one-third to one-half of their area budget to caches. Our analysis indicates this trend will continue...Architects should focus instead on exploiting request parallelism in a way that keeps the large number of smaller cores busy...Serialized execution requires higher single-thread performance, and larger on-chip caches are one way to achieve such performance.
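    A back-of-the-envelope rendering of the brawny-versus-wimpy tradeoff above, using the classic Hill-Marty assumption (my simplification; the article's tail-latency model is richer) that a core built from r base core equivalents (BCE) delivers roughly sqrt(r) times the single-thread performance of a 1-BCE core:

        from math import sqrt

        def speedup(total_bce, core_bce, serial_frac):
            """Amdahl speedup, vs. one 1-BCE core, of a chip spending
            total_bce on identical cores of core_bce each."""
            perf = sqrt(core_bce)              # Hill-Marty: perf ~ sqrt(r)
            cores = total_bce // core_bce
            serial_time = serial_frac / perf   # serial phase uses one core
            parallel_time = (1 - serial_frac) / (perf * cores)
            return 1 / (serial_time + parallel_time)

        for s in (0.01, 0.10, 0.30):
            print(f"serial={s:.2f}  1x100-BCE: {speedup(100, 100, s):5.1f}  "
                  f"4x25-BCE: {speedup(100, 25, s):5.1f}  "
                  f"100x1-BCE: {speedup(100, 1, s):5.1f}")
        # Plentiful parallelism favors many wimpy cores; any serialization
        # quickly shifts the optimum toward fewer, brawnier cores.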

  • Building a distributed rate limiter that scales horizontally: For example, if you publish to a channel with 1000 subscribers, then that uses 1001 messages from your package allocation if it succeeds (one for the publish and one for each subscriber). We started sending that number (in a separate stat that isn’t aggregated into package usage) even if the publish attempt is rejected. This means that the rate limiter can now know the total that would have been sent and received had there been no suppression, which means it can just do a naive calculation of the suppression rate that would be needed to ensure that the successful publish would result in aggregate message rate exactly at the limit. This avoids the oscillation problem entirely.
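    A sketch of that naive calculation (the function and names are mine, not Ably's code): once the limiter knows the rate that would have been sent with no suppression, the per-attempt suppression probability falls out directly:

        def suppression_probability(attempted_rate, limit):
            """Probability of rejecting a publish so the admitted aggregate
            message rate (publish + fan-out) lands exactly at the limit.
            Both arguments are messages/sec over the accounting window."""
            if attempted_rate <= limit:
                return 0.0               # under the limit: admit everything
            return 1.0 - limit / attempted_rate

        # A publish to a channel with 1000 subscribers counts as 1001
        # messages, admitted or rejected. Five such attempts per second
        # against a limit of 1001 msg/sec means suppressing 80% of them:
        print(suppression_probability(attempted_rate=5 * 1001, limit=1001))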

  • E-Commerce at Scale: Inside Shopify's Tech Stack: Shopify powers 600K merchants and serves 80K requests per second at peak...As is common in the Rails stack, since the very beginning, we've stayed with MySQL as a relational database, memcached for key/value storage and Redis for queues and background jobs...We decided to use sharding and split all of Shopify into dozens of database partitions. Sharding played nicely for us because Shopify merchants are isolated from each other and we were able to put a subset of merchants on a single shard...As we grew into hundreds of shards and pods, it became clear that we needed a solution to orchestrate those deployments. Today, we use Docker, Kubernetes, and Google Kubernetes Engine to make it easy to bootstrap resources for new Shopify Pods. On the load balancer level, we leverage Nginx, Lua and OpenResty, which allow us to write scriptable load balancers...The build of our monolith takes 15-20 minutes and involves hundreds of parallel CI workers to run all 100k tests...All systems at Shopify have to be designed with scale in mind. At the same time, it still feels like you're working on a classic Rails app. The amount of engineering effort put into this is incredible. For a developer writing a database migration, it looks just like it would for any other Rails app, but under the hood that migration would be asynchronously applied to 100+ database shards with zero downtime.
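    The merchant isolation is what makes the sharding tractable: a shop's data never spans shards, so every query can be routed by shop alone. A minimal sketch of the idea (my illustration, not Shopify's code):

        SHARD_COUNT = 100

        def shard_for(shop_id: int) -> str:
            """Every query for a shop routes to exactly one shard."""
            # Real systems use a directory so shops can be rebalanced
            # between shards; modulo hashing here is purely illustrative.
            return f"shard_{shop_id % SHARD_COUNT:03d}"

        print(shard_for(8675309))  # shard_009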

  • How I slashed a SQL query runtime from 380 hours to 12 with two Unix commands: Ideally, MariaDB should support sort-merge joins and its optimizer should employ them when the runtime of alternative strategies is projected to be excessive. Until that time, using Unix shell commands designed in the 1970s can provide a huge performance boost.
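    The speedup comes from a sort-merge join: sort both inputs, then stream them together in a single pass, which is exactly what sort(1) and join(1) do. The same idea sketched in Python (the data and the unique-join-key simplification are mine, not the article's):

        def sort_merge_join(left, right):
            """Join two lists of (key, value) pairs on key; roughly:
            sort a.txt > a.s; sort b.txt > b.s; join a.s b.s"""
            left, right = sorted(left), sorted(right)  # the two sort(1) passes
            out, li, ri = [], 0, 0
            while li < len(left) and ri < len(right):
                (lk, lv), (rk, rv) = left[li], right[ri]
                if lk < rk:
                    li += 1
                elif lk > rk:
                    ri += 1
                else:
                    # match; advancing both sides assumes unique keys
                    out.append((lk, lv, rv))
                    li += 1
                    ri += 1
            return out

        print(sort_merge_join([(1, "a"), (2, "b"), (3, "c")],
                              [(2, "x"), (3, "y"), (4, "z")]))
        # [(2, 'b', 'x'), (3, 'c', 'y')]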

  • Machine learning will be good at copying styles, but may not be able to create the next big thing. What Makes a Hit: Breakout songs — those that reach the very top of the charts — simultaneously conform to prevailing musical feature profiles while exhibiting some degree of individuality or novelty. They sound similar to whatever else is popular at the time, but also have enough of a unique sound to help them stand out as distinctive. What that suggests is that a hit song, or any other cultural product — like a film, or a novel — can't simply be reverse engineered from what's been popular in the past. Popular success really is more art than science.

  • amark/gun: A realtime, decentralized, offline-first, graph database engine. GUN is an ecosystem of tools that let you build tomorrow's dApps, today. Decentralized alternatives to Reddit, YouTube, Wikipedia, etc. are already pushing terabytes of daily P2P traffic on GUN. We are a friendly community creating a free fun future for freedom.

  • m3db/m3 (article): Distributed TSDB and Query Engine, Prometheus Sidecar, Metrics Aggregator, and more. 

  • nasa-jpl/open-source-rover (article): an open source, build-it-yourself, scaled-down version of the 6-wheel rover design that JPL uses to explore the surface of Mars. The Open Source Rover is designed almost entirely out of commercial off-the-shelf (COTS) parts.

  • facebookincubator/fizz (article): Fizz is a TLS 1.3 implementation. Fizz currently supports TLS 1.3 drafts 28, 26 (both wire-compatible with the final specification), and 23. All major handshake modes are supported, including PSK resumption, early data, client authentication, and HelloRetryRequest.

  • github/glb-director (article): a set of components that provide a scalable set of stateless Layer 4 load balancer servers capable of line rate packet processing in bare metal datacenter environments; it is used in production to serve all traffic from GitHub's datacenters.