Stuff The Internet Says On Scalability For September 4th, 2015
Hey, it's HighScalability time:
An astonishing 300 billion stars in our galaxy have planets. Take a look in the Eyes on Exoplanets app.
- 1 billion: people who used Facebook in a single day; 2.8 million: sq. ft. in new Apple campus (with drone pics); 1.1 trillion: Apache Kafka messages per day; 2,000 years: age of termite mounds in Central Africa; 30: # of times the human brain is better than the best supercomputers; 4 billion: requests it took to trigger an underflow bug.
- Quotable Quotes:
- Sara Seager: If an Earth 2.0 exists, we have the capability to find and identify it by the 2020s.
- Android Dick: But you’re my friend, and I’ll remember my friends, and I’ll be good to you. So don’t worry, even if I evolve into Terminator, I’ll still be nice to you. I’ll keep you warm and safe in my people zoo, where I can watch you for ol’ times sake.
- @viktorklang: "If the conversation is typically 'scale out' versus 'scale up,' then if we're coordination-free we get to choose 'scale out' while 'scaling up.'"
- Amir Najmi: At Google, data scientists are just too much in demand. Thus, anytime we can replace data scientist thinking with machine thinking, we consider it a win.
- @solarce: "don’t be content that the software seems to basically work — you must beat the hell out of it" -- @bcantrill
- John Ralston Saul: I have enormous confidence in the individual as citizen. I don't think there is any proof in our 2,500 years of history that the elites do a good job without the close involvement of the citizenry.
- Joshua Strebel: on average Aurora RDS is 3x faster than MySQL RDS when used with WordPress.
- Martin Thompson: I'd argue that "state of the art" in scalable design is to have no contention. It does not matter if you manage contention with locks or CAS techniques. Once you have contention then Universal Scalability Law kicks in as you have to face the contention and coherence penalty that contended access to shared state/resources brings. Multiple writers to shared state is a major limitation to the scalability of any design. Persistent data structures make this problem worse and not better due to path-copy semantics that is amplified by the richness of the domain model.
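To make "no contention" concrete, here's a minimal Go sketch (a toy counter workload of my own, not Thompson's code): the first version has every goroutine hammer one shared word, so all writers serialize on the same cache line; the second gives each goroutine a private shard and only merges at read time, so writers never contend.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

const (
	workers = 8
	perWork = 1000000
)

// contendedCount: every goroutine updates the same shared word.
// Correct, but all writers serialize on one cache line.
func contendedCount() int64 {
	var shared int64
	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < perWork; j++ {
				atomic.AddInt64(&shared, 1)
			}
		}()
	}
	wg.Wait()
	return shared
}

// shardedCount: each goroutine gets its own padded counter, so there is
// no write contention; shards are only merged once at read time.
func shardedCount() int64 {
	type shard struct {
		n int64
		_ [56]byte // pad out a 64-byte cache line to avoid false sharing
	}
	shards := make([]shard, workers)
	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			for j := 0; j < perWork; j++ {
				shards[i].n++ // private to this goroutine
			}
		}(i)
	}
	wg.Wait()
	var total int64
	for i := range shards {
		total += shards[i].n
	}
	return total
}

func main() {
	fmt.Println(contendedCount(), shardedCount()) // both print 8000000
}
```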
- Mike Hearn, in a great interview on a16z, Hard Forks, Hard Choices for Bitcoin, had much to say on the future scalability of Bitcoin. One of the key ideas is that many of the things people love about Bitcoin flow from its decentralized nature: it's permissionless, it's the new gold, it doesn't have a centralized policy committee, it's a global network, and it's a platform you can innovate on top of. One of the challenges with keeping the current block size is that decentralization is already under stress. A certain amount of centralization has crept in with ever bigger miners. If three or four companies collaborated they could start to apply policy influence to Bitcoin, and that would erode all the interesting properties people love about it. The challenge is to scale Bitcoin in balance with decentralization. Scaling and security, as embodied by decentralization, are a tradeoff. You can scale massively and lose decentralization, at which point Bitcoin becomes PayPal. Yet if you keep the block size the same, Bitcoin can't be used by a worldwide audience.
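For a sense of why the block size matters for a worldwide audience, here's a rough back-of-the-envelope calculation; the 1 MB limit and 10 minute block interval are real, the ~250 byte average transaction size is a ballpark assumption:

```go
package main

import "fmt"

func main() {
	const (
		blockBytes   = 1000000.0 // ~1 MB block size limit
		avgTxBytes   = 250.0     // ballpark average transaction size
		blockSeconds = 600.0     // ~10 minutes between blocks
	)
	txPerBlock := blockBytes / avgTxBytes
	tps := txPerBlock / blockSeconds
	fmt.Printf("~%.0f transactions per block, ~%.1f tx/sec\n", txPerBlock, tps)
	// Prints roughly 4000 transactions per block and ~6.7 tx/sec,
	// which is nowhere near a global payment network.
}
```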
- Where have we heard of the power of controlling information before? Researchers claim that Google could 'rig the 2016 election': Through five experiments in two countries, they found that biased rankings in search results can shift the opinions of undecided voters by 20% or more, sometimes even reaching as high as 80% in some demographic groups.
- Balance is the most difficult quality to attain. Worst-Case Distributed Systems Design: This notion of worst-case-improves-average-case is particularly interesting because designing for the worst case doesn’t always work out so nicely...By handling the worst case, I lose in the average case...Often, there’s a pragmatic trade-off between how much we’re willing to pay to handle extreme conditions and how much we’re willing to pay in terms of average-case performance...Essentially, the defining feature of our distributed systems—the network—encourages and rewards us to minimize our reliance on it.
- The Mobile CPU Core-Count Debate: Analyzing The Real World created quite a stir. 130+ comments on the article, 80+ comments on reddit, and a paltry 20+ comments on HackerNews. What's all the fuss about? The question this excellently written and researched article addresses is: how useful would homogeneous 8-core designs be in the real world on mobile Android devices? Common wisdom says it's overkill. The rather surprising conclusion: Android devices can make much better use of multi-threading than initially expected. There's very solid evidence that not only are 4.4 big.LITTLE designs validated, but we also find practical benefits of using 8-core "little" designs over similar single-cluster 4-core SoCs.
- I wonder what it thinks when run on itself? Computer Scientists Find Bias in Algorithms.
- The problem is we've tried putting the entire world in the database many times before. It doesn't scale. Sorry. infrastructure as a database: The more I work with infrastructure as code the more I dislike it. I think that’s because fundamentally it’s the wrong metaphor. Code is terrible, it’s brittle and constantly breaking...The proper metaphor is infrastructure as a streaming/reactive database...So where is the infrastructure as a database movement?
- You might think partitioned queries are faster. Not always. Why is This Partitioned Query Slower?: For these reasons, table partitioning is typically not a great fit for SQL Servers with an OLTP pattern where slow queries are the biggest pain point. Traditional index tuning and query rewrites will usually get you better performance with less hassle.
- Josh Corman in an interview on DevOps Cafe shows a humbling depth of purpose and resolve that serves as a reminder that a single person with a mission can make a difference. Truly wonderful. On a more mundane level the idea of single sourcing components for security reasons is interesting. A company that has 80 different logging frameworks is opening itself up to hacking risk. Who is checking all those frameworks for vulnerabilities? Who is updating them when vulnerabilities are found? Most likely no one. There is a conflict here with microservices. This is the typical swing between order and disorder that we see so often in the world. Yes, a single validated supplier in the supply chain for components is a good idea, but so too is letting teams do their own thing. Again, it's a balance thing. We suck at balance.
- Here's How CockroachDB Does Distributed, Atomic Transactions. The steps: Switch, Stage, Flip, Unstage. mrtracy: In terms of Google products, CockroachDB much more closely resembles a combination of Spanner and F1. Excellent discussion on Hacker News.
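Here is a toy, single-process sketch of that switch/stage/flip/unstage sequence (names and structure are illustrative only, nothing like CockroachDB's actual distributed implementation): staged values sit next to committed ones, and one atomic flip of the switch makes every staged write visible at once.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// cell holds a committed value plus a value staged by an in-flight transaction.
type cell struct {
	committed string
	staged    string
}

// store is the toy key-value store; flag is the transaction "switch".
type store struct {
	cells map[string]*cell
	flag  int32 // 0 = read committed values, 1 = staged values are live
}

// stage writes the new value next to the live one instead of over it.
func (s *store) stage(key, val string) {
	s.cells[key].staged = val
}

// flip commits the whole transaction with a single atomic write.
func (s *store) flip() {
	atomic.StoreInt32(&s.flag, 1)
}

// unstage folds the now-committed staged values into place and resets the switch.
func (s *store) unstage() {
	for _, c := range s.cells {
		if c.staged != "" {
			c.committed, c.staged = c.staged, ""
		}
	}
	atomic.StoreInt32(&s.flag, 0)
}

func (s *store) read(key string) string {
	c := s.cells[key]
	if atomic.LoadInt32(&s.flag) == 1 && c.staged != "" {
		return c.staged
	}
	return c.committed
}

func main() {
	s := &store{cells: map[string]*cell{
		"alice": {committed: "$100"},
		"bob":   {committed: "$0"},
	}}
	s.stage("alice", "$0") // both halves of the transfer are staged...
	s.stage("bob", "$100")
	s.flip() // ...and become visible together when the switch flips
	fmt.Println(s.read("alice"), s.read("bob")) // $0 $100
	s.unstage()
	fmt.Println(s.read("alice"), s.read("bob")) // still $0 $100
}
```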
- Poetry it's not. A lament to the Enterprise of yesteryear: We're being hit by disruptive innovation! Our industry is being commoditised! Our business is complex! We've created a strategy! Hired the best! They said they were experts! Marketing is key! And the future is private! Or Enterprise! Or Hybrid! The problem is our culture! But we have a solution! And this time it'll be different! Or I will rend thee in the gobberwarts with my blurglecruncheon, see if I don't!
- Writing custom software for high performance is often a competitive advantage, but it can go so very wrong. Here's a well told story of such a case. In it the hero does not overcome; he wisely retreats to fight another day. The most obsolete infrastructure money could buy - my worst job ever: they had their own ingenious in-house programming language that you could think of as an imperative Erlang with a Pascal-like syntax that was compiled to C source...The result of compiling this C code would then be run on an ingenious in-house operating system...
- From the hard to understand change the world department...Quantum Engineering of Superconducting Qubits: Quantum information will lead to a second information processing revolution. Public-key encryption can be broken in an hour by a quantum computer based on superconductors running at 100 megahertz. Conventional methods would require a petaflop computer running until the end of the universe.
- Agreed. RESTful APIs, the big lie: REST is a great mechanism for many things such as content delivery, and it has served us well for two decades. But it's time to break the silence and admit that the RESTful API concept is probably one of the worst ideas ever widely adopted in web software.
- An interesting story about stuff: From Pony Express to Amazon Drone: The Strange History.
- Lots and lots of juicy details on How imgix Built A Stack To Serve 100,000 Images Per Second: The core infrastructure of imgix is composed of many service layers. There is the origin fetching layer, the origin caching layer, the image processing layer, the load balancing and distribution layer, and the content delivery layer.
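As a toy illustration of how such layers compose (purely hypothetical names, not imgix's code), each layer can simply wrap the next one, the same way caching sits in front of origin fetching and processing sits in front of both:

```go
package main

import (
	"fmt"
	"strings"
)

// fetcher is one layer of the pipeline: give it a URL, get back image bytes.
type fetcher func(url string) string

// originFetch stands in for the origin fetching layer.
func originFetch(url string) string {
	return "raw bytes of " + url
}

// withCache stands in for the origin caching layer: hit the cache first,
// fall through to the next layer on a miss.
func withCache(next fetcher) fetcher {
	cache := map[string]string{}
	return func(url string) string {
		if img, ok := cache[url]; ok {
			return img
		}
		img := next(url)
		cache[url] = img
		return img
	}
}

// withProcessing stands in for the image processing layer.
func withProcessing(next fetcher) fetcher {
	return func(url string) string {
		return strings.ToUpper(next(url)) // stand-in for resize/crop/encode
	}
}

func main() {
	pipeline := withProcessing(withCache(originFetch))
	fmt.Println(pipeline("http://example.com/cat.jpg"))
}
```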
- Jeff Dean with Software Engineering Advice from Building Large-Scale Distributed Systems. When you've said Jeff Dean what more is there to say?
- A Gentle Introduction to Lockless Concurrency. Excellent coverage of Optimistic Concurrency, Compare and Swap, Atoms, Thinking in Increments, Immutable Data Structures, and Persistent Data Structures.
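If compare-and-swap is new to you, the core optimistic retry loop is tiny. A minimal Go sketch of a lock-free increment; the same read-compute-CAS shape is what you use to swap in a new version of an immutable or persistent structure:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// casAdd is the classic optimistic loop: read the current value, compute the
// update, and only install it if nobody else won the race in between.
func casAdd(addr *int64, delta int64) {
	for {
		old := atomic.LoadInt64(addr)
		if atomic.CompareAndSwapInt64(addr, old, old+delta) {
			return // our update landed
		}
		// Another writer changed *addr first; retry with the fresh value.
	}
}

func main() {
	var counter int64
	var wg sync.WaitGroup
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 100000; j++ {
				casAdd(&counter, 1)
			}
		}()
	}
	wg.Wait()
	fmt.Println(counter) // 800000, with no locks taken
}
```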
- Backpressure at the application level is a deep key to scalable performance. Here's an excellent explanation of how backpressure works: How Flink handles backpressure.
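The principle is easy to demonstrate outside of Flink: put a bounded buffer between producer and consumer and the producer is automatically throttled when the consumer falls behind. A sketch of the idea in Go (not Flink's implementation, which propagates the same effect across the network via bounded buffer pools):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// A small bounded buffer between producer and consumer. When it fills
	// up, sends block: that blocking is the backpressure.
	buf := make(chan int, 4)

	go func() { // fast producer
		for i := 0; i < 12; i++ {
			buf <- i // blocks whenever the consumer falls behind
			fmt.Println("produced", i)
		}
		close(buf)
	}()

	for v := range buf { // slow consumer
		time.Sleep(50 * time.Millisecond)
		fmt.Println("consumed", v)
	}
}
```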
- Nice overview of the Berkeley DB: Architecture. Simple enough to understand while being highly functional.
- The densest sentence to unpack I've read in a while. Olivier Paugam: My goal was not so much to describe how we do eventing. I'd rather get people to look at Mesos under a slightly different angle and realize they can truly get high velocity and fluidity by just inverting the traditional framework model and let the containers be the real orchestrator.
- A very detailed story on how Stripe migrated bajillions of database records. As glossed by manigandham: This is a pretty normal way to handle things: double writes, active sync/migration, double reads, disable old writes, finish sync, disable old reads.
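A hedged sketch of the double-write portion of that sequence (the Store interface and in-memory stores are hypothetical stand-ins, not Stripe's code): every write goes to both stores while the old one stays authoritative for reads, and a flag flips reads over once the backfill has caught up.

```go
package main

import "fmt"

// Store is a hypothetical stand-in for the old and new databases.
type Store interface {
	Write(key, val string)
	Read(key string) string
}

type mem struct{ m map[string]string }

func (s *mem) Write(k, v string)    { s.m[k] = v }
func (s *mem) Read(k string) string { return s.m[k] }

// migrator implements the "double writes, then flip reads" part of the sequence.
type migrator struct {
	oldDB, newDB Store
	readFromNew  bool // flipped only after the backfill has caught up
}

func (m *migrator) Write(k, v string) {
	m.oldDB.Write(k, v) // the old store stays authoritative...
	m.newDB.Write(k, v) // ...while the new store receives every write too
}

func (m *migrator) Read(k string) string {
	if m.readFromNew {
		return m.newDB.Read(k)
	}
	return m.oldDB.Read(k)
}

func main() {
	m := &migrator{
		oldDB: &mem{m: map[string]string{}},
		newDB: &mem{m: map[string]string{}},
	}
	m.Write("user:1", "alice")
	fmt.Println(m.Read("user:1")) // served by the old store
	m.readFromNew = true          // flip after sync/verification
	fmt.Println(m.Read("user:1")) // now served by the new store
}
```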
- Four Lessons That 30+ Years of Software Development Has Taught Me About Building Games: If it ain’t fun, you’re doing it wrong; Learn all the time because knowledge and skills are easy to bear; Language is irrelevant but mandatory.
- Go GC: Prioritizing low latency and simplicity. Great discussion on Hacker News. If you are worried that Google will disappear Golang then take heart, they are planning out to 2025, which is like forever in tech years. In that big memory future it appears Go will use huge memory as a way to reduce the number of garbage collection cycles. Which is good, but it does mean you will be "wasting" a lot of memory. I guess it's a balance thing. Yet I wonder: is garbage collection really a needed hypothesis?
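The memory-for-fewer-collections trade is already visible through the GOGC knob (or runtime/debug.SetGCPercent): raising it lets the heap grow further between cycles, so collections happen less often at the cost of a larger resident heap. A small sketch; exact counts will vary by machine and Go version:

```go
package main

import (
	"fmt"
	"runtime"
	"runtime/debug"
)

var sink []byte // keep allocations alive long enough to reach the heap

// churn allocates a pile of short-lived garbage to provoke collections.
func churn() {
	for i := 0; i < 200000; i++ {
		sink = make([]byte, 1024)
	}
}

// collections reports how many GC cycles the same workload costs
// under a given GOGC setting.
func collections(gcPercent int) uint32 {
	debug.SetGCPercent(gcPercent)
	var before, after runtime.MemStats
	runtime.ReadMemStats(&before)
	churn()
	runtime.ReadMemStats(&after)
	return after.NumGC - before.NumGC
}

func main() {
	// Higher GOGC lets the heap grow more between cycles: fewer GCs,
	// larger resident heap. Exact counts vary by machine and Go version.
	fmt.Println("GOGC=100:", collections(100), "collections")
	fmt.Println("GOGC=800:", collections(800), "collections")
}
```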
- A few of the 16 product things Sam Gerstenzang learned at Imgur: Every interface can be made simpler; Every feature you launch is a feature you’ll need to support; Our natural inclination is to assume our past success is because of our past actions; Find the one thing you need to get right and spend most of your time on it; Most partnerships are a waste of time.
- Now we know to disrupt yourself before someone else does it to you. In 1975, this Kodak employee invented the digital camera. His bosses made him hide it. But then there is human nature: “When you’re talking to a bunch of corporate guys about 18 to 20 years in the future, when none of those guys will still be in the company, they don’t get too excited about it”
- Here are the components Nathan Fritz used to build Yeti Threads on node.js: JSON Web Tokens; Postgresql; hapi; VeryModel; Gatepost; PGBoom; Websockets.
- Ah, more balance problems. Minecraft Billionaire Markus Persson Hates Being a Billionaire.
- Here's Part 2 of How does the Linux kernel handle a system call.
- Unison Ceph: a storage cluster with demonstrated performance gains of up to 1.8x for writes and 2.3x for reads.
- Rusty: Rusty is a light-weight, user-space, event-driven and highly scalable TCP/IP stack. It has been developed to run on an EZChip TILE-Gx36 processor. A simple web server using this new framework got a 2.6× performance improvement when compared to the same application layer running on the new reusable TCP sockets introduced in Linux 3.9.
- Dkron: a distributed system to run scheduled jobs against a server or a group of servers of any size.
- AsterixDB: A Scalable, Open Source BDMS: AsterixDB is a full-function BDMS (emphasis on M) that is best characterized as a cross between a Big Data analytics platform, a parallel RDBMS, and a NoSQL store, yet it is different from each. Unlike Big Data analytics platforms, AsterixDB offers native data storage and indexing as well as querying of datasets in HDFS or local files; this enables efficiency for smaller as well as large queries. Unlike parallel RDBMSs, AsterixDB has an open data model that handles complex nested data as well as flat data and use cases ranging from “schema first” to “schema never”. Unlike NoSQL stores, AsterixDB has a full query language that supports declarative querying over multiple data sets.
- Software-Defined Caching: Managing Caches in Multi-Tenant Data Centers: We present Moirai, a tenant- and workload-aware system that allows data center providers to control their distributed caching infrastructure. Moirai can help ease the management of the cache infrastructure and achieve various objectives, such as improving overall resource utilization or providing tenant isolation and QoS guarantees, as we show through several use cases. A key benefit of Moirai is that it is transparent to applications or VMs deployed in data centers. Our prototype runs unmodified OSes and databases, providing immediate benefit to existing applications.