Hey, it's HighScalability time:
An astonishing 300 billion stars in our galaxy have planets. Take a look in the Eyes on Exoplanets app.
- 1 billion: people who used Facebook in a single day; 2.8 million: sq. ft. in new Apple campus (with drone pics); 1.1 trillion: Apache Kafka messages per day; 2,000 years: age of termite mounds in Central Africa; 30: # of times the human brain is better than the best supercomputers; 4 billion: requests it took to trigger an underflow bug.
- Quotable Quotes:
- Sara Seager: If an Earth 2.0 exists, we have the capability to find and identify it by the 2020s.
- Android Dick: But you’re my friend, and I’ll remember my friends, and I’ll be good to you. So don’t worry, even if I evolve into Terminator, I’ll still be nice to you. I’ll keep you warm and safe in my people zoo, where I can watch you for ol’ times sake.
- @viktorklang: If the conversation is typically “scale out” versus “scale up”: if we’re coordination-free, we get to choose “scale out” while “scaling up.”
- Amir Najmi: At Google, data scientists are just too much in demand. Thus, anytime we can replace data scientist thinking with machine thinking, we consider it a win.
- @solarce: "don’t be content that the software seems to basically work — you must beat the hell out of it" -- @bcantrill
- John Ralston Saul: I have enormous confidence in the individual as citizen. I don't think there is any proof in our 2,500 years of history that the elites do a good job without the close involvement of the citizenry.
- Joshua Strebel: on average Aurora RDS is 3x faster than MySQL RDS when used with WordPress.
- Martin Thompson: I'd argue that "state of the art" in scalable design is to have no contention. It does not matter if you manage contention with locks or CAS techniques. Once you have contention, the Universal Scalability Law kicks in, as you have to face the contention and coherence penalty that contended access to shared state/resources brings. Multiple writers to shared state are a major limitation on the scalability of any design. Persistent data structures make this problem worse, not better, due to path-copy semantics, which are amplified by the richness of the domain model.
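- Thompson's point can be made concrete with the Universal Scalability Law itself, C(N) = N / (1 + σ(N−1) + κN(N−1)), where σ is the contention penalty and κ the coherence penalty. A minimal sketch (the parameter values below are illustrative assumptions, not measurements from any real system):

```python
def usl_capacity(n, sigma, kappa):
    """Universal Scalability Law: relative capacity at n concurrent workers.

    sigma: contention penalty (serialized fraction of work)
    kappa: coherence penalty (cost of keeping shared state consistent)
    """
    return n / (1 + sigma * (n - 1) + kappa * n * (n - 1))

# A contention-free design (sigma = kappa = 0) scales linearly; even small
# contention and coherence penalties flatten the curve and eventually make
# adding workers counterproductive (retrograde scaling).
for n in (1, 8, 64):
    free = usl_capacity(n, 0.0, 0.0)
    contended = usl_capacity(n, 0.05, 0.001)
    print(f"n={n:3d}  contention-free={free:6.1f}  contended={contended:6.2f}")
```

With these assumed penalties, 64 workers deliver less than 8 workers' worth of contention-free capacity, which is why eliminating contention (e.g. a single-writer design) beats merely managing it with locks or CAS.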
- Mike Hearn, in a great interview on a16z, Hard Forks, Hard Choices for Bitcoin, had much to say about the future scalability of Bitcoin. One of the key ideas is that many of the things people love about Bitcoin stem from its decentralized nature: it's permissionless, it's the new gold, it has no centralized policy committee, it's a global network, and it's a platform you can innovate on top of. One of the challenges with keeping the current block size is that decentralization is already under stress. A certain amount of centralization has crept in with ever bigger miners. Through the collaboration of just three or four companies, miners could start to apply policy influence to Bitcoin, and that would erode all the interesting properties people love about it. The challenge is to scale Bitcoin in balance with decentralization. Scaling and security, as encapsulated by decentralization, are tradeoffs. You can scale massively and lose decentralization, at which point Bitcoin becomes PayPal. Yet if you keep the block size the same, Bitcoin can't be used by a worldwide audience.
Don't miss all that the Internet has to say on Scalability. Click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read, so please keep on reading)...