Stuff The Internet Says On Scalability For October 26th, 2018
Wake up! It's HighScalability time:
Sometimes old school is best.
Do you like this sort of Stuff? Please support me on Patreon. I'd really appreciate it. Know anyone looking for a simple book explaining the cloud? Then please recommend my well reviewed book: Explain the Cloud Like I'm 10 (30 reviews!) They'll love it and you'll be their hero forever.
- 23%: fraudulent ad impressions; 10: years jQuery File Upload Plugin has been vulnerable; 240%: numpywren’s compute efficiency (total CPU-hours) improvement on serverless; 2 weeks: time it takes to create a billion MySQL tables; 1,000: parts in 3D printed rocket; 70%: decrease in face-to-face interactions in open offices; $100+ million: commercial open source software company revenue index; 200 million: ebikes in China;
- Quotable Quotes:
- Tim Cook: We at Apple are in full support of a comprehensive federal privacy law in the United States. There, and everywhere, it should be rooted in four essential rights: First, the right to have personal data minimized. Companies should challenge themselves to de-identify customer data—or not to collect it in the first place. Second, the right to knowledge. Users should always know what data is being collected and what it is being collected for. This is the only way to empower users to decide what collection is legitimate and what isn't. Anything less is a sham. Third, the right to access. Companies should recognize that data belongs to users, and we should all make it easy for users to get a copy of, correct, and delete their personal data. And fourth, the right to security. Security is foundational to trust and all other privacy rights.
- Eric Feng: there’s now 1 venture capital investment being made every hour of every day, 7 days a week, 365 days a year.
- @Werner: Never let facts interrupt a "good story." Tried to help reporter get it right, but clickbait won. Our Fulfillment Centers have migrated 92% of DBs from Oracle to Aurora with better avail, less bugs and patches, less troubleshooting, less hw cost. More:
- @swardley: I occasionally hear companies announcing they are kicking off a "DevOps" program and I can't but feel sympathy for them. 7 years from now, when they finally finish, they will be in a world of capital flow / conversational programming and well ... it's a bit sad really.
- @cornazano: Building the wrong thing is a nightmare. Constraining the engineers tends to lead to poorer results; giving them choices produces a better chance of success. #DOES18
- @markimbriaco: The greatest trick the devil ever pulled was convincing the world that the network is reliable.
- @troyhunt: I've been moving *a lot* of @haveibeenpwned stuff to @AzureFunctions lately which has massively cut my app services costs (hundreds per month). Just checked my Function usage for the last month: 77,500,000 executions and 1,580,000,000,000 execution units which costs... $33.59
- Jesse Allen: Arm announced its new roadmap promising 30% annual system performance gains on leading edge nodes through 2021. These gains are to come from a combination of microarchitecture design along with hardware, software, and tools. They are branding this new roadmap ‘Neoverse.’ The first delivery will be Ares – expected in early 2019 – for a 7nm IP platform targeting 5G networks and next-generation cloud to edge infrastructure. Synopsys released QuickStart Implementation Kits (QIKs), including scripts and a reference guide, for Neoverse processors in 7nm process technology.
- Peter Bright: But saying Microsoft should only produce one update a year instead of two, or criticising the very idea of Windows as a Service, is missing the point. The problem here isn't the release frequency. It's Microsoft's development process.
- jrockway: I think Kodak merely picked the wrong pivot. They thought they were a photography company, but they were actually a chemical company. There is plenty of demand for chemicals these days, and if they had stuck with that, they'd be doing fine.
- @ShortJared: Run "Your own AWS Lambda" the product says. Missing the point entirely. That's the last thing you ever want to do. I don't care which FaaS you use, just don't be the one running it.
- Chaslot: I realized personally that things were going wrong in 2011, when I was working at Google. I was working on this YouTube recommendation algorithm, and I realized that the algorithm was always giving you the same type of content. For instance, if I give you a video of a cat and you watch it, the algorithm thinks, Oh, he must really like cats. That creates these filter bubbles where people just see one type of information. But when I notified my managers at Google and proposed a solution that would give a user more control so he could get out of the filter bubble, they realized that this type of algorithm would not be very beneficial for watch time. They didn’t want to push that, because the entire business model is based on watch time.
- @joemckendrick: The New York Times eliminated five data centers and uses AWS and Google clouds. The Times uses cloud elasticity instead of having to architect its technology for peak season, which hits every four years on presidential election day.
- @BrianRoemmele: Posit: At 230 qubits (~10 years) it requires more numbers to describe how the Quantum Computer AI works than there are atoms in the Universe. A 1000 (~20 years) qubit computer gets unimaginably bigger. If not multiverses, then where does the computation happen? Takes my breath.
- Daniel Tunkelang: Deep learning for search feels like the new teenage sex: everyone talks about it, nobody really knows how to do it; everyone thinks everyone else is doing it; so everybody claims they’re doing it.
- @JeffDean: At Google, we've been getting a better understanding of issues of bias & fairness in machine learning models as we've used ML throughout more of our products. We've also created training for Google engineers on these topics, and we've now made this material available externally.
- Tim Carmody: it’s the Industrial Revolution that created — or was created by — this notion that machines could be made in parts that fit together so closely that they could be interchangeable. That’s what got our machine age going, which in turn enabled guns and cars and transistors and computers and every other thing.
- wenc: PipelineDB seems to do continuous aggregation, so the type of data it deals with is essentially summary data. If you know your summary function a priori, this can lead to very compact and efficient storage. The use case for this is reporting, dashboarding, etc. TimescaleDB on the other hand deals with raw data. This is useful if you have multiple parties needing different types of aggregation from the same raw data. Also, if you want to do any kind of machine learning, raw unaggregated data would typically be more useful. (A toy sketch contrasting the two models follows at the end of the quotes.)
- @alexandraerin: "Machine learning is like money laundering for bias."
- @Obdurodon: A surprisingly common form of hypocrisy: "Smash the nanny state! People should be free to make their own choices!" ...goes to work... "Every coder in the company must use exactly these tools, exactly this way, to write code exactly like this, for their own good!"
- Charlie Demerjian: For several years now SemiAccurate has been saying that the 10nm process as proposed by Intel would never be financially viable. Now we are hearing from trusted moles that the process is indeed dead, and that is a good thing for Intel; had they continued along their current path, the disaster would have been untenable. Our moles are saying the deed has finally been done.
- @tsrandall: One note of caution: Tesla has a unique relationship with suppliers in which it pays them over several months after the sale of the car while Tesla accounts for $ immediately. During rapid production ramp, this creates a bit of accounting distortion that gradually catches up 7/
- @JeffDean: Julia + TPUs = fast and easily expressible ML computations!
- @stochastician: With Lambda and S3 we can get around that by dynamically allocating both compute and memory, and exploiting the insane S3 bandwidth (up to 500 GB/sec, not a typo) to be more efficient in terms of compute resources while being slightly slower to run end-to-end. Now your response might be like Dr Malcolm's, and I'm sympathetic! But I think it's incredibly exciting to be pushing stateless-services (with backing stateful stores of incredible scale) in this direction, and I really do dream of a day of fully-elastic linear algebra. /4
- Science Daily: Scientists have taken part in research where the first molecule capable of remembering the direction of a magnetic field above liquid nitrogen temperatures has been prepared and characterized. The results may be used in the future to massively increase the storage capacity of hard disks without increasing their physical size.
- Idan Ginsburg: However, given large enough survival lifetimes, even hypervelocity objects traveling at over 1,000 km/s have a significant chance of capture, thereby increasing the likelihood of panspermia. Thus, we show that panspermia is not exclusively relegated to solar-system sized scales, and the entire Milky Way could potentially be exchanging biotic components across vast distances.
- hsaliak: gRPC lets you use other data exchange formats or IDLs as well, such as flatbuffers. However, the protobuf codegen experience is what we have spent most of our time and energy on. I see two sides to this - on one hand, there are folks who want a 'contract first' development experience, in which the service contracts are defined first, and the business logic is implemented later. gRPC lends itself to this model very well. Admittedly, this is also the way services are developed in Google. On the other hand, there are folks who want a model whereby you evolve a service and generate the specs from that service. Currently, this is not the experience that gRPC is optimized for.
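To make the PipelineDB/TimescaleDB tradeoff above concrete, here's a toy Python sketch (my own names, not either product's API): a continuous aggregate folds each event into a fixed-size summary chosen a priori, while a raw store keeps every row so new questions—or ML—remain possible later.

```python
from collections import defaultdict

class ContinuousAvg:
    """PipelineDB-style: the summary function is fixed up front; storage is O(keys)."""
    def __init__(self):
        self.sums = defaultdict(float)
        self.counts = defaultdict(int)

    def ingest(self, key, value):
        self.sums[key] += value
        self.counts[key] += 1

    def avg(self, key):
        return self.sums[key] / self.counts[key]

class RawStore:
    """TimescaleDB-style: keep raw rows; any aggregation stays possible later."""
    def __init__(self):
        self.rows = []

    def ingest(self, key, value):
        self.rows.append((key, value))

    def query(self, key, agg):
        return agg(v for k, v in self.rows if k == key)

ca, rs = ContinuousAvg(), RawStore()
for key, value in [("sensor1", 3.0), ("sensor1", 5.0)]:
    ca.ingest(key, value)
    rs.ingest(key, value)
print(ca.avg("sensor1"))         # 4.0 — only the pre-chosen summary survives
print(rs.query("sensor1", max))  # 5.0 — raw rows can answer questions asked later
```

The continuous aggregate can never answer `max`, because only the sum and count survive ingestion; the raw store can, at the cost of keeping every row.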
- Hubble Telescope’s Broken Gyroscope Seemingly Fixed After Engineers Try Turning It Off and On Again. Embedded systems usually have a safe mode they can enter on a failure they don't know how to handle. And of course people are focusing on the reboot fixing things, but this is the interesting bit: "One of the older gyros failed after outlasting its lifetime by six months. But when the team tried to turn on a backup gyro [after the gyro had been off for more than 7.5 years], it didn’t function properly." Usually when you have an active-passive architecture you periodically test the passive component, or alternate which component is active and which is passive. Wonder why it was left dormant for so long?
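A minimal sketch of that drill, assuming a self-test hook on each unit (all names hypothetical): exercise the standby on a schedule, or swap roles outright, so a dead backup is discovered while the active unit is still healthy.

```python
class Gyro:
    def __init__(self, name, healthy=True):
        self.name, self.healthy = name, healthy

    def self_test(self):
        return self.healthy

def failover_drill(active, standby):
    """Run on a schedule (e.g., monthly): swap roles when the standby passes
    its self-test, alarm when it doesn't, so both units accrue verified run time."""
    if not standby.self_test():
        print(f"ALERT: standby {standby.name} is dead while {active.name} still works")
        return active, standby
    return standby, active  # healthy standby becomes active; roles alternate

active, standby = Gyro("gyro-1"), Gyro("gyro-2", healthy=False)
active, standby = failover_drill(active, standby)  # surfaces gyro-2's fault early
```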
- Great example of what disruption looks like. Why Kodak Died and Fujifilm Thrived: A Tale of Two Film Companies. At first there's no threat to an incumbent because there's no competition. Then "the market began shrinking very slowly, then picked up speed and finally plunged at the rate of twenty or thirty percent a year. In 2010, worldwide demand for photographic film had fallen to less than a tenth of what it had been only ten years before." But the story is not what you might think: In reality, Kodak failed for the same reason that Fujifilm succeeded: diversification. But for Kodak, it was the lack of diversification that condemned this firm to fade. Unlike Fujifilm, which recognized early on that photography was a doomed business and tackled new markets with a completely different portfolio, Kodak made a wrong analysis and persisted in the decaying photo industry. Essentially, it’s not that Kodak didn’t want to change, it tried hard, but it did it wrong. Faced with a radical market disruption, it reacted energetically, but doing something and doing the right thing are different.
- When someone says something was created by AI that's not exactly true. A lot of people put a lot of effort into building the AI software. Is it ethical to make money directly off the open source work of others? Ask Robbie Barrat. This is a theory meets practicality situation. In theory when you open source your AI for creating art you're cool with other people using it. But in practice when someone takes your work, generates a crappy painting, and sells it for a lot of money at auction—it's gotta hurt.
- The world is becoming more programmable, down to the device level. To understand that you need to know about all the cool low level networking things you can do with Linux. Netdevconf videos are available online. Then there's a great netdev 0x12 Software Gone Wild podcast episode. Lots of talk about AF_XDP, a new address family optimized for high performance packet processing and zero-copy semantics. It's a kernel bypass for access to the DMA ring. Useful for DDoS prevention tools. The control path still goes through the kernel, it works for any sort of device, and it can handle 100 gig line rate. There's BPF code offload into high-end NICs, or smart offload of tc policies into older NICs. BPF lets you put bytecode into the kernel to process packets. P4 does a similar thing, but it came out of the router world. P4 is a language that could compile down to BPF bytecode. We're seeing a hierarchy where the same code can run in user space, in the kernel, or be offloaded onto a device. Linux is ready to be a high-end white box router; all we're waiting for is someone to do it and/or customers to demand it.
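For a taste of the "bytecode into the kernel" idea, here's a hedged, Linux-only Python sketch using classic BPF (the ancestor of the eBPF/XDP work discussed in the episode), attaching a one-instruction accept-all program to a raw socket. Requires root; constants come from the kernel headers noted in the comments.

```python
import ctypes, socket, struct

SO_ATTACH_FILTER = 26  # from <asm-generic/socket.h>
ETH_P_ALL = 0x0003     # from <linux/if_ether.h>: all protocols

# struct sock_filter { __u16 code; __u8 jt; __u8 jf; __u32 k; }
# BPF_RET | BPF_K (0x06): return the constant k = bytes of packet to accept.
ACCEPT_ALL = struct.pack("HBBI", 0x06, 0, 0, 0xFFFF)  # k=0 would drop everything

insns = ctypes.create_string_buffer(ACCEPT_ALL)
# struct sock_fprog { unsigned short len; struct sock_filter *filter; }
fprog = struct.pack("HL", 1, ctypes.addressof(insns))

s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
s.setsockopt(socket.SOL_SOCKET, SO_ATTACH_FILTER, fprog)
print("filter attached; the kernel now runs our bytecode on every packet")
s.close()
```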
- Very detailed diagrams that really help you understand what's going on. Consistency without Clocks: The FaunaDB Distributed Transaction Protocol. Each transaction proceeds in three phases: the first phase is a speculative phase in which reads are performed as of a recent snapshot and writes are buffered; next, a consensus protocol (Raft) is used to insert the transaction into a distributed log; finally, a check begins in each replica which verifies the speculative work. If that speculative work did not result in potential violations of serializability guarantees, then the work becomes permanent and the buffered writes are written back to the database. Otherwise, the transaction is aborted and restarted. Paper. Summary. Good discussion on HN.
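A single-process cartoon of those three phases, with a plain list standing in for the Raft-replicated log (FaunaDB runs the verification in every replica; nothing here is FaunaDB's actual code):

```python
class Store:
    def __init__(self):
        self.data, self.versions, self.log = {}, {}, []

    def transact(self, txn_fn):
        # Phase 1 (speculative): read from a snapshot, buffer all writes.
        snapshot = dict(self.data)
        read_versions, buffered = {}, {}

        def read(key):
            read_versions[key] = self.versions.get(key, 0)
            return snapshot.get(key)

        def write(key, value):
            buffered[key] = value

        txn_fn(read, write)
        # Phase 2 (consensus): append the transaction to the replicated log.
        self.log.append((read_versions, buffered))
        # Phase 3 (verify): abort if any key we read changed since the snapshot.
        if any(self.versions.get(k, 0) != v for k, v in read_versions.items()):
            raise RuntimeError("speculative work invalidated: abort and retry")
        for key, value in buffered.items():
            self.data[key] = value
            self.versions[key] = self.versions.get(key, 0) + 1

store = Store()
store.transact(lambda read, write: write("balance", (read("balance") or 0) + 10))
print(store.data)  # {'balance': 10}
```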
- Listen to Todd Montgomery, you'll get the message about how to deal with state in your service. Efficient Fault Tolerant Java with Aeron Clustering. We've been building services for years where state is stored in a database. The issue is fault tolerance of state, which lives on a spectrum from partitioned to replicated. Enterprise services typically send messages to queues. Messaging/queuing systems don't hold state, they just preserve messages for downstream systems. Messaging systems are non-deterministic, difficult to test, prone to failure. A better way is a continuous log with snapshot and replay. Most of our spacecraft use a mark and rollback system. They periodically checkpoint chunks of state so that if there's a crash you can restart at a known point and roll forward from there, reconstructing events. Now take that log and replicate it across machines in a cluster, and what you have is replicated state machines. Each replicated service sees the same event log, in the same order, and the log is replicated locally. Taking checkpoints of state allows you to roll up the log so you don't care about previous log events, you just care about the state. Raft consensus is used to establish distributed checkpoints. This means the log is immutable; the log can be played, stopped, and replayed. Since each event is timestamped, services can be restarted from a snapshot and a log. What can you do with this? Easy to build distributed key/value stores, distributed timers, distributed locks. You can also build matching engines, order management, market surveillance, venue ticketing and reservations, auctions, chat, CQRS. A lot of these are based on the idea of removing database contention. Aeron provides all communications in a cluster. Consensus is based on Aeron stream position. Aeron has a simple API.
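The snapshot-plus-replay recovery model is easy to see in miniature. A hedged Python sketch (Aeron itself is Java; this shows the pattern, not its API): a deterministic state machine, an append-only log, a periodic checkpoint, and recovery by replaying only the log tail after the snapshot.

```python
class Counter:
    """Deterministic state machine: the same events, in order, rebuild the same state."""
    def __init__(self, value=0):
        self.value = value

    def apply(self, event):
        self.value += event  # events are plain increments in this toy

log, snapshot = [], None  # append-only log; snapshot = (log position, state)
sm = Counter()
for i, event in enumerate([5, 3, 7, 2]):
    log.append(event)
    sm.apply(event)
    if i == 1:  # periodic checkpoint lets older log entries be rolled up
        snapshot = (len(log), sm.value)

# Crash recovery: restore the snapshot, then replay only the log tail.
pos, value = snapshot
recovered = Counter(value)
for event in log[pos:]:
    recovered.apply(event)
assert recovered.value == sm.value  # deterministic replay converges
```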
- The Internet Apologizes. Interesting conversation, but none of these people invented the internet. You can't blame a rickety house on those who poured a sound foundation. That's a you problem. How It Went Wrong, in 15 Steps: Start With Hippie Good Intentions (no, the internet was built to share scientific data); Then mix in capitalism on steroids (no, the internet was government funded); The arrival of Wall Streeters didn’t help; And we paid a high price for keeping it free; Everything was designed to be really, really addictive; At first, it worked — almost too well; No one from Silicon Valley was held accountable (it's called a conscience); Even as social networks became dangerous and toxic; And even as they invaded our privacy; Then came 2016; Employees are starting to revolt; To fix it, we’ll need a new business model; And some tough regulation; Maybe nothing will change; Unless, at the very least, some new people are in charge.
- Serverless Smart Radio (on-demand personalised audio delivery platform). Key idea is hybrid—use serverless for what it's good for and containers for what they're good for. Use containers for: dealing with large files; higher hardware resource requirements; high traffic loads; cases where having containers will be more cost efficient. They settled on a stack of S3, RDS, API Gateway, Step Functions, Lambda, ElastiCache, Kinesis, ECS Fargate, CloudFormation, and Elastic Transcoder.
- How do you capacity plan? Here's how Etsy does it: Squeeze testing is the capacity planning exercise of trying to see how much performance you can squeeze out of a service, usually by gradually increasing the amount of traffic it receives and seeing how much it can handle before exhausting its resources. In the scenario of an established cluster this is often hard to do as we can’t arbitrarily add more traffic to the site. That’s why we turned the opposite dial and removed resources (i.e. servers) from a cluster until the cluster (almost) started to not serve in an appropriate manner anymore. So for our web and api clusters this meant removing nodes from the serving pools until they dropped to about 25% idle CPU and noting the number of requests per second they were serving at this point. 20% idle CPU is a threshold on those tiers where we start to see performance decrease due to the rest of the CPU time being used for tasks like context switching and other non application workloads. That means stopping at 25% gives us headroom for some variance in this type of testing and also means we weren’t hurting actual site performance while doing the squeeze testing.
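A toy sketch of that squeeze loop with a fake metrics sampler (numbers and names invented for illustration): shrink the pool only while the smaller pool would stay above the 25% idle stop line.

```python
IDLE_CPU_STOP = 25.0  # Etsy's stop line; 20% is where performance visibly drops
TOTAL_DEMAND = 320.0  # CPU-units of steady site traffic spread across the cluster

def idle_cpu(pool):
    """Fake sampler; a real one would query your monitoring system."""
    return 100.0 - TOTAL_DEMAND / len(pool)

pool = [f"web{i}" for i in range(8)]
# Remove a node only if the shrunken pool would stay above the stop line,
# so the test never pushes the live site below ~25% idle CPU.
while len(pool) > 1 and idle_cpu(pool[:-1]) > IDLE_CPU_STOP:
    pool.pop()
print(f"{len(pool)} nodes left at {idle_cpu(pool):.0f}% idle CPU")
# -> 5 nodes left at 36% idle CPU; a 4-node pool would sit at 20%, too low
```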
- Google GKE vs Azure AKS – Automation and Reliability: The time it takes to create a new cluster, deploy an application and test that it’s up and working on GKE is on average 3 minutes 50 seconds; for an identical test on AKS the average is 17 minutes 52 seconds; you can generally expect a GKE cluster delete to happen in under 3 minutes, while the average for AKS is closer to 13 minutes; average round trip time for create and destroy on Azure is approximately 30 minutes. On Google it’s 7 minutes.
- The greatest fear of anyone relying on a remote service for local functionality: customers were unable to use the application to set or disarm their home’s internet-controlled security alarms. Yale Security Fail: 'Unexpected load' caused systems to crash, whacked our Smart Living Home app. I couldn't find anything on their architecture, but needless to say a system of that type must never disrupt user service because of load. You think I'm going to say it should be in the cloud. But I'm not. I'm going to say those services should be moved locally—into the house—as in a postcentralized architecture.
- The Alibaba Cloud is moving into Europe via London. No info on pricing or how it will be sold, but they are positioning themselves as vertical cloud experts: Ebay/Amazon-style e-commerce; payments; AWS-style business cloud workloads; and “fun”, broadly meaning horsepower for mobile gaming apps. Commenters were skeptical about trusting China with your data, but others think the Alibaba Cloud is well done and will be very useful for those doing business with China.
- Most projects will have some form of coding standards. Iceland goes one better, they have naming standards for everyone in the country. Allusionist 87. Name v. Law: Iceland has quite exacting laws about what its citizens can be named, and only around 4,000 names are on the officially approved list. If you want a name that deviates from that list, you have to send an application to the Icelandic Naming Committee, whose three members will decide whether or not you're allowed it.
- Let's be real. All code sucks. OOP, functional, logical, DNA, whatever. Code always gets complicated, messy, hard to understand, and hard to change. OOP Is Dead, Long Live OOP.
- Jepsen tortured MongoDB 3.6.4. The recommendation: Jepsen continues to recommend majority writes in all cases, and majority reads where linearizable reads are prohibitively expensive. Anything less than majority writes can lose data, and anything less than majority reads can read dirty data. MongoDB has discussed making servers reject write concerns and read levels below majority when using CC sessions, which might help. We recommended MongoDB update their documentation so users are aware of the requirements for using causal consistency, which was completed in September 2018.
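In pymongo terms, the recommendation looks roughly like this (the connection string is a placeholder; the write/read concern APIs are standard pymongo):

```python
from pymongo import MongoClient
from pymongo.read_concern import ReadConcern
from pymongo.write_concern import WriteConcern

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")  # placeholder URI
coll = client.get_database("app").get_collection(
    "accounts",
    write_concern=WriteConcern(w="majority"),  # acked by a majority: survives failover
    read_concern=ReadConcern("majority"),      # never observes rolled-back writes
)
coll.insert_one({"_id": 1, "balance": 100})
print(coll.find_one({"_id": 1}))

# Causal consistency sessions are where sub-majority concerns bite (see above):
with client.start_session(causal_consistency=True) as session:
    coll.find_one({"_id": 1}, session=session)
```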
- Guide to Serverless Technologies: The New Stack’s Guide to Serverless Technologies will help practitioners and business managers place these pros and cons into perspective by providing original research, context and insight around this quickly evolving technology.
- Alchemy: A Language and Compiler for Homomorphic Encryption Made easY: This work introduces Alchemy, a modular and extensible system that simplifies and accelerates the use of FHE. Alchemy compiles “in-the-clear” computations on plaintexts, written in a modular domain-specific language (DSL), into corresponding homomorphic computations on ciphertexts—with no special knowledge of FHE required of the programmer. The compiler automatically chooses (most of the) parameters by statically inferring ciphertext noise rates, generates keys and “key-switching hints,” schedules appropriate ciphertext “maintenance” operations, and more.
- Economics has always wanted to be a proper science. Looks like they hope machine learning will finally make that dream come true. Susan Athey: Machine-learned Economics.
- Unikernels as Processes (article): System virtualization (e.g., the virtual machine abstraction) has been established as the de facto standard form of isolation in multi-tenant clouds. More recently, unikernels have emerged as a way to reuse VM isolation while also being lightweight by eliminating the general purpose OS (e.g., Linux) from the VM. Instead, unikernels directly run the application (linked with a library OS) on the virtual hardware. In this paper, we show that unikernels do not actually require a virtual hardware abstraction, but can achieve similar levels of isolation when running as processes by leveraging existing kernel system call whitelisting mechanisms. Moreover, we show that running unikernels as processes reduces hardware requirements, enables the use of standard process debugging and management tooling, and improves the already impressive performance that unikernels exhibit.
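The "existing kernel system call whitelisting mechanisms" the abstract leans on is, on Linux, seccomp. A hedged illustration of the primitive (not the paper's system) via strict-mode seccomp, which whitelists just read, write, exit, and sigreturn; Linux x86-64 only, and run it in a throwaway process since there is no way back out:

```python
import ctypes
import os

PR_SET_SECCOMP = 22      # from <linux/prctl.h>
SECCOMP_MODE_STRICT = 1  # from <linux/seccomp.h>

libc = ctypes.CDLL(None, use_errno=True)
if libc.prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT, 0, 0, 0) != 0:
    raise OSError(ctypes.get_errno(), "prctl(PR_SET_SECCOMP) failed")

os.write(1, b"write(2) still works under strict seccomp\n")
# open(), socket(), etc. would now kill this process with SIGKILL.
libc.syscall(60, 0)  # 60 = __NR_exit on x86-64; plain exit is whitelisted
```

The paper's approach is more flexible—a filter sized to the unikernel's actual syscall footprint rather than strict mode's fixed four—but the confinement idea is the same.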
- numpywren: Serverless Linear Algebra: We present numpywren, a system for linear algebra built on a serverless architecture. We also introduce LAmbdaPACK, a domain-specific language designed to implement highly parallel linear algebra algorithms in a serverless setting. We show that, for certain linear algebra algorithms such as matrix multiply, singular value decomposition, and Cholesky decomposition, numpywren’s performance (completion time) is within 33% of ScaLAPACK, and its compute efficiency (total CPU-hours) is up to 240% better due to elasticity, while providing an easier-to-use interface and better fault tolerance. At the same time, we show that the inability of serverless runtimes to exploit locality across the cores in a machine fundamentally limits their network efficiency, which limits performance on other algorithms such as QR factorization.
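The decomposition numpywren exploits is easy to sketch locally: tile the matrices, fan the independent tile products out to stateless workers, and reduce. Below, a process pool stands in for Lambda and in-memory tiles stand in for S3 objects (none of numpywren's or LAmbdaPACK's actual API):

```python
from concurrent.futures import ProcessPoolExecutor
import numpy as np

TILE = 256  # tile edge length; numpywren's tiles live in S3, ours in memory

def tile_product(args):
    a, b = args
    return a @ b  # one stateless worker's unit of work (a Lambda, in the paper)

def blocked_matmul(A, B):
    n = A.shape[0]
    C = np.zeros((n, n))
    coords, tasks = [], []
    for i in range(0, n, TILE):
        for j in range(0, n, TILE):
            for k in range(0, n, TILE):
                coords.append((i, j))
                tasks.append((A[i:i+TILE, k:k+TILE], B[k:k+TILE, j:j+TILE]))
    with ProcessPoolExecutor() as pool:
        for (i, j), t in zip(coords, pool.map(tile_product, tasks)):
            C[i:i+TILE, j:j+TILE] += t  # reduce the partial products per tile
    return C

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((512, 512))
    B = rng.standard_normal((512, 512))
    assert np.allclose(blocked_matmul(A, B), A @ B)
```

In the serverless version every tile crosses the network to and from S3, which is exactly the locality cost the abstract flags for algorithms like QR.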