Stuff The Internet Says On Scalability For January 19th, 2018
Hey, it's HighScalability time:
If you like this sort of Stuff then please support me on Patreon. And I'd appreciate your recommending my new book—Explain the Cloud Like I'm 10—to anyone who needs to understand the cloud (who doesn't?). I think they'll like it. Now with twice the brightness and new chapters on Netflix and Cloud Computing.
- $268,895,000,000: Apple's cash and investments; 60%: growth in Amazon's ad revenue; ~80%: movie tickets sold in China are sold through mobile apps; £27,000: King Edward's yearly income; 3: new Google undersea cables; 7,500: Google edge caching nodes; 50,000x: microprocessor performance compared to a 1978 mini-computer at 0.25% of the cost; $15bn: spending on hosting services; 0.2 cycles per byte: ridiculously fast base64 encoding and decoding; $165B+: 2018 games software/hardware spending; 328 feet: air purification tower in China; 42 million: protein molecules in a yeast cell;
- Quotable Quotes:
- Richard Jones: For now, what we can say is that the age of exponential growth of computer power is over. It gave us an extraordinary 40 years, but in our world all exponentials come to an end, and we’re now firmly in the final stage of the s-curve. So, until the next thing comes along, welcome to the linear age of innovation.
- mikekchar: Just talking out of the hole in my head right now, but I think the main thing is that programmers generally like to program. They like to have freedom to try approaches that they think will be successful. As much as possible, it's probably best to allow your teammates, whoever they are, to have the freedom that they need. At the same time you should expect to have your own freedom. Getting the balance is not easy and sometimes you have to change jobs in order to get into a team where you are allowed the freedom to do your best. In my experience, teams that take this seriously are the best teams to work on (even if they sometimes do stupid things).
- @jfbastien: It's been 0 days since C++ silently truncated a static constexpr 64-bit integer to 32-bits. Or has it been 4294967296 days? 🤔
- Ben Bajarin: Again, to reiterate this point, third parties used to market, and spend energy talking about their integration with iOS or support of iPhone/iPad with the same rigor they are now talking about Amazon’s Alexa. This cannot be ignored.
- @pkedrosky: Quite a stat: “Amazon’s advertising revenue is growing 60% a year, according to estimates. Analysts peg it at $4.5B+ for 2018, and it's already larger than Twitter and Snapchat’s ad business.” /v @CBinsights
- Mark Callaghan: [Meltdown] tl;dr - sysbench fileio throughput for ext4 drops by more than 20% from Linux 4.8 to 4.13
- @manisha72617183: My favorite quote, when reading up on Frequentists and Bayesians: A frequentist is a person whose long-run ambition is to be wrong 5% of the time. A Bayesian is one who, vaguely expecting a horse, and catching a glimpse of a donkey, strongly believes he has seen a mule
- @jasonlk: What I learned from 5 weeks in Beijing + Shanghai: - startup creation + velocity dwarfs anything in SF - no one in China I met is remotely worried about U.S. or possibly even cares - access to capital is crazy - scale feels about 20x of SF - endless energy - not SV jaded
- @barrelshifter: C++ is like C if C was also the Winchester Mystery House
- @sprague: "The Chinese government has punished a U.S. firm for the activities of a U.S.-based employee on a U.S.-based social media platform that is blocked in China and that U.S. firm acquiesced without a fight."
- @mtnygard: AWS Serverless Application Repository... am I reading correctly that there's no way for application publishers to make money from their work? It's 100% open source and users only pay Amazon for resource usage?
- Mike Elgan: It’s not that I’m bad at taking vacations. I’m just good at choosing an office.
- Jordan Novet: Amazon lost share in the public cloud business in the fourth quarter, while Microsoft continued to gain momentum, according to research from KeyBanc analysts. Amazon Web Services had 62 percent market share in the quarter, down from 68 percent a year earlier
- Horace Dediu: [App Store] Developer payment rate is now above $25 billion/yr. I’ve been notified via Twitter that this is higher than the revenue of McDonald’s Corporation in 2016.
- Mark Callaghan: This is my first performance report for the Meltdown patch using in-memory sysbench and a small server: the worst case overhead was ~5.5%; a typical overhead was ~2%; QPS was similar between the kernel with the Meltdown fix disabled and the old kernel; the overhead with too much concurrency (8 clients) wasn't worse than the overhead without too much concurrency (1 or 2 clients).
- @dhh: Billionaire Valley VC drools over Chinese workaholism, their absence of time for fitness or seeing their kids, disinterest in debating equality. Calls Western sensibilities to such things “antiquated”. What a f*cking toad.
- @esh: I just looked and TimerCheck.io usage has basically tripled in a year. It is on a rate to serve over 6 million API hits in January 2018, for around $25. Best part? I don't have to think about it ever and AWS scales everything: API Gateway, AWS Lambda, DynamoDB.
- Tim Bray: I think Google has stopped indexing the older parts of the Web. I think I can prove it. Google’s competition is doing better.
- wildbunny: I did a bunch of analysis on proof of burn, and came to the conclusion that it cannot work in practice because it relies on transactions in order to burn, which themselves are subject to consensus.
- Saheli Roy Choudhury: A bitcoin conference has stopped taking bitcoin payments because they don't work well enough
- Dave Cheney: Containers are eating the very Linux distribution market which enabled their creation.
- Stream: The performance of Go greatly influenced our architecture in a positive way. With Python we often found ourselves delegating logic to the database layer purely for performance reasons. The high performance of Go gave us more flexibility in terms of architecture. This led to a huge simplification of our infrastructure and a dramatic improvement of latency. For instance, we saw a 10 to 1 reduction in web-server count thanks to the lower memory and CPU usage for the same number of requests.
- @DynamicWebPaige: 1.4 billion log telemetry files queried in just about five seconds, with a simple SQL syntax. Trillions of log telemetry files - 4.5 petabytes of log telemetry - on @Azure, processed daily. (Just wait 'til you hear about @cosmosdb! 😉) #RedShirtDevTour
- eBay: Our results show that document id ordering on category reduces mean latency per query by 27% and 99th percentile latency by 45%. Latency improvements are seen both with and without category constraints applied. It also reduces the index size by 3.2%.
- Ben Fathi: By far the biggest problem with Windows releases, in my humble opinion, was the length of each release. On average, a release took about three years from inception to completion but only about six to nine months of that time was spent developing “new” code. The rest of the time was spent in integration, testing, alpha and beta periods — each lasting a few months.
- Daniel Bryant: The Amazon DynamoDB team exposed the underlying DynamoDB change log as DynamoDB Streams (a Kinesis Data Stream), which provides building blocks for end-user engineers to more efficiently implement several architectural trends that had been identified.
- @mmay3r: “Machine learning is a set of clever hacks.” This exposes a common misconception: that most technology isn’t layers of clever hacks.
- @alastairgee: "At its peak size in the 1990s, the Mercury News had a newsroom staff of 440... The Mercury's newsroom currently has a staff of 39." 440 --> 39, at the largest paper in Silicon Valley. And now more cuts due at this and other local California papers.
- daxfohl: Wow, I so strongly disagree with this. I'm currently working on a system backed by DocumentDB, and had a solution for the core of our application written out in a 13-line javascript stored procedure. The edict came down from above: no stored procedures because scalability. No analysis, no real-world data (we're currently doing a few transactions per day; my understanding is that stackoverflow uses SQL server and scales just fine), just idealism. So we set off to push our stored proc logic into a lazily-consistent background job. I hated the idea initially, but the result turned me off of eventual consistency permanently.
- @MIT_CSAIL: #otd in 1986 a bunch of ARPANET coders founded the Internet Engineering Task Force to create standards for the Internet http://bit.ly/2CXTgis @IETF
- David Gerard: This [bitcoin] crash didn’t just happen — it appears to have been provoked by a pile of fake sell walls, such as a large spoof order at $12,000 that disappeared as soon as the price approached it, and another at $11,000. If this isn’t our friend “Spoofy”, it's a close relative.
- Leonid Bershidsky: If big data as used by Google and Facebook really helped manufacturers and retailers, retail sales in the countries where these companies are especially strong would have registered steep increases. Nothing of the sort happened. As Google and Facebook swelled, U.S. retail sales growth has been steady -- and below record levels.
- bluesnowmonkey: This is something I think about a lot nowadays. You have to try and fail a lot to develop an intuition for good design. How do you give people the room to experiment without paying for all the mistakes? It is tremendously expensive to produce a senior developer yet vital to do so because junior developers mostly produce crap. We could only assign them projects off the critical path, but we can't really afford to have anyone working on anything but the critical path. Plus it's a lot to ask of someone's ego to say, here work on things that don't matter for 5 or 10 years and we'll talk.
- Lars Doucet: Unfortunately, you can't just build a better mousetrap, because you'll be absolutely murdered by Steam's impenetrable network effects. Even if every aspect of your service is better than Steam's in every possible way, you're still up against the massive inertia of everybody already having huge libraries full of games on Steam. Their credit cards are registered on Steam, their friends all play on Steam, and most importantly, all the developers, and therefore all the games, are on Steam.
- Stewart Brand: With an onscreen demonstration, Eagleman showed that “Time is actively constructed by the brain.” His research has shown that there’s at least a 1/10-of-a-second lag between physical time and our subjective time, and the brain doesn’t guess ahead, it fills in behind. “Our perception of an event depends on what happens next.” In whole-body terms, we live a half-second in the past, which means that something which kills you quickly (like a sniper bullet to the head), you’ll never notice.
- mdasen: It's really hard to compare database performance, but it looks like Spanner probably is at a pretty competitive price-performance point. Google advertises that a $0.90/hour node can do 10,000 reads per second and 2,000 writes per second. Amazon wrote a blog post comparing Aurora and PostgreSQL performance and got ~38,000 TPS on Aurora (vs. ~18,000 on PostgreSQL) on a m4.16xlarge, but that would cost $9.28/hour. Given that you could make a 10-node cluster for that price with 100,000 reads/second, it seems to be reasonably price competitive. Even if you were going to run instances yourself, a 16xlarge costs $4.256/hour and 4 Spanner nodes seem like they'd be competitive with PostgreSQL (or even Aurora) on that hardware. When you're building your business, you make trade-offs. Spanner means high uptime and throughput without worrying about ops and scaling. At my company, we run our own database clusters, but we also have teams of people working on them (Spanner and RDS didn't exist when the company started). Do you want to have to worry about how you'll partition data and make sure everything stays consistent? Do you want to have to write more complex code to accommodate for less consistency? That will mean you're slower to ship features and those features will be buggier. Do you want to hit scaling limits where there isn't a larger EC2 instance to put your database on? Do you want to have to deal with growing those instances? The issue with running your own cluster isn't when things are going well, but when things start going sideways.
- Ben Fathi: The response [Windows], not surprisingly for a wildly successful platform, was to dig its heels in and keep incrementally improving the existing system — innovator’s dilemma in a nutshell. The more code we added, the more complexity we created, the larger the team got, the bigger the ecosystem, the harder it became to leapfrog the competition.
- Ed Yong: So our neurons use a viral-like gene to transmit genetic information between each other in an oddly virus-like way that, until now, we had no idea about. “Why the hell do neurons want to do this?” Shepherd says. “We don’t know.” One wild possibility is that neurons are using Arc (and its cargo) to influence each other. One cell could use Arc to deliver RNA that changes the genes that are activated in a neighboring cell. Again, “that’s very similar to what a virus does—changing the state of a cell to make its own genes,”
- reacharavindh: Our workloads are memory access bound. So the above points hit home. We're going to try AMD servers for the first time at this research group. If they do hold the promise, Intel finally got some active competition in our realm!
- Gleb Budman [Backblaze]: If ARM provides enough computing power at lower cost or lower power than x86, it would be a strong incentive for us to switch. If the fix for x86 results in a dramatically decreased level of performance, that might increasingly push in favor of switching to ARM.
- Steve Souders: The biggest bottleneck in web performance today is CPU. Compared to seven years ago, there’s 5x more JavaScript downloaded on the top 1000 websites, and 3x more CSS. Half of web activity comes from mobile devices with a smaller CPU and limited battery power.
- @dlenrow: Of late, everyone thinks everything cares about latency and needs edge compute. Cost per unit compute is cheaper at non-edge. So we get a giant sorting of latency-edge-sensitive cloud services (mobile and IoT aggregation) and those that run fine in cheap hyperscale DCs (SaaS).
- personjerry: My tl;dr understanding: Drones send video back to the operator. Video is typically compressed so that it only updates the parts of the picture that have changed. Even if the video is encrypted, the researchers are able to measure the bitrate. Thus, by making a significant change, like putting a board against a window, and seeing if the traffic increases, the researchers can determine whether the drone is looking at that window.
- @joeerl: The CRAY-1 was rated at 160 MIPS (and 5.5 tons) - The Raspberry Pi -C is 2441 MIPS and 42 gms (ie 15 x a CRAY-1 in integer compute speed, and 130,000 times lighter) - A super-duper computer. Erlang was developed on a VAX 11/750 (0.8 MIPS) (3000 times slower than a RP)
- Ed Yong: This is part of a broader trend: Scientists have in recent years discovered several ways that animals have used the properties of virus-related genes to their evolutionary advantage. Gag moves genetic information between cells, so it’s perfect as the basis of a communication system.
- OH: Systems are only as strong as their weakest surveillance system.
- dwmkerr: Netflix are great at devops. Netflix do microservices. Therefore: If I do microservices, I am great at devops.
- Jeff Jarvis: My new definition of journalism: convening communities into civil, informed, and productive conversation, reducing polarization and building trust through helping citizens find common ground in facts and understanding.
- Michael B. Kelley: 'Very high level of confidence' Russia used Kaspersky software for devastating NSA leaks
- ajb: Never underestimate the bandwidth of a virus full of RNA.
- mattklein123: at Lyft we are aggressively moving some workloads to AWS C5 instances due to the fact that IBRS appears to run substantially faster on Skylake processors and the new Nitro hypervisor delivers interrupts directly to guests using SR-IOV and APICv, removing many virtual machine exits for IO heavy workloads
- A Large Scale Study of Programming Languages: we report that language design does have a significant, but modest effect on software quality. Most notably, it does appear that strong typing is modestly better than weak typing, and among functional languages, static typing is also somewhat better than dynamic typing. We also find that functional languages are somewhat better than procedural languages. It is worth noting that these modest effects arising from language design are overwhelmingly dominated by the process factors such as project size, team size, and commit size
- @TechCrunch: Lyft says nearly 250K of its passengers ditched a personal car in 2017 by @etherington
- Elliot Forbes: The key thing to note is that legacy systems are only legacy because they’ve been successful enough to last this long.
- allansson: Making it easy to create types is a requirement to make type systems useful. So it is not that type systems are bad, it is just that all the major statically typed languages try their hardest to make you hate them.
- @JennyBryan: One of the most useful things I’ve learned from hanging out with (much) better programmers: don’t wring hands and speculate. Work a small example that reveals, confirms, or eliminates something.
- @ntjohnston: HBO goes for value vs volume: in 2017 Netflix spent $6 billion on 30 new original shows while HBO invested $2.7 billion on five shows. Result: Netflix ahead on revenue while HBO delivered more operating profit. via @WSJ
- PM_ME_UR_OBSIDIAN: Type inference is really the best of both worlds. And, if you count lines of test code, there is absolutely no way it doesn't result in a smaller code base for the equivalent product.
- Paul Baran: [The myth of the Arpanet - which still persists - is that it was developed to withstand nuclear strikes. That's wrong, isn't it?] Yes. Bob Taylor had a couple of computer terminals speaking to different machines, and his idea was to have some way of having a terminal speak to any of them and have a network. That's really the origin of the Arpanet. The method used to connect things together was an open issue for a time.
- Don Norman: Fake missile attack warning? Human error? Nonsense. It's incompetent design. One wrong click terrorizes the entire state? Why is it possible? I have a book they need to read.
- pydry: I give a sh*t about loose coupling because that's what keeps my headaches in check. I've wasted far too much of my life already tracking down the source of bugs manifested by workflows that span 7 different services across 3 different languages.
- dragontamer: The Google machine-learning paper is an indicator of how slow RAM is, rather than anything else. RAM is so slow that spending a ton of cycles doing machine learning may (in some cases) be more efficient than accessing memory and being wrong.
- SEA-Sysadmin: If you made this list of problems, and it included things like: teams that are huge and hard to plan for, problems like deployments being interdependent and blocking continuous delivery, difficulty prototyping potential new deployment mechanisms, being "stuck" with a language you can't hire for because of a legacy code base, etc. then you might find that microservices are the solution to your problem!
- nuand: The "fix" Intel pushed out this week is a microcode update that in my experience doesn't fix or address Meltdown at all. The update does however make Spectre slightly less reliable, so I'm going to assume that the microcode update has something to do with fixing, updating, or adding new controls to the branch predictor buffer.
- Alan Watts: Life is not a journey
- Andrew Hynes: So we’ve got these types that act as self-documenting proofs that functionality works, add clarity, add confidence our code works as well as runs. And, more than that, they make sense. Why didn’t we have these before? The short answer is, they’re a new concept, they’re not in every language, a large number of people don’t know they exist or that this is even possible.
- Robert Knight: These very selective studies have found that the frontal cortex is the orchestrator, linking things together for a final output. It's the glue of cognition.
- Marcel Weiher: Meltdown patch reduces mkfile(8) throughput to less than 1/3 on macOS.
- PM_ME_UR_OBSIDIAN: I would submit that statically-typed languages do reduce the single-pass rate of defects, as in the number of defects in a piece of code before you've started to test it or debug it. The key to reconciling this hypothesis with the above statistic is the idea that, while programmers may not catch the individual defects effortlessly, we develop an excellent intuition for how many defects are in a piece of code we've written. So we will iterate on a piece of code until we are confident that it is defective only to a tolerable level, which (holding everything else equal) is invariant across languages. From a business perspective, the comparative advantage of statically-typed languages is then in the number of such iterations before the code is in a shippable state. At one extreme, when I write a tightly specified piece of Coq, I expect to never have to revise it. Meanwhile, when I write JavaScript I feel like a sculptor; every strike of the chisel is an iteration. Compile, test, modify, repeat.
- CyclonusRIP: I'm on a team of 7 with close to 100 services. But they don't really talk to each other. For the most part they all just access the same database, so they all depend on all the tables looking a certain way. I keep trying to tell everyone it's crazy. I brought up that a service should really own its own data, so we shouldn't really have all these services depending on the same tables. In response one of the guys who has been there forever and created this whole mess was like, 'what so we should just have all 100 services making API calls to each other for every little thing? That'd be ridiculous.' And I'm sitting there thinking, ya that would be ridiculous, that's why you don't deploy 100 services in the first place.
- dragontamer: A few notes for those who aren't up-to-date with the latest architectures: 1. Intel machines can perform 2x 256-bit loads per clock cycle, but ONLY using AVX, AVX2, or AVX512 instructions. Two loads per clock can be sustained through the L1 and L2 cache, only slowing down at the L3 cache. Latency characteristics differ of course between L1 and L2 levels. 2. Most AVX2 instructions have superb numbers: executing in a single clock or even super-scalar execution per clock. Skylake supports 2x 256-bit comparisons per clock cycle (simultaneously with the two loads: ports 0 and 1 can do 256-bit comparisons each, while ports 2 and 3 can do loads. Intel Skylake will still have ports 4, 5, and 6 open as well to perform more tasks). So effectively, the code checks ~16 bucket locations of a Cuckoo Hash in roughly the same time as the total latency of a DDR4 RAM access. In fact, their implementation of ~36ns is damn close to the total latency of the Skylake system as a whole. The implementation is likely memory-controller bottlenecked and can't be much faster.
- lovich: In my companies where we used the cloud it just came down to internal pragmatism on our part. If we wanted to hire someone, that could take over a year between convincing management to allocate the budget for it and going through HR's hiring process. The cloud was something where the recurring monthly costs were small enough to be expensed. We were given a single IT contractor who worked at multiple sites for the company and would show up 1-2 weeks after you notified him of an issue. So we ended up using the cloud. The company has a giant liability now that everything is dependent on the cloud providers, but management ends up happy because they only see 4-5 figure charges per month and not a 6 figure cost per year they'd see for a new employee. I'm pretty sure if the company's budgets reported employees as monthly costs we could have just gotten another employee, but there's not much you can do when the decisions are made that far above your head.
- Krysta Svore: with quantum computing you’re really relying on the principles of quantum mechanics to compute. And these principles are just so vastly different than what we have classically. Things like superposition, entanglement, interference, you know, these terms – if anyone’s listened to Feynman, Richard Feynman and his lectures or studied Einstein’s notes and papers you would’ve come across these terms. So, entanglement being this amazing ability to have two systems that can be correlated over, you know, universes apart. That when you do one thing to one it automatically, instantaneously does one to another. That’s a pretty incredible property. And then things like superposition. So, when I take a quantum computer, I rely on superposition to store the information. What that means is instead of just having a bit in my classical computer, my classical computer of course is just binary. It’s on or off. It’s a switch. In a quantum computer, I like to think of it a little bit like a dimmer switch, right? Your information is actually stored in a quantum state. And that quantum state can take on both 0 and 1. It takes on basically a linear combination of 0 and 1. And this allows you to then scale up and achieve some of these amazing, exponential improvements with quantum computing.
- All algorithms are not created equal. Robots skittering over vast warehouse floors, fulfilling lifeless consumer items out of plastic bins, is very different than making a store for humans feel like the very incarnation of mother nature's bounty. Amazon appears to be good at one of them. Can Amazon learn? Or is creating an eden for grocery shoppers simply outside Amazon's corporate DNA? 'Entire aisles are empty': Whole Foods employees reveal why stores are facing a crisis of food shortages: Order-to-shelf, or OTS, is a tightly controlled system designed to streamline and track product purchases, displays, storage, and sales. Under OTS, employees largely bypass stock rooms and carry products directly from delivery trucks to store shelves. It is meant to help Whole Foods cut costs, better manage inventory, reduce waste, and clear out storage. But its strict procedures are leading to storewide stocking issues, according to several employees. Angry responses from customers are crushing morale, they say.
- Yes, it's from a competing service, but some points are worth considering. The hidden costs of serverless. The most interesting point is that Lambda functions came in at just under 2% of his total AWS cost. You have to look at total costs, including the bevy of support services like Route 53, SNS, API Gateway, Storage, Network, CloudTrail, etc. It all adds up. @swardley: The quote "the move to Serverless is more or less inevitable" is spot on. There's always a transition of "what does X really cost". Many used this to argue extensively in 2007 why on premise would be more efficient than EC2
- We can only hope. The Death of Microservice Madness in 2018: There are many cases where great efforts have been made to adopt microservice patterns without necessarily understanding how the costs and benefits will apply to the specifics of the problem at hand.
- Is the Oracle Cloud for real? Yes, but it's not AWS. Here's a good human level explanation: The Cloudcast #330 - Oracle’s Next-Generation Cloud IaaS. The Oracle Cloud targets enterprise lift and shifters rather than green field projects. That's why they went with bare-metal from the start, not for speed reasons, but so customers could install all their legacy customizations while still getting that minty managed-by-someone-else cloud feeling. They include L2 support so features using gratuitous ARP, for example, will still work and would not need to be rewritten. They make it so you don't have to fail over to a new host when a disk or memory goes bad; it can be fixed in place. The goal is to enable moving any workload to the cloud by making these kinds of legacy-preserving decisions at every layer of the stack, while at the same time enabling customers to use cloud native features for new services. The team started with 5 people 3.5 years ago; now there are over 1,500 people in downtown Seattle. Their network lets you get line rate between any two hosts in the datacenter without rearchitecting your application. A custom network links all three availability zones within a region. The goal is less than 1 msec latency between AZs. Regions are connected by a global backbone. Each region has a lot of internet transit available. All network traffic is encrypted. The network is virtualized so you can do all sorts of fancy networking things.
- The machine hasn't won yet. Don't Throw Out Your Algorithms Book Just Yet: Classical Data Structures That Can Outperform Learned Indexes: Does all this mean that learned indexes are a bad idea? Not at all: the paper makes a great observation that, when cycles are cheap relative to memory accesses, compute-intensive function approximations can be beneficial for lookups, and ML models may be better at approximating some functions than existing data structures. The idea of self-tuning data structures is also exciting. Our main message, though, is that there are many other ideas that tackle the indexing problem in creative ways, and we should make sure to take advantage of these insights, too. In the case of hashing, cuckoo hash tables are asymptotically better due to a deep algorithmic insight, the power of two choices, that greatly improves load balance. In other domains, ideas such as adaptive indexing, database cracking, perfect hashing, data-dependent hashing, function approximation and sketches also provide powerful tools to system designers, e.g., by adapting to the query distribution in addition to the data distribution. It will be exciting to compare and combine these ideas with tools from machine learning and the latest hardware.
- Brendan Gregg on How To Measure the Working Set Size on Linux: I've seen other ways to estimate the working set size, although each has its own caveats. What I haven't seen is a ready-baked tool for doing WSS estimation. This is what motivated me to write my wss tools, based on Linux referenced and idle page flags, although each tool has its own caveats described in this post (though not as bad as other approaches).
- As you can tell from performance charts, the new baseline for system performance has been significantly altered by the mitigation patches for Meltdown. Visualizing Meltdown on AWS: when we rebooted our PV instances on December 20, ahead of the maintenance date, we saw CPU jumps of roughly 25%...The patch rollout impacted pretty much every tier in our platform, including our EC2 infrastructure and AWS managed services (RDS, Elasticache, VPN Gateway)...On the same Kafka cluster as above, we saw the packet rate drop up to 40% when the patches were deployed...The Cassandra tiers that we use for TSDB storage were also impacted across the board. We saw CPU spikes of roughly 25% CPU on m4.2xlarge instances and similar spikes on other instance types...One internal tier, which interacts with Cassandra, saw a 45% jump in p99 latency committing records to Cassandra...We also detected spikes in latency on AWS managed services, like AWS Elasticache Memcached. This snapshot shows an 8% bump in CPU on a given memcached, but that was almost a 100% increase...
- As someone who has done a lot of embedded programming, the points about never wanting to change code and the impossibility of concurrency are way overblown, but Rust does sound interesting. Why Rust is the future of robotics: If [Rust] compiles it is safe. Let me repeat that. If it compiles it is safe...Memory safety — Rust does not allow null pointers or dangling pointers...Painless concurrency — With concepts like borrowing, Rust can track if there is a risk of data race and simply won’t compile...Zero-cost abstraction...A modern syntax...Precise error messages...Painless packaging and dependency management...Rust brings confidence back, in your code, but also in the code of others you might want to reuse.
- Stream & Go: News Feeds for Over 300 Million End Users: After years of optimizing our existing feed technology we decided to make a larger leap with 2.0 of Stream. While the first iteration of Stream was powered by Python and Cassandra, for Stream 2.0 of our infrastructure we switched to Go. The main reason why we switched from Python to Go is performance. Certain features of Stream such as aggregation, ranking and serialization were very difficult to speed up using Python. We’ve been using Go since March 2017 and it’s been a great experience so far. Go has greatly increased the productivity of our development team. Not only has it improved the speed at which we develop, it’s also 30x faster for many components of Stream.
- Scaling Kubernetes to 2,500 Nodes. It took a lot of debugging, fixing, and tuning—as these things usually do—but it can be done. Next stop? 5000.
- Good example of using Step Functions in real life. Revitalize Gilt City's Order Processing with Serverless Architecture: It is never easy to rewrite (or replace) a mission critical system. In our case, we have to keep the existing monolithic Ruby on Rails app running while spinning up a new pipeline. We took the strangler pattern (see this Martin Fowler article for an explanation) and built a new API layer for processing individual orders around the existing batch-processing, job-based system in the same Rails app. With this approach, the legacy job-based system gradually receives less traffic and becomes a fallback safety net to catch and retry failed orders from the instant processing pipeline...The new instant order pipeline starts with the checkout system publishing a notification to an SNS topic whenever it creates an order object. An order notification contains the order ID to allow event subscribers to look up the order object in the order key-value store. An AWS Lambda application order-notification-dispatcher subscribes to this SNS topic and kicks off the processing by invoking an AWS Step Functions resource. See below a simplified architecture diagram of the order processing system. The architecture leverages Lambda and Step Functions from the AWS Serverless suite to build several key components. At HBC, different teams have started embracing a serverless paradigm to build production applications...AWS Lambda’s versioning feature provides the ability to make Lambda functions immutable by taking a snapshot of the function (aka publishing a version)...We make the order-notification-dispatcher query our a/b test engine to have simple routing logic for each order notification, so that it can shift traffic to either the blue or green Step Function stack according to the test/control group the order falls into...From our development experience using AWS Step Functions we discovered some limitations of this service.
First of all, it lacks a feature like a Map state, which would take a list of input objects and transform it into another list of result objects.
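The blue/green routing idea can be sketched as a hypothetical dispatcher that deterministically buckets orders by hashing the order ID. The ARNs, function names, and 10% test fraction below are all illustrative assumptions, not details from the article:

```python
import hashlib

# Hypothetical ARNs for the blue and green Step Functions stacks.
BLUE_ARN = "arn:aws:states:us-east-1:123456789012:stateMachine:orders-blue"
GREEN_ARN = "arn:aws:states:us-east-1:123456789012:stateMachine:orders-green"

def assign_group(order_id: str, test_fraction: float = 0.1) -> str:
    """Deterministically bucket an order into the test or control group."""
    digest = hashlib.sha256(order_id.encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    return "test" if bucket < test_fraction else "control"

def target_state_machine(order_id: str) -> str:
    """Route test-group orders to the green stack, control to blue."""
    return GREEN_ARN if assign_group(order_id) == "test" else BLUE_ARN
```

In a real dispatcher the chosen ARN would be passed to Step Functions' StartExecution call; hashing rather than random assignment keeps an order in the same group across retries.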
- How has Linux become as good as any proprietary networking stack? Software Gone Wild with an excellent deep dive into Packet Forwarding on Linux. Also, VRF for Linux — a contribution to the Linux Kernel.
- Scale Your Web Application — One Step at a Time: Step 1: Ease server load; Step 2: Reduce read load by adding more read replicas; Step 3: Reduce write request; Step 4: Introduce a more robust caching engine; Step 5: Scale your server.
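Step 4's "more robust caching engine" is essentially a read-through cache in front of the database. A minimal Python sketch, with a plain dict standing in for Redis or memcached:

```python
import time

class ReadThroughCache:
    """Minimal read-through cache with a TTL; a dict stands in for Redis/memcached."""

    def __init__(self, loader, ttl_seconds=60):
        self.loader = loader     # falls through to the database on a miss
        self.ttl = ttl_seconds
        self.store = {}          # key -> (value, expiry timestamp)
        self.misses = 0

    def get(self, key):
        entry = self.store.get(key)
        if entry and entry[1] > time.monotonic():
            return entry[0]      # cache hit: no database load
        self.misses += 1
        value = self.loader(key)                              # read through
        self.store[key] = (value, time.monotonic() + self.ttl)
        return value

# Repeated reads of a hot key hit the backing store only once per TTL window.
cache = ReadThroughCache(loader=lambda k: f"row-for-{k}")
cache.get("user:42")
cache.get("user:42")
print(cache.misses)  # 1
```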
- Nicely done. Meltdown and Spectre, explained: It is very rare that a research result fundamentally changes how computers are built and run. Meltdown and Spectre have done just that. These findings will alter hardware and software design substantially over the next 7–10 years (the next CPU hardware cycle) as designers take into account the new reality of the possibilities of data leakage via cache side-channels. In the meantime, the Meltdown and Spectre findings and associated mitigations will have substantial implications for computer users for years to come. In the near-term, the mitigations will have a performance impact that may be substantial depending on the workload and specific hardware.
- Bill Gates thinks these six innovations could change the world: Better Vaccine Storage; Gene Editing; Solar Fuel; mRNA Vaccines; Improved Drug Delivery; Artificial Intelligence.
- There’s a general conception that EC2 is faster, cheaper, and easier than hosting your own hardware. Scaling SQLite to 4M QPS on a Single Server (EC2 vs Bare Metal): Maybe I’m old school, but I’ve never quite subscribed to that notion. The best price you can possibly get on an EC2 server is to prepay for a year with a 3 year commitment, but the price you still pay on day one is equal to the cost of the hardware...At the time of this writing, the largest EC2 instance you can possibly buy is the x1e.32xlarge, with 128 “vCPUs” and 4TB of RAM. It costs $26.688 per hour, meaning $233K/yr — for a single server. (If you commit to 3 years and prepay for 1, you can get it for “only” $350K for 3 years.) Here’s how that server compares to what you can host yourself at a fraction of both the up front and ongoing cost:...The orange line shows total aggregate performance of the EC2 box capping out around 1.5M queries per second. The blue line shows the same test on a “bare metal” machine, which gets upwards of 4M queries per second and keeps on climbing for the duration of the test...Not all EC2 virtual CPUs are the same, and none of them are remotely as powerful as an actual CPU you host yourself...It turns out a major hidden advantage of hosting your own hardware is it means you can configure the BIOS to suit your own needs, and in our particular case those changes were everything.
- Yes, it's self-serving, but if you're deciding between eventually consistent and transactional, it's a good article to inform your thinking. Why you should pick strong consistency, whenever possible. dgacmu: I don't believe it's religion - it's experience. Google, like Amazon, initially started building scale-out systems by sacrificing strong consistency and transactions. Amazon did it with Dynamo, and Google did it with BigTable. Over time - and with very substantial engineering investment - Google has started walking that back, first with Megastore, and now with Spanner. What we're seeing happen is a reinvention of a lot of classical transactional systems from a "massive scale-first" perspective instead of a "local transactions on spinning disk fast" perspective. The eventually and causally-consistent systems have something to add, but I don't think it's wise to discount Google's years of engineering experience in this as religion. Rather, it reflects operating at a scale, in number of engineers, at which it has become worthwhile to invest dedicated engineering and research effort into supporting a transactional model, at scale, so that thousands of other engineers can be more productive. Another way to summarize it is: At some point, you're going to have to fix problems at the application and algorithm level, and not just hope that the underlying storage system makes everything magic. It's easier when those problems are performance problems than when they're correctness/consistency problems.
- Stateless Service Gotchas: resource handles; session data; ordering; serial identity; write conflicts; Authentication and Authorization.
- It's all about latency variance. I’m afraid you’re thinking about AWS Lambda cold starts all wrong: Cold start happens once for each concurrent execution of your function...What if the user requests came in droves instead?...All of a sudden, things don’t look quite as rosy — the first 10 requests were all cold starts! This could spell trouble if your traffic pattern is highly bursty around specific times of the day or specific events...These are the most crucial periods for your business, and precisely when you want your service to be at its best behaviour...you could consider reducing the impact of cold starts by reducing the length of cold starts...authoring your Lambda functions in a language that doesn’t incur a high cold start time — i.e. Node.js, Python, or Go...choose a higher memory setting for functions on the critical path of handling user requests...optimizing your function’s dependencies, and package size...stay as far away from VPCs as you possibly can...For [seldom used] APIs, you can have a cron job that runs every 5–10 mins and pings the API...It’s important not to let our own preference blind us from what’s important, which is to keep our users happy and build a product that they would want to keep on using.
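The "one cold start per concurrent execution" point can be made concrete with a toy model (my own simplification, not AWS's actual scheduling): each request grabs an idle warm container if one exists, otherwise it incurs a cold start.

```python
import heapq

def count_cold_starts(arrival_times, exec_seconds):
    """Toy model of Lambda scaling: a request reuses an idle warm container
    when one is available, otherwise it triggers a cold start."""
    busy_until = []   # min-heap of times at which running containers free up
    idle = 0          # containers that have finished and are sitting warm
    cold = 0
    for t in sorted(arrival_times):
        while busy_until and busy_until[0] <= t:
            heapq.heappop(busy_until)
            idle += 1
        if idle:
            idle -= 1             # reuse a warm container
        else:
            cold += 1             # no warm container free: cold start
        heapq.heappush(busy_until, t + exec_seconds)
    return cold

# A steady trickle: one cold start serves every request.
print(count_cold_starts([0, 10, 20, 30], exec_seconds=1))   # 1
# A burst of 10 simultaneous requests: 10 concurrent executions, 10 cold starts.
print(count_cold_starts([0] * 10, exec_seconds=1))          # 10
```

This is why a cron-driven warming ping helps a seldom-used API but does little for bursty traffic: it keeps one container warm, not ten.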
- Great overview of developing on AWS using their Cloud IDE. Welcome to Cloud-Native Development with AWS Cloud9 & AWS CodeStar. Of all the lock-in fears we've ever talked about, this is real lock-in.
- FernandoMiguel/kb: Setting up user accounts can be cumbersome. With the multitude of services available on AWS and the ease of creating new AWS accounts, managing all those user accounts can be a lot of overhead.
- vmware/dispatch: is a framework for deploying and managing serverless style applications. The intent is a framework that enables developers to build applications defined by functions that handle business logic, with services providing all other functionality.
- twitchtv/twirp: a framework for service-to-service communication emphasizing simplicity and minimalism. It generates routing and serialization from API definition files and lets you focus on your application's logic instead of thinking about folderol like HTTP methods and paths and JSON.
- Murat on The Lambda and the Kappa Architectures: Lambda, from Nathan Marz, is the multitool solution. There is a batch computing layer, and on top there is a fast serving layer. The batch layer provides the "stale" truth, in contrast, the realtime results are fast, but approximate and transient...Kappa, from Jay Kreps, is the "one tool fits all" solution. The Kafka log streaming platform considers everything as a stream. Batch processing is simply streaming through historic data. Table is merely the cache of the latest value of each key in the log and the log is a record of each update to the table. Kafka streams adds the table abstraction as a first-class citizen, implemented as compacted topics...Right now, integration is a bigger pain point, so the pendulum is now on the one-tool solution side...Later, when efficiency becomes a bigger pain point, the pendulum will swing back to the multi-tool solution, again...The pendulum will keep swinging back and forth because there cannot be a best of both worlds solution.
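The "table is merely the cache of the latest value of each key in the log" duality can be shown in a few lines of Python, a miniature of Kafka's log compaction:

```python
def compact(log):
    """Derive the table view of a log: keep only the latest value per key
    (the compacted-topic idea in miniature)."""
    table = {}
    for key, value in log:     # replaying the log in order...
        table[key] = value     # ...later updates overwrite earlier ones
    return table

log = [("user:1", "alice"), ("user:2", "bob"), ("user:1", "alicia")]
print(compact(log))  # {'user:1': 'alicia', 'user:2': 'bob'}
```

Replaying the full log always reproduces the table, which is exactly why Kappa can treat batch processing as just streaming through historic data.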
- Mechanical Computing Systems Using Only Links and Rotary Joints: A new paradigm for mechanical computing is demonstrated that requires only two basic parts, links and rotary joints. These basic parts are combined into two main higher level structures, locks and balances, and suffice to create all necessary combinatorial and sequential logic required for a Turing-complete computational system. While working systems have yet to be implemented using this new paradigm, the mechanical simplicity of the systems described may lend themselves better to, e.g., microfabrication, than previous mechanical computing designs. Additionally, simulations indicate that if molecular-scale implementations could be realized, they would be far more energy-efficient than conventional electronic computers.
- RAND Classics. Nothing after 2007. A lot of early networking papers by the great Paul Baran.
- A Scalable Distributed Spatial Index for the Internet-of-Things: In this paper, we propose Sift, a distributed spatial index and its implementation. Unlike systems that depend on load balancing mechanisms that kick-in post ingestion, Sift tries to distribute the incoming data along the distributed structure at indexing time and thus incurs minimal rebalancing overhead. Sift depends only on an underlying key-value store, hence is implementable in many existing big data stores. Our evaluations of Sift on a popular open source data store show promising results—Sift achieves up to 8× reduction in indexing overhead while simultaneously reducing the query latency and index size by over 2× and 3× respectively, in a distributed environment compared to the state-of-the-art.
- Rules of Machine Learning: Best Practices for ML Engineering: This document is intended to help those with a basic knowledge of machine learning get the benefit of best practices in machine learning from around Google. It presents a style for machine learning, similar to the Google C++ Style Guide and other popular guides to practical programming. If you have taken a class in machine learning, or built or worked on a machine-learned model, then you have the necessary background to read this document.