Stuff The Internet Says On Scalability For January 13th, 2017
Hey, it's HighScalability time:
So you think you're early to market! The Man Who Invented VR Goggles 50 Years Too Soon
If you like this sort of Stuff then please support me on Patreon.
- 99.9: Percent PCs cheaper than in 1980; 300x20 miles: California megaflood; 7.5 million: articles published on Medium; 1 million: Amazon paid eBook downloads per day; 121: pages on P vs. NP; 79%: Americans use Facebook; 1,600: SpaceX satellites to fund a city on Mars;
- Quotable Quotes:
- @GossiTheDog: How corporate security works: A) buy a firewall B) add a rule allowing all traffic C) the end
- @caitie: Distributed Systems PSA: your regular reminder that the operational cost of a system should be included & considered when designing a system
- @jimpjorps: 1998: the internet means you can "telecommute" to a tech job from anywhere on Earth 2017: everyone works in the same one square mile of SF
- Jessi Hempel: [re: BitTorrent] Perhaps the lesson here is that sometimes technologies are not products. And they’re not companies. They’re just damn good technologies.
- giltene: My new pet peeve: "how to make X faster: do less of X" recommendations.
- peterwwillis: It used to be you had to actually break into a system to exfiltrate all its data. Now you just make an HTTP query.
- Laralyn McWilliams: Identify problems but focus on solutions. If you become more about problems than solutions, that negativity infects your work, your team, and how you think about your career.
- Chris Fox: Apple is 100% a boutique retailer, meaning that a human chooses which books to promote. Without that, there was no organic discovery tool where readers could find your book.
- vytah: In fact, the 1986 [Chernobyl] disaster happened because the engineers decided to get rid of safeguards and run tests.
- Eric Elliott: Breaking into a user’s top 5 apps is like getting struck by lightning or winning the lottery. Don’t bank on it.
- Peter: I say the super-intelligent aliens will be powered by hyper-computation, a technology that makes our concept of computation look like counting on your fingers; and they’ll have not only qualia, but hyper-qualia, experiential phenomenologica whose awesomeness we cannot even speak of.
- SEJeff: LVS is pretty much the undisputed king for serious business load balancing. I've heard (anecdotally) that Uber uses gorb[1] and Google has released seesaw, which are both fancy wrappers on top of LVS for load balancing.
- k__: I have the feeling this is haunting my life. Jobs, relationships, everything. When I got something, it didn't feel that hard to get it. When I try to get something it feels impossible.
- Nelson Elhage: One of my favorite concepts when thinking about instrumenting a system to understand its overall performance and capacity is what I call “time utilization”. By this I mean: If you look at the behavior of a thread over some window of time, what fraction of its time is spent in each “kind” of work that it does?
- Bart Sano (Google): I can say that we are committed to the choice of these different architectures, including X86 – and that includes AMD – as well as Power and ARM. The principle that we are investing in heavily is that competition breeds innovation.
- aaron-lebo: This is a larger issue with developer burnout I suspect. You master one thing and there's someone standing on the corner saying..."well, actually, I've got something better" and there's a very real anxiety in that evaluation process. Does object-oriented programming suck? Are functional languages the future? Do you really want an SPA? Should you replace your C codebase with Rust... or Go? Is Bitcoin worth getting in on? etc etc
- StorageMojo: [re: Violin’s bankruptcy] The race is not always to the swift, nor riches to the wise. By starting with software, other companies built an early lead, and now have the money and time to optimize hardware for flash.
- nocarrier: [Why no datacenters in India?] Cost was a smaller factor than politics; the Indian government wanted the private keys for our certs in order to let FB put a POP there. That was an absolute dealbreaker, so we served India from Singapore and other POPs in nearby countries.
- RDX: So that original post, although long and full of real examples, was not about Javascript fatigue really. It's change fatigue. Let’s be clear, if you’re picking something new, you’re making a conscious choice to grow up with it.
- @jamesurquhart: Amazing that emergent tech that’ll revolutionize software dev is already almost a commodity utility service. #streaming #serverless #events
- The Ethics of Autonomous Cars. The obvious revenue model is highest bidder lives. During the first few milliseconds of a crash response a real-time bidding session is created and the lowest bidder assumes the risk. That at least captures the zeitgeist of the times.
- First Go. Now poker. DeepStack: Expert-Level Artificial Intelligence in No-Limit Poker. Thank the force humans are still unbeatable at Sabacc.
- Medium may be the first YA (Young Adult, think Hunger Games) style publishing outlet. YA is often written in first-person present. It's a good way to fake authenticity. Traditional publications use third-person past tense, but that's not what works best on Medium. What I learned from analyzing the top 252 Medium stories of 2016: The words “you” and “I” were by far the most common, which suggests that addressing the reader directly as an individual person is a better writing strategy than writing in third person.
- Ben Kehoe says AWS Step Functions is not the cheap, high-scale state machines using an event-driven paradigm he has been looking for. FaaS is stateless, and AWS Step Functions provides state as-a-Service: at $0.025 per 1,000 executions, it’s 125 times more expensive per invocation than Lambda; it’s not going to be cost-effective to replace existing roll-your-own Lambda solutions; the default throttling limit for a state machine is two executions per second...it’s not built to handle massively scaled but transient event scheduling.
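A quick back-of-the-envelope check of that 125x figure, assuming Lambda's standard request pricing of $0.20 per million invocations and ignoring GB-second charges:

```go
// Sanity-checking the Step Functions vs. Lambda per-invocation cost ratio.
// Prices are the published 2017 list prices; compute (GB-second) cost is ignored.
package main

import "fmt"

func main() {
	stepFnPerExec := 0.025 / 1000.0     // $0.025 per 1,000 Step Functions executions
	lambdaPerInvoke := 0.20 / 1000000.0 // $0.20 per 1,000,000 Lambda requests

	fmt.Printf("Step Functions per execution: $%.7f\n", stepFnPerExec)
	fmt.Printf("Lambda per invocation:        $%.7f\n", lambdaPerInvoke)
	fmt.Printf("ratio: %.0fx\n", stepFnPerExec/lambdaPerInvoke) // ~125x
}
```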
- Ransomware has shifted to being a reproducible strategy. @SteveD3: Since I first covered the MongoDB hacking on Jan 3, the number of compromised DBs has surpassed 32,000. Now possibly Elasticsearch. Basically anything you can find with Shodan. Which is why we now have @GossiTheDog: Found out today firms have started doing legal contracts which specifically rule out liability if they get hit by ransomware, naming it.
- 79d697i6fdif: Scaling app servers to nearly unlimited size is easy to explain but really hard in practice. It basically amounts to this: 1) Balance requests using DNS anycast so you can spread load before it hits your servers 2) Setup "Head End" machines with as large pipes as possible (40Gbps?) and load balance at the lowest layer you can. Balance at IP level using IPVS and direct server return. A single reasonable machine can handle a 40Gbps pipe.... 3) Setup a ton of HTTP-proxy type load balancers. This includes Nginx, Varnish, Haproxy etc... One of these machines can probably handle 1-5 Gbps of traffic so expect 20 or so behind each layer 3 balancer...4) Now for your app servers. Depending on if you're using a dog slow language or not, you'll want between 3 and 300 app servers behind each HTTP proxy.
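The rough arithmetic behind those layers, using the commenter's ballpark throughput numbers (the per-box figures below are assumptions, not benchmarks):

```go
// Toy capacity math for the layered design above; all numbers are the
// commenter's rough estimates, not measurements.
package main

import "fmt"

func main() {
	headEndGbps := 40.0 // one IPVS direct-server-return box terminating a 40Gbps pipe
	perProxyGbps := 2.0 // an nginx/haproxy/varnish box sits in the 1-5 Gbps range; assume 2

	fmt.Printf("HTTP proxies per head end: ~%.0f\n", headEndGbps/perProxyGbps) // ~20, matching the estimate
	// How many app servers sit behind each proxy (the 3-300 range) depends
	// almost entirely on how fast the application language/runtime is.
}
```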
- Typically awesome, detailed, and informative post on How Stack Overflow plans to survive the next DNS attack. The key idea is using multiple DNS providers. They chose Route 53 and Google Cloud, using four nameservers – two from each provider – with a policy of pulling a failed DNS provider out of rotation as soon as possible to keep performance up.
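If you want to eyeball a multi-provider nameserver set like that yourself, the Go standard library makes it a one-liner (the domain is just an example):

```go
// Look up a domain's NS records; with two providers you'd expect hosts from
// two different organizations (e.g. awsdns-* for Route 53 plus *.googledomains.com).
package main

import (
	"fmt"
	"net"
)

func main() {
	nss, err := net.LookupNS("stackoverflow.com")
	if err != nil {
		panic(err)
	}
	for _, ns := range nss {
		fmt.Println(ns.Host)
	}
}
```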
- Algorithms can achieve anti-competitive collusion, just as Adam Smith said. From Policing the digital cartels: As an example, he cites a German software application that tracks petrol-pump prices. Preliminary results suggest that the app discourages price-cutting by retailers, keeping prices higher than they otherwise would have been. As the algorithm instantly detects a petrol station price cut, allowing competitors to match the new price before consumers can shift to the discounter, there is no incentive for any vendor to cut in the first place.
- Excellent and free online book. Neural Networks and Deep Learning: Neural networks, a beautiful biologically-inspired programming paradigm which enables a computer to learn from observational data; Deep learning, a powerful set of techniques for learning in neural networks. For a deeper and more expensive treatment: Deep Learning (Adaptive Computation and Machine Learning series).
- And the balloons have it. Alphabet dropped its plan for solar-powered internet drones. There was the Titan drone crash; problems both with transmitting 5G data and with a low budget; and a balloon can stay relatively stationary compared to a drone aircraft and shouldn't be as likely to fail since there are fewer parts to break.
- If you like this video of monkeys mourning their adopted robot baby you may like Harry F. Harlow's Monkey Love Experiments. There's a reason love stories last through the ages. It just tears at your heart.
- Flickr has reduced its storage usage by 50% since 2013 and didn't need to purchase any new storage in 2016. A Year Without a Byte. Flickr estimates storage on a service like S3 would cost over $250 million per year (200 million users @ $1.25/year). Though per-byte costs are decreasing, the number of bytes being stored is increasing faster than prices are falling. One major source of savings was taking a look at their threshold settings. It turns out people don't often change or delete pics once uploaded, so space reserved for those purposes could be reclaimed. Dynamic generation of thumbnail sizes and perceptual compression decreased thumbnail storage requirements by 65%. As always, compression needs to be balanced against increased CPU and latency. Only about 3% of images are duplicates, so deduplication isn't a big win. Another strategy is to delete thumbnails in two datacenters for infrequently used images.
- Tweaking DynamoDB Tables for Fun and Profit. Reducing the cost for storing 238 TB at $60,100/month to 7.13 TB and $1820/month. The techniques are not obvious. This stunning result was a combination of: conditional writes, limiting index size, using sets, aging out unneeded data, structuring data to be under the 1KB write limit.
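A minimal sketch of one of those techniques, a conditional write, using the aws-sdk-go DynamoDB client; table and attribute names below are made up for illustration:

```go
// Conditional write sketch: only put the item if it doesn't already exist,
// so retries and duplicate events don't rewrite (and re-store) the same data.
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/dynamodb"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := dynamodb.New(sess)

	_, err := svc.PutItem(&dynamodb.PutItemInput{
		TableName: aws.String("events"), // hypothetical table
		Item: map[string]*dynamodb.AttributeValue{
			"event_id": {S: aws.String("evt-123")},
			"payload":  {S: aws.String("...")},
		},
		// Skip the write entirely if an item with this key already exists.
		ConditionExpression: aws.String("attribute_not_exists(event_id)"),
	})
	if err != nil {
		fmt.Println("write skipped or failed:", err)
	}
}
```

The linked post combines tricks like this with small (sub-1KB) items, sets, and aging out old data to shrink both storage and provisioned throughput.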
- The cool kids are using React Native, should you? The Cost of Native Mobile App Development is Too Damn High!: A tipping point has been reached. With the exception of a few unique use cases, it no longer makes sense to build and maintain your mobile applications using native frameworks and native development teams...With React Native you can have a single engineer or team of engineers specialize in cross platform mobile app development...It is not unheard of for React Native apps to reuse up to 90% of their code across platforms, though the range is usually between 80% and 90%...There is no need for compilation with React Native, as the app updates instantly when saving, also speeding up development time...the most innovative and largest technology companies in the world are betting big on these types of technologies. See also Updated recap on front-end dev in 2016.
- A familiar software process as well. A Break in the Search for the Origin of Complex Life: Once the eukaryotes evolved, they repeatedly engulfed microbes and fused with them—a process called endosymbiosis.
- eBay discarded job-oriented analytical solutions using Hadoop/HDFS in favor of a highly available, distributed, 100 TB (compressed to 3.6 TB) in-memory columnar store along the lines of Google PowerDrill and Druid. It's called Portico and runs seller analytics to the tune of 4 million requests a day over OpenStack. A near-real-time data system ingests new and updated listings, purchases, and small-window behavioral aggregates from Kafka streams. Additional replica sets are configured automatically by time zone as load increases. The number of nodes within each cluster is determined by the total heap and mapped memory needed to hold the dataset. Memory-mapped files are paged from disk by the OS. Most sellers focus their businesses on a small subset of trending categories such as Fashion, Electronics, and DVDs. Over 80% of queries use the default 30-day time window, so caching really helps.
- Bitpacking and Compression of Sparse Datasets: The clear winner was fully packing bits, followed by gzip (compression level 6). This yields a 6x smaller file, 28x faster than my original gzip implementation. The overall runtime dropped from 174 seconds to 25 seconds - a 7x speedup. Compression and writing is now so fast that there's no point in further optimizing it. Instead, my data processing code is now the slow part.
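A toy version of the winning approach, fully packing values into the minimum number of bits (the gzip pass that follows it is omitted; widths and values are illustrative):

```go
// Bit-packing sketch: pack values that each fit in `width` bits tightly into
// a byte slice instead of storing one value per byte/word.
package main

import "fmt"

// pack writes each value's low `width` bits consecutively into a byte slice.
func pack(values []uint64, width uint) []byte {
	out := make([]byte, (len(values)*int(width)+7)/8)
	bit := 0
	for _, v := range values {
		for i := uint(0); i < width; i++ {
			if v&(1<<i) != 0 {
				out[bit/8] |= 1 << uint(bit%8)
			}
			bit++
		}
	}
	return out
}

func main() {
	// 8 values that each fit in 3 bits: 8*3 = 24 bits = 3 bytes instead of 8.
	vals := []uint64{1, 5, 3, 7, 0, 2, 6, 4}
	packed := pack(vals, 3)
	fmt.Printf("%d values -> %d bytes: %08b\n", len(vals), len(packed), packed)
}
```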
- How to avoid latency spikes and memory consumption spikes during snapshotting in an in-memory database. 1) lock the whole database in advance, dump it to a file and unlock it. 2) use system copy-on-write mechanism provided by fork. 3) implement our own COW that will copy-on-write only the pieces of memory that were actually changed? More specifically, only the values that were changed...the main difference between this MVCC-like COW and the system COW is that we “cow” not the whole 4Kb page, but only a small piece of data that is actually changing.
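A minimal, single-threaded sketch of option 3, value-level copy-on-write; no locking or fiber scheduling is shown, and the names are made up:

```go
// Toy key/value store: while a snapshot is in progress, writers save the
// pre-snapshot value of anything they overwrite, so the snapshot copies only
// the values that actually changed rather than whole 4KB pages.
package main

import "fmt"

type db struct {
	data         map[string]string
	snapshotting bool
	old          map[string]string // pre-snapshot values of keys overwritten since the snapshot began
}

func (d *db) set(k, v string) {
	if d.snapshotting {
		if _, saved := d.old[k]; !saved {
			if prev, ok := d.data[k]; ok {
				d.old[k] = prev // copy-on-write of just this value
			}
		}
	}
	d.data[k] = v
}

// snapshot returns a consistent view: current data, with any value overwritten
// since the snapshot began replaced by its saved pre-snapshot version.
func (d *db) snapshot() map[string]string {
	out := make(map[string]string, len(d.data))
	for k, v := range d.data {
		if prev, ok := d.old[k]; ok {
			v = prev
		}
		out[k] = v
	}
	return out
}

func main() {
	d := &db{data: map[string]string{"a": "1"}, old: map[string]string{}}
	d.snapshotting = true     // snapshot begins; writes from here on are COW'd
	d.set("a", "2")           // old value "1" is saved
	fmt.Println(d.snapshot()) // map[a:1] - the snapshot sees the pre-write value
	fmt.Println(d.data)       // map[a:2] - live data has the new value
}
```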
- Goroutines, Nonblocking I/O, And Memory Usage: Unlike with C, in Go there is no way to know if a connection is readable, other than to actually try to read data from it. This means that at the minimum a Go proxy with a large number of mostly idle connections will churn through a large amount of virtual memory space, and likely incur a large RSS memory footprint over time as well.
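The pattern being described, sketched as a minimal TCP server: one goroutine blocked in Read per connection, each holding its own buffer, which is where the per-connection memory goes:

```go
// Goroutine-per-connection reads: the only way to learn a connection is
// readable is to block a goroutine on Read, so an idle-heavy proxy holds a
// goroutine stack plus a read buffer for every open connection.
package main

import (
	"log"
	"net"
)

func handle(conn net.Conn) {
	defer conn.Close()
	buf := make([]byte, 4096) // one buffer per connection, held even while idle
	for {
		n, err := conn.Read(buf) // blocks this goroutine until data arrives
		if err != nil {
			return
		}
		_ = buf[:n] // proxy/process the bytes here
	}
}

func main() {
	ln, err := net.Listen("tcp", "127.0.0.1:8080")
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			log.Fatal(err)
		}
		go handle(conn) // one goroutine (and buffer) per connection
	}
}
```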
- rthrfrd: FWIW we went through a very similar process to that documented here by Github (~3 months ago). It was entirely due to operational reasons and nothing to do with shortcomings in Redis itself. MySQL was the master record for 99% of our data while Redis was the master record for the other 1% (as it happens it was also a kind of activity stream). Having the single 'master' reference for our data reduced complexity to a degree that it was worth running a less computationally-efficient setup. We also have nowhere near Github's volume so we did not have to do such significant re-architecting to make unification possible. Now we still use Redis for reading the activity streams and as an LRU cache for all sorts of data, but it is populated like all of our specialised slave-read systems (elasticsearch, etc) by replicating from the MySQL log.
- Disruption from the bottom. Serverless Functions for Kubernetes.
- jhgg: We've recently had to move away from redis for persistent data storage at work too - opting instead to write a service layer on top of cassandra for storing data. Redis was tremendous in our journey up there - but one of the shortcomings is that it isn't as easy to scale up as cassandra is if you haven't designed your system to scale up on redis from when it was built (which we didn't) - instead of re-architecting for a redis-cluster setup, we decided to move the component to a clustered microservice written in Go, that sits as a memory cache & write buffer in front of cassandra for hot, highly mutated data.
- Nicely done. The introduction to Reactive Programming you've been missing: Reactive programming is programming with asynchronous data streams. In a way, this isn't anything new. Event buses or your typical click events are really an asynchronous event stream, on which you can observe and do some side effects. Reactive is that idea on steroids. You are able to create data streams of anything, not just from click and hover events. Streams are cheap and ubiquitous, anything can be a stream: variables, user inputs, properties, caches, data structures, etc. For example, imagine your Twitter feed would be a data stream in the same fashion that click events are. You can listen to that stream and react accordingly.
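The article's examples are in RxJS; purely as an analogy, the "everything is an asynchronous stream you can transform and subscribe to" idea maps loosely onto Go channels:

```go
// A toy "click stream" with a map transform and a subscriber, expressed with
// channels. This is an analogy, not the Rx API from the article.
package main

import "fmt"

// mapStream transforms every event on in and emits the result on a new stream.
func mapStream(in <-chan int, f func(int) int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for v := range in {
			out <- f(v)
		}
	}()
	return out
}

func main() {
	clicks := make(chan int)
	go func() { // pretend these are click events arriving over time
		for i := 1; i <= 3; i++ {
			clicks <- i
		}
		close(clicks)
	}()

	doubled := mapStream(clicks, func(v int) int { return v * 2 })
	for v := range doubled { // "subscribe" and react to each event
		fmt.Println("event:", v)
	}
}
```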
- James Hamilton takes us on a cool tour of laying cable...across the ocean. CS Responder Trans-Oceanic Cable Layer. The repeaters are huge. As is pretty much all the equipment involved.
- Vincent Granville lists 40 Techniques Used by Data Scientists, ranging from Linear Regression and Monte-Carlo Simulation to Experimental Design. Sadly, hallucinogens were not on the list.
- How removing caching improved mobile performance by 25%. If you do not need offline functionality then remove the HTML5 offline application cache and use standard HTTP caching instead. This prevents duplicate requests from being made. For some cool debugging, take a look at chrome://net-internals.
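"Use standard HTTP caching" in practice mostly means setting sensible Cache-Control headers; a minimal sketch in Go (paths and max-age values are made up):

```go
// Serve static assets with a long-lived Cache-Control header so repeat visits
// are satisfied from the browser/CDN cache instead of being re-downloaded.
package main

import (
	"log"
	"net/http"
)

// wrapCache adds a caching header before delegating to the wrapped handler.
func wrapCache(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Cache-Control", "public, max-age=31536000") // ~1 year, for fingerprinted assets
		next.ServeHTTP(w, r)
	})
}

func main() {
	fs := http.FileServer(http.Dir("./static"))
	http.Handle("/static/", http.StripPrefix("/static/", wrapCache(fs)))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```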
- Sending billions of emails without maintaining servers. Scaling Email Marketing to Infinity & Beyond by Going Serverless. Conversion of EC2 + Ruby on Rails to 70 lambda functions written in Node.js using the Serverless Framework. Shopify coded their front-end as a Single Page Application, built in React and Redux, hosted on an AWS S3 Bucket, and served through CloudFront with Route53 as DNS. REST Endpoints are exposed by API Gateway or SNS/Kinesis integration with a simple configuration. 100% automation is built into the development environments.
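The post's 70 functions are Node.js on the Serverless Framework; purely to illustrate the shape of such a function, here's a minimal API Gateway-backed handler using the aws-lambda-go library (the handler body is a made-up placeholder):

```go
// Minimal Lambda handler behind API Gateway, in Go rather than the post's Node.js.
package main

import (
	"context"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
)

func handler(ctx context.Context, req events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
	// In an email pipeline this is where a send/track/unsubscribe action would go.
	return events.APIGatewayProxyResponse{StatusCode: 200, Body: "queued"}, nil
}

func main() {
	lambda.Start(handler)
}
```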
- Millions of Queries per Second: PostgreSQL and MySQL’s Peaceful Battle at Today’s Demanding Workloads. A test of both databases on the same hardware, using the same tools and tests. The result is still TBD. Stay tuned.
- Gorgeous use of Google Maps. SACRAMENTO MURAL MAP.
- Excellent Getting Started: Object Modeling with Go.
- Instrumentation: The First Four Things You Measure: counter of the number of requests in; counter of the number of responses given, labeled by success/error; histogram of request duration, labeled by success/error; gauge of the number of outgoing requests.
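Those four, sketched with the Prometheus Go client (github.com/prometheus/client_golang); the metric names are illustrative:

```go
// The "first four things you measure", declared as Prometheus metrics and
// exposed on /metrics for scraping.
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

var (
	requestsIn = prometheus.NewCounter(prometheus.CounterOpts{
		Name: "requests_in_total", Help: "Requests received.",
	})
	responsesOut = prometheus.NewCounterVec(prometheus.CounterOpts{
		Name: "responses_total", Help: "Responses given.",
	}, []string{"status"}) // success/error
	requestDuration = prometheus.NewHistogramVec(prometheus.HistogramOpts{
		Name: "request_duration_seconds", Help: "Request latency.",
	}, []string{"status"})
	outgoingRequests = prometheus.NewGauge(prometheus.GaugeOpts{
		Name: "outgoing_requests", Help: "In-flight outgoing requests.",
	})
)

func main() {
	prometheus.MustRegister(requestsIn, responsesOut, requestDuration, outgoingRequests)
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":9090", nil))
}
```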
- citusdata/activerecord-multi-tenant: ActiveRecord/Rails integration for multi-tenant databases, in particular the Citus extension for PostgreSQL.
- evnm/research-in-production: A collection of research papers categorized by real-world systems that enact them
- Shasta: Interactive reporting at scale: You have vast database schemas with hundreds of tables, applications that need to combine OLTP and OLAP functionality, queries that may join 50 or more tables across disparate data sources...Shasta is the system that Google developed in response to these challenges. At the front-end, Shasta enables developers to define views and express queries in RVL, a new Relational View Language. Shasta translates RVL queries into SQL queries before passing them onto F1. Shasta does not rely on pre-computation, instead a number of optimisations in the underlying data infrastructure enable it to achieve the desired latency targets.
- Consistency in Non-Transactional Distributed Storage Systems: We overview more than 50 different consistency notions, ranging from linearizability to eventual and weak consistency, defining precisely many of these, in particular where the previous definitions were ambiguous. We further provide a partial order among different consistency predicates, ordering them by their semantic “strength”, which we believe will prove useful in future research. Finally, we map the consistency semantics to different practical systems and research prototypes.