
Stuff The Internet Says On Scalability For March 17th, 2017

Hey, it's HighScalability time:


Can it be a coincidence trapping autonomous cars is exactly how demons are trapped on Supernatural?

If you like this sort of Stuff then please support me on Patreon.

  • billion billion: exascale operations per second; 250ms: connection time saved by zero round trip time resumption; 800 Million: tons of prey eaten by spiders; 90%: accuracy of quantum computer recognizing trees; 80 GB/s: S3 across 2800 simultaneous functions;

  • Quotable Quotes:
    • @GossiTheDog: Here's something to add to your security threat model: backups. Why steal live data and when you can drive away with exact replica?
    • @ThePublicSquare: "California produces 160% of its 1990 manufacturing, but with just 60% of the workers." -@uclaanderson economist Jerry Nickelsburg
    • @rbranson: makes total sense. I have a friend (who is VC-backed) that has stuff in Azure, GCloud, and AWS to maximize the free credits.
    • @AndrewYNg: If not for US govt funding (DARPA, NSF), US wouldn't be an AI leader today. Proposed cuts to science is big step in wrong direction.
    • @CodeWisdom: "To understand a program you must become both the machine and the program." - Alan Perlis 
    • @codemanship: What does it take to achieve Continuous Delivery? 1. Continuous testing. e.g., Google have 4.2M automated tests, run avg of 35x a day
    • @sebastianstadil: Azure Storage services are down. They really are doing everything like AWS. 😂
    • Mehta: A fundamental belief in neuroscience has been that neurons are digital devices. They either generate a spike or not. These results show that the dendrites do not behave purely like a digital device. Dendrites do generate digital, all-or-none spikes, but they also show large analogue fluctuations that are not all or none. 
    • @jasongorman: Shocked faces after I explain to a room of hipsters that a build script is basically just a batch file. Y'know? Like in the old days
    • William Woody: The problem is that our industry, unlike every other single industry except acting and modeling (and note neither are known for “intelligence”) worship at the altar of youth. I don’t know the number of people I’ve encountered who tell me that by being older, my experience is worthless since all the stuff I’ve learned has become obsolete.
    • @DavidBrin: Now even your sex toys are spying on you...
    • Counterintuitive things about testing: #6: service-oriented-architecture would be the worst thing you could possibly do.
    • industry7: Don't batch you changes together in a single branch. Each change goes in it's own feature branch, and each feature can be individually rapid fired through the pipeline. Conversely, if all your changes are in the same branch, you can't deploy them individually with docker anyway.
    • Mike Elgan: In other words, A.I. will use data on social networks to rank people based on how much they can be trusted. The worst part is that this trust-judging process happens invisibly behind the scenes. When you don't get that job or loan, you'll never know why.
    • @viktorklang: Most processors control execution by tracking completion dependencies, using the same techniques seen when programming CompletableFutures
    • @iamdevloper: Every functional programming tutorial... [picture of drawing an owl using two simple circles then showing a completely finished beautiful owl with no intermediate steps explained]
    • @PatrickMcFadin: Actual advice from an AWS Solution Architect - Don’t run active-active over multiple regions. AZs should be enough for availability. #lolwut
    • RightScale: We also compared the Google 3-year Committed Use Discount to the AWS 3-year Convertible RI. The total cost of the Google environment was 35 percent less than AWS.
    • @kelseyhightower: The container image is just a packaging concept; think of them as the price of admission to modern platforms such as Kubernetes.
    • Uber: The biggest problem we face is that most rules are effective for several weeks; then fraudsters adapt, and rules end up with more false positives.
    • David Rosenthal: Yet again the DNA enthusiasts are waving the irrelevant absolute cost decrease in reading to divert attention from the relevant lack of relative cost decrease in writing. They need an improvement in relative write cost of at least 6 orders of magnitude. To do that in a decade means halving the relative cost every year, not increasing the relative cost by 10-15% every year.
    • @jakub_zalas: Law of code reviews: feedback is inversely proportional to the size of merge request
    • David Rosenthal: There is no way to greatly improve Web archiving without significantly increased resources. Library and archive budgets have been under sustained attack for years. Neither I nor Leetaru has any idea where an extra $30-50M/yr would come from. Much less isn't going to stop the rot.
    • @whispersystems: Ubiquitous e2e encryption is pushing intelligence agencies from undetectable mass surveillance to expensive, high-risk, targeted attacks
    • @b6n: the year is 2217, we have survived global warming and the water riots. ORMs are still a shitshow.
    • Google: Then there’s our improved Free Tier. First, we’ve extended the free trial from 60 days to 12 months, allowing you to use your $300 credit across all GCP services and APIs, at your own pace and on your own schedule.
    • @JoeEmison: But once an organization is buying, none of these services are fungible enough where the price difference is more than switching costs.
    • Pascal Bestebroer: What ever it is that’s holding you back on covering all platforms, I promise you the work involved to fix that is far less than creating a new game.
    • Quantum Gravity Research: We view consciousness as both emergent and fundamental. In its fundamental form, consciousness exists inside every tetrahedron/pixel in the 3D quasicrystal in the form of something we call viewing vectors. 
    • @hichaelmart: Essentially AppEngine Flexible requires you to specify an auto-scaling group
    • Segment: Because outsourcing infrastructure is so damn easy (RDS, Redshift, S3, etc), it’s easy to fall into a cycle where the first response to any problem is to spend more money.

  • Here's how Segment saved $1 million per year on their AWS bill in three months. Their detective efforts are interesting and detailed. Lots to learn from. It probably should not be a surprise that AWS doesn't make it easy to figure out where there are opportunities to save money. Process: scrutinize every single resource in your bill line-by-line; enable AWS Detailed billing; import the raw log file into Redshift (which ironically costs money); deep analysis netted a list of the top ~15 problem areas, which totaled up to around 40% of the monthly bill. Sources:  hundreds of large EBS drives, over-provisioned cache and RDS instances; DynamoDB hot shards ($300,000 annually); Service auto-scaling ($60,000 annually); Bin-packing and consolidating instance types ($240,000 annually). It takes engineering effort to decide if these costs are necessary or if there's a way to make changes to bring down costs.  Fixes: better DynamoDB partition key selection; better auto-scaling; move to bigger instances and pack 100-200 containers per instance. Lesson: most important investment is to prevent problems from occurring in the first place.
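The line-by-line scrutiny step can be sketched in a few lines: group billing rows by product and usage type, then surface the biggest buckets. This is an illustrative sketch, not Segment's code; the column names are assumptions modeled on AWS Detailed Billing Reports, and the sample rows are invented.

```python
from collections import defaultdict

def top_cost_buckets(rows, n=15):
    """rows: iterable of dicts with 'ProductName', 'UsageType', 'UnblendedCost'."""
    totals = defaultdict(float)
    for r in rows:
        totals[(r["ProductName"], r["UsageType"])] += float(r["UnblendedCost"])
    # Largest cost buckets first -- these are the candidates worth engineering time.
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]

sample = [
    {"ProductName": "AmazonEC2", "UsageType": "EBS:VolumeUsage.gp2", "UnblendedCost": "1200.0"},
    {"ProductName": "AmazonDynamoDB", "UsageType": "WriteCapacityUnit-Hrs", "UnblendedCost": "900.0"},
    {"ProductName": "AmazonEC2", "UsageType": "BoxUsage:m4.xlarge", "UnblendedCost": "400.0"},
]
for bucket, cost in top_cost_buckets(sample):
    print(bucket, cost)
```

In practice the same grouping is done in Redshift SQL over millions of rows, but the shape of the analysis is the same: aggregate, sort, and chase the top of the list.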

  • Skynet is making great progress in getting its world-spanning communications network up and running. LoRaWAN by flying helium balloons. @SpaceX by blasting satellites into low Earth orbit.

  • A smart approach. @radugrama: We currently use AWS API Gateway and AWS Lambda. Lambda allows us to take an Express app as is and upload it as a single function, with minor changes using AWS Serverless Express or Lambda Express. From a performance perspective, it is not the ideal architecture, but it allows us to move to a different setup (like GCP Containers for example) in no time. AWS API Gateway allows us to have a basic API Gateway in front of the API and use their proxy functionality to pass all requests through to the Lambda function implemented in Express. We use Claudia.js to deploy the API. Again, what we have is not an ideal implementation from a server-less/micro-services architecture perspective, we deploy a bit of a silo and we don't necessarily like it. But, we don't get locked into a platform (Azure, AWS, or GCP), we don't have to deal with a deployment nightmare (shared code, dependencies, etc.), and we get a basic API Gateway with minimal configuration.

  • Maybe all the big brains in the universe could take a moment from improving ad targeting to extinctify spear phishing? @pwnallthethings: Aha. And that's where the nation state hackers using "cookie forgery" thing a few weeks back came from

  • GDC (Game Developers Conference) 2017 videos are now available

  • How general is the serverless model? Occupy the Cloud: Distributed Computing for the 99%: In this paper we argue that a serverless execution model with stateless functions can enable radically simpler, fundamentally elastic, and more user-friendly distributed data processing systems...Surprisingly, we find that the performance degradation from using such an approach is negligible and thus our simple primitive is in fact general enough to implement a number of higher-level data processing abstractions...if we consider the current constraints of AWS Lambda we see that each Lambda has around 35 MB/s bandwidth to S3 and can thus fill up its memory of 1.5GB in around 40s. Assuming it takes 40s to write output, we can see that the running time of 300s is appropriately proportioned for around 80s of I/O and 220s of compute. As memory capacity and network bandwidths grow, this rule can be used to automatically determine memory capacity given a target running time...we show that using stateless functions with remote storage, we can build a data processing system that inherits the elasticity, simplicity of the serverless model while providing a flexible building block for more complex abstractions.
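The paper's back-of-envelope proportioning is easy to sanity-check. Using its 2017-era Lambda figures (35 MB/s to S3, 1.5 GB memory, 300 s runtime limit, and one memory-full each of input and output), the arithmetic comes out close to the paper's rounded "~80 s of I/O and 220 s of compute":

```python
# Assumed figures, taken from the paper's 2017-era Lambda limits (not current ones):
S3_BANDWIDTH_MBPS = 35       # per-function throughput to S3, MB/s
MEMORY_MB = 1500             # Lambda memory ceiling
RUNTIME_LIMIT_S = 300        # Lambda max execution time

fill_time = MEMORY_MB / S3_BANDWIDTH_MBPS   # ~43 s to read one memory's worth
io_time = 2 * fill_time                     # read input + write output
compute_time = RUNTIME_LIMIT_S - io_time    # what's left for actual work

print(f"I/O = {io_time:.0f}s, compute = {compute_time:.0f}s")
```

As the paper notes, the same rule can run in reverse: given a target running time and measured bandwidth, it tells you how much memory a function can afford to fill.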

  • Developer Hiring Trends in 2017. According to StackOverflow, the jobs that are in low supply and high demand are: Cloud (Back End), iOS, Android.

  • And the winner is...Google Cloud. A Tale of Two Clouds: Amazon vs. Google: Google Cloud wins on pricing; AWS wins on market share and offerings; Google Cloud wins on instance configuration; Google Cloud wins on the free trial. Bottom Line: In my experience, the Google Cloud’s intuitive interface, coupled with cheaper costs, flexible compute options, pay-per-minute pricing, and preemptible instances make the Google Cloud Platform a very attractive alternative to AWS.

  • Do you have your own suggestions? George Dyson’s Selections for The Manual for Civilization. He's partial to H.G. Wells. A book I would add is Restoration Agriculture by Mark Shepard. A very practical guide to growing lots of calories.

  • When should you use what on the Google Cloud? Use Firebase with Cloud Functions for mobile applications. Use Google App Engine for full-stack web applications. Use Cloud Functions for narrow services, APIs, and to handle cloud events.

  • Time is a risk few people consider. Marc Rogers~ One of the biggest problems you are going to see in the next few years is all these fancy touch screen controls in your fridge, in your house, in the fancy office you are in, in your car, they all have a different lifecycle than the one the manufacturer was thinking they were getting. Most operating systems die between two to four years. Some last as much as five. Most refrigerators are kept for at least 10 years. Most cars are kept for 10-15 years. Houses are kept for 20+ years. Buildings are kept for 50+ years. And yet in all of these systems you are going to put in place Ubuntu or RedHat systems that will be deprecated long before the end of the life cycle. How do you patch something like that?

  • Quite a few good options. Ask HN: What are some good technology blogs to follow? The Morning Paper seems to be #1 and it's hard to disagree with that. 

  • JuergenSchmidhuber: it seems obvious to me that art and science and music are driven by the same basic principle. I think the basic motivation (objective function) of artists and scientists and comedians is data compression progress, that is, the first derivative of data compression performance on the observed history. A physicist gets intrinsic reward for creating an experiment leading to observations obeying a previously unpublished physical law that allows for better compressing the data. A composer gets intrinsic reward for creating a new but non-random, non-arbitrary melody with novel, unexpected but regular harmonies that also permit compression progress of the learning data encoder. A comedian gets intrinsic reward for inventing a novel joke with an unexpected punch line, related to the beginning of his story in an initially unexpected but quickly learnable way that also allows for better compression of the perceived data. In a social context, all of them may later get additional extrinsic rewards, e.g., through awards or ticket sales.

  • Great Conference Recap: Google Cloud Next: Three broad themes emerged from the many keynotes and 200+ sessions: service scale and maturity, usable machine learning, and enterprise-friendliness.

  • $3m vs $200. The same exponential cost curves driving the computer industry will transform warfare and national defense. This kind of inefficiency argues for disruption. Small drone 'shot with Patriot missile'. According to a US general a Patriot missile ($3m) was used to shoot down a small quadcopter drone ($200). 

  • We are starting to see a lot more Google Cloud stuff happening. New White Paper: How to Build a SQL Server Disaster Recovery Plan with Google Compute Engine.

  • Facebook showing off their recent open hardware creations. The end-to-end refresh of our server hardware fleet: Bryce Canyon is our first major storage chassis; Big Basin is the successor to our Big Sur GPU server; Tioga Pass is the successor to Leopard, which is used for a variety of compute services; Yosemite v2 uses a new 4 OU vCubby chassis design.

  • Fascinating use of technology. Microloans are taken out early in the morning for working capital and paid back at night after a day of business. @pesa_africa: Here is why most mobile micro credit loans in Kenya are taken out between 3am and 5am. @pesa_africa: The process literally sustains businesses daily. This practice contributes significantly to the informal sector of the economy

  • Apparently it is possible to build a private OpenStack-based cloud. Ivan Pepelnjak with a nice gloss: Worth Reading: Building an OpenStack Private Cloud. Here's how: i2 Private Cloud OpenStack Reference Architecture. It took 6 weeks for a proof of concept, 8 months for a pilot, 10 weeks to migrate applications to the pilot, 12-18 months to migrate the rest of the workload, and 18 months to decommission the old hardware. Big lesson is don't roll your own, use a proven prebuilt distribution.

  • Fireside Magazine uses GitHub as their CMS. About Our New Site: GitHub offers a service called GitHub Pages, which allows you to serve a website directly from the code in your GitHub repository. As an added bonus, GitHub Pages works really well with Jekyll, specifically. This allowed me to automate the last bit of complexity: telling Jekyll to build the site files whenever an editor adds or edits content, and updating the site with the new files. So, putting all this together, it turns out that with a little bit of Markdown and Git knowledge, we could make Fireside production as simple as editing some plain text files in a shared folder, and making a push to the Fireside git repository.

  • Really good discussion on Node vs .Net/C#. Raygun increases throughput by 2,000 percent (over node.js) with .NET Core. sgoody: I'm not against NodeJS, but I suspect that async C# would be as fast or faster than NodeJS/V8, not to mention that CPU bound tasks would be much much faster and there are lots of additional benefits to using C#/.Net too. CmdrKeen4: We used Node.JS in our API nodes, which is part of an auto-scaling group behind ELB's in AWS. Raygun processes billions of inbound messages every day, and the volumes are very spiky by nature. CmdrKeen4: The primary bottleneck was CPU. Additionally we seemed to lose a bunch of time to the hand off to our queuing service. It would block. Furthermore, it's a bit of a kludge (not saying it's the worst thing) to have Node take advantage of all cores. We were using said kludge, but it's not exactly designed from the ground up for multi-core.

  • The problem with algorithms is they actually believe people with a childlike naivety. Early on authors, for example, were able to game Amazon's book recommendation algorithm simply by buying two books close together. The Art Of Manipulating Algorithms: Matias’s “AI nudge” similarly prods users to simply think critically, which persuades the algorithm and, in turn, benefits other users. It’s proof that humans, collectively, can influence the way algorithms behave–with help in the form of a frequent reminder from moderators, developers, or designers. 

  • Malware writers, the CIA has some advice for you. The CIA's "Development Tradecraft DOs and DON'Ts". Note, even the CIA says encryption works:  DO use end-to-end encryption for all network communications. NEVER use networking protocols which break the end-to-end principle with respect to encryption of payloads. Rationale: Stifles network traffic analysis and avoids exposing operational/collection data.

  • HubSpot Lessons Learned from Last Week's S3 Outage: Centralize creation of S3 clients; You probably don't want the default timeouts; Use circuit-breakers and bulkheading to fail better.
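The circuit-breaker lesson is worth making concrete. Below is a minimal sketch (the class and its parameters are illustrative, not HubSpot's code): after a run of consecutive failures the breaker opens and calls fail fast for a cooldown period, instead of every request waiting out a timeout against a struggling dependency like S3.

```python
import time

class CircuitBreaker:
    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                # Open: fail immediately rather than tie up a thread on a timeout.
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None          # half-open: let one trial call through
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0                  # success resets the failure count
        return result
```

Wrap every S3 call through a shared breaker instance (one reason to centralize client creation) and pair it with tight timeouts so the failure counter actually trips during an outage.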

  • PostgreSQL is going parallel. Parallel Query v2. Next version will have: Parallel Bitmap Heap Scan; Parallel Index Scan; Gather Merge; Parallel Merge Join; Subplan-Related Improvements; Parallel CREATE INDEX; Better Parallel Hash Join; Parallel Append; Allow Parallel Query at SERIALIZABLE.

  • Dropping some serious knowledge here. Many SQL Performance Problems Stem from “Unnecessary, Mandatory Work”: While optimisers have become quite smart these days, this work is mandatory for the database. There’s no way the database can know that the client application actually didn’t need 95% of the data…So what, you think? Databases are fast? Let me offer you some insight you may not have thought of... We’re using 8x too much memory...when we write SELECT *, we create needless, mandatory work for the database, which it cannot optimise...Our joins were eliminated, because the optimiser could prove they were needless...One of most ORM’s most unfortunate problems is the fact that they make writing SELECT * queries so easy to write...Some of the worst wastes of resources is when people run COUNT(*) queries when they simply want to check for existence...The solution is always the same. The more information you give to the entity executing your command, the faster it can (in principle) execute such command. Write a better query. Every time.
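The COUNT(*)-for-existence point is easy to demonstrate with sqlite3 (the schema here is invented for illustration): the count query must visit every matching row, while EXISTS lets the engine stop at the first one.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER)")
con.executemany("INSERT INTO orders (customer_id) VALUES (?)", [(42,)] * 10_000)

# Wasteful: scans and counts every matching row just to answer yes/no.
(count,) = con.execute(
    "SELECT COUNT(*) FROM orders WHERE customer_id = 42").fetchone()

# Better: EXISTS can stop at the first matching row.
(exists,) = con.execute(
    "SELECT EXISTS(SELECT 1 FROM orders WHERE customer_id = 42)").fetchone()

print(count, exists)
```

Same answer to the question "are there any?", but the first form does 10,000 rows of mandatory work to get it.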

  • Times are a changin. Why we discontinued our Android / iOS SDK and Why JavaScript is the future of app development: Today, less than 6% of requests are from native apps...We honestly think JavaScript will take over the world of app development — be it web, mobile, or desktop with one codebase being able to run on all the platforms...We’re now at a tipping point where investing in Android and iOS SDK’s makes very little sense heading forward. We do recommend teams thinking of working on iOS and Android to consider and review React Native / Ionic before making a decision.

  • The Cloudcast~ Each public cloud will have their own differentiator. For Google it's machine learning. For AWS it's IoT. All will have the basic plumbing. 

  • The title does not lie. A comprehensive dive into WebRTC for client-server web games. A must read if this is something you've thought of doing. There's a lot going on here. Why might you want to do this? To get League of Legends in a browser. WebRTC is a browser API that enables real-time communication for peer-to-peer connections. WebRTC vs WebSockets, WebRTC has lower latency with a narrower variance. The cost is much more complexity.

  • Bayesian Ranking for Rated Items. Solving the problem: You have a catalog of items with discrete ratings (thumbs up/thumbs down, or 5-star ratings, etc.), and you want to display them in the “right” order.
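One standard approach to this problem (a sketch of the general technique, not necessarily the post's exact method) is to score thumbs-up/down items by the mean of a Beta posterior: sparsely rated items get pulled toward the prior, so a 1-for-1 item no longer outranks a 95-for-100 one.

```python
def bayesian_score(ups, downs, prior_ups=1, prior_downs=1):
    # Mean of the Beta(ups + prior_ups, downs + prior_downs) posterior.
    return (ups + prior_ups) / (ups + downs + prior_ups + prior_downs)

items = {"a": (1, 0), "b": (95, 5), "c": (0, 0)}
ranked = sorted(items, key=lambda k: bayesian_score(*items[k]), reverse=True)
print(ranked)   # "b" wins despite "a" having a perfect raw ratio
```

The prior counts act as pseudo-observations; a larger prior demands more evidence before an item can climb the list.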

  • Benchmarking Akumuli on 32-core machine: Recently I tested Akumuli on the m3.2xlarge EC2 instance. Write throughput was around 4.5 million elements/second. This number might look unrealistic at first but in fact, that’s less than 20MB/s of disk write throughput because each data point is tiny (less than five bytes in that particular case) and all data is compressed in real time.

  • Migrating our analytics stack from MongoDB to AWS Redshift: It’s difficult to tell at this point the exact cost of the switch to Redshift but we estimate it will cost around 40% less than our MongoDB stack. But for the moment the highest cost is the human cost. The migration required one to two engineers to work for several weeks.

  • Protect your database from being taken hostage. Network attacks on MySQL, Part 1: Unencrypted connections: Use SSL/TLS; Encrypt/Decrypt values; Use a SSH tunnel; Use a local TCP or UNIX domain socket when changing passwords; Don't use the MySQL protocol over the internet w/o encryption. Network attacks on MySQL, Part 2: SSL stripping with MySQL: Set REQUIRE SSL on accounts which should never use unencrypted connections; On the client use --ssl-mode=REQUIRED to force the use of SSL.

  • Running Spark SQL CERN queries 5x faster on SnappyData. Biggest improvements in Spark 2.0 were from Whole-Stage Code Generation: it fuses multiple operators together into a single Java function that is aimed at improving execution performance. It collapses a query into a single optimized function that eliminates virtual function calls and leverages CPU registers for intermediate data. A further 4-5x improvement by improving the query planner to use local joins, better partitioning, and better hash aggregation and hash join operators.

  • jostmey/NakedTensor: This is a bare bones example of TensorFlow, a machine learning package published by Google. You will not find a simpler introduction to it.

  • awslabs/aws-serverless-express: Run serverless applications and REST APIs using your existing Node.js application framework, on top of AWS Lambda and Amazon API Gateway

  • FaunaDB: a distributed, multi-tenant, multi-model database system with a powerful query language. FaunaDB allows you to store objects and query them in a relational fashion. In this tutorial, we will learn how to create blog posts, update them with additional attributes and query for specific posts.

  • IT Hare has dropped another chapter: TCP Peculiarities as Applied to Games, Part II.

  • Python abstract syntax tree: AST is a tree that represents the structure of a program. For example, using ASTs, you can write a Python program that takes another program as an input (from uber)
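A minimal taste of the program-reads-program idea, using the standard library's `ast` module: parse another piece of Python source and report every function it defines (the `fetch`/`parse` source is just an invented example).

```python
import ast

source = """
def fetch(url):
    return url

def parse(body):
    return body
"""

tree = ast.parse(source)
functions = [node.name for node in ast.walk(tree)
             if isinstance(node, ast.FunctionDef)]
print(functions)
```

The same walk-and-filter pattern underlies linters, refactoring tools, and the kind of code analysis the Uber post describes.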

  • WALNUT: Waging Doubt on the Integrity of MEMS Accelerometers with Acoustic Injection Attacks: Our work investigates how analog acoustic injection attacks can damage the digital integrity of a popular type of sensor: the capacitive MEMS accelerometer. Spoofing such sensors with intentional acoustic interference enables an out-of-spec pathway for attackers to deliver chosen digital values to microprocessors and embedded systems that blindly trust the unvalidated integrity of sensor outputs.

  • Existential Consistency: Measuring and Understanding Consistency at Facebook: Our analysis shows that 0.0004% of reads to vertices would return different results in a linearizable system. This in turn gives insight into the benefits of stronger consistency; 0.0004% of reads are potential anomalies that a linearizable system would prevent. We directly study local consistency models—i.e., those we can analyze using requests to a sample of objects—and use the relationships between models to infer bounds on the others

  • End-to-End Prediction of Buffer Overruns from Raw Source Code via Neural Memory Networks: Our experimental results using source codes demonstrate that our proposed model is capable of accurately detecting simple buffer overruns. We also present in-depth analyses on how a memory network can learn to understand the semantics in programming languages solely from raw source codes, such as tracing variables of interest, identifying numerical values, and performing their quantitative comparisons.

  • Bayesian Reasoning and Machine Learning: The book is designed to appeal to students with only a modest mathematical background in undergraduate calculus and linear algebra. No formal computer science or statistical background is required to follow the book, although a basic familiarity with probability, calculus and linear algebra would be useful.
