Stuff The Internet Says On Scalability For March 25th, 2016

Did you know there's a field called computational aesthetics? Neither did I. It's cool though.


If you like this sort of Stuff then please consider offering your support on Patreon.

  • 51%: of billion-dollar startups founded by immigrants; 2.8 billion: Twitter metric ingestion service writes per minute; 1 billion: Urban Airship push notifications a day; 1.5 billion: Slack messages sent per month; 35 million: server nodes in the world; 10: more regions will be added to Google Cloud;  697 million: WeChat active monthly users; 

  • Quotable Quotes:
    • Dark Territory: When officials in the Air Force or the NSA neglected to let Microsoft (or Cisco, Google, Intel, or any number of other firms) know about vulnerabilities in its software, when they left a hole unplugged so they could exploit the vulnerability in a Russian, Chinese, Iranian, or some other adversary’s computer system, they also left American citizens open to the same exploitations—whether by wayward intelligence agencies or by cyber criminals, foreign spies, or terrorists who happened to learn about the unplugged hole, too. 
    • @xaprb: If you adopt a microservices architecture with 1000x more things to monitor, you should not expect your monitoring cost to stay the same.
    • The Swrve Monetization Report 2016: almost half of all the revenue generated in mobile gaming comes from just 0.19 percent of users.
    • Nassim Taleb: Now some empiricism. Consider that almost all tech companies "in the tails" were not started by "funding". Take companies you are familiar with: Microsoft, Apple, Google, Facebook. These companies started with risk-taking. Funding came in small amounts, way later.
    • @leegomes: In a big shift, Google says a go-anywhere self-driving car might not be ready for 30 years.
    • Google’s Eric Schmidt: Machine learning will be basis of ‘every huge IPO’ in five years.
    • @brendangregg: "Memory bandwidth is the number one issue we see today" Denis at Facebook
    • @ogrisel: PostgreSQL 9.6 will support parallel aggregation! TPC-H Q1 @ 100GB benchmark shows linear scaling up to 30 workers 
    • @sarah_edo: The hardest part of being a developer isn't the code, it's learning that the entire internet is put together with peanut butter and goblins.
    • @beaucronin: "Cryptocurrencies are an emergent property of the Internet – almost a fifth protocol"
    • Thomas Frey: We are moving toward an era of megaprojects. We’ll finish the Pan-American Highway with a 25-mile bridge over the Darien Gap in Panama. 
    • @samphippen: “Do you expect me to talk?” “No Mr. Bond, I expect you to be willing to relocate to san francisco"
    • @brendanbaker: Outside of the core people, who actually know what they're doing, AI is talked about like gamification was three years ago.
    • @RichRogersHDS: Did you know? "The collective noun for a group of programmers is a merge-conflict." - @omervk
    • @jbeda: This is how you know Google is serious about cloud. Real money on real facilities. 
    • Farhad Manjoo: The lesson so far in the on-demand world is that Uber is the exception, not the norm. Uber, but for Uber — and not much else.
    • @DKThomp: Airbnb woulda made a killing in 1900: One third of urban families used to make 10%+ of their income from "lodgers" 
    • @AstroKatie: "We can make 'smart drones'!" "Your chatbot became a Nazi in like a day." "OK good point."
    • @adrianco: I agree GCP are setup for next gen apps, think they are missing out on where most of the $ are being spent in the short term.
    • @EdwardTufte: Like book publishers and Silicon Valley, the further the distance from content production, the greater the money. 
    • Biz Carson: Slack grew from 80 to 385 employees in 14 months
    • Chip Overclock®: One of those things is being evidence-based. Don't guess. Test. Measure. Look and see. Ask. If you can avoid guessing, do so.

  • Impressive demo of the new smaller, less dorky looking Meta augmented reality headset. Here's a hands-on report. The development kit is $949. This will most likely be the next app-store-level opportunity, so it might be smart to get on it now. The Gold Rush phase is still in the future. The uses are obvious to anyone who reads Science Fiction. This is a TED talk, so of course there are no details on performance, etc. What are the backend infrastructure opportunities? Hopefully they'll keep all that open instead of building another walled garden.

  • Is artificial intelligence ready to rule the world? IMHO: No. You would need a large training set. The problem is we have so few good examples of ruling the world successfully. You could create an artificial world in VR with a simulated world to generate training data, but that's just another spin on the long history of Utopian thinking. We should probably learn to govern ourselves first before we pitch it over to an AI.

  • "It's better to have a media strategy than a security strategy." That's Greg Ferro commenting in an episode of Network Break on Home Depot's paltry $19.5 million fine for their massive 2014 data breach. Why pay for security when there's no downside? It's not like people stopped shopping at Home Depot. 

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Click to read more ...


What does Etsy's architecture look like today?

This is a guest post by Christophe Limpalair based on an interview (video) he did with Jon Cowie, Staff Operations Engineer and Breaksmith @ Etsy.

Etsy has been a fascinating platform to watch, and study, as they transitioned from a new platform to a stable and well-established e-commerce engine. That shift required a lot of cultural change, but the end result is striking.

In case you haven't seen it already, there's a post from 2012 that outlines their growth and shift. But what has happened since then? Are they still innovating? How are engineering decisions made, and how does this shape their engineering culture? These are questions we explored with Jon Cowie, a Staff Operations Engineer at Etsy, and the author of Customizing Chef, in a new podcast episode.

What does Etsy's architecture look like nowadays?

Click to read more ...


To Compress or Not to Compress, that was Uber's Question

Uber faced a challenge. They store a lot of trip data. A trip is represented as a 20K blob of JSON. It doesn't sound like much, but at Uber's growth rate saving several KB per trip across hundreds of millions of trips per year would save a lot of space. Even Uber cares about being efficient with disk space, as long as performance doesn't suffer. 

This highlights a key difference between linear growth and hypergrowth. Growing linearly, the storage needs would remain manageable. At hypergrowth, Uber calculated that, storing raw JSON, 32 TB of storage would last less than 3 years at 1 million trips, less than 1 year at 3 million trips, and less than 4 months at 10 million trips.
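As a back-of-the-envelope check, here's a sketch of that runway arithmetic using the article's 20 KB-per-trip figure. The trips-per-day rates below are illustrative assumptions; the post doesn't state the rates or time units behind its own runway numbers, so these won't reproduce them exactly, but they show how runway shrinks linearly with trip volume:

```python
# Rough storage-runway arithmetic. The 20 KB/trip figure comes from the
# article; the trips-per-day rates are illustrative assumptions.
TRIP_SIZE_BYTES = 20 * 1024       # ~20 KB of JSON per trip
CAPACITY_BYTES = 32 * 1024**4     # 32 TB storage budget

def runway_days(trips_per_day):
    """Days until the budget fills at a given trip rate."""
    return CAPACITY_BYTES / (TRIP_SIZE_BYTES * trips_per_day)

for rate in (1_000_000, 3_000_000, 10_000_000):
    print(f"{rate:>10,} trips/day -> {runway_days(rate):7.0f} days of runway")
```

Double the trip rate and the runway halves, which is why hypergrowth turns a comfortable storage budget into a countdown.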

Uber went about solving their problem in a very measured and methodical fashion: they tested the hell out of it. The goal of all their benchmarking was to find a solution that both yielded a small size and a short time to encode and decode.

The whole experience is described in loving detail in the article: How Uber Engineering Evaluated JSON Encoding and Compression Algorithms to Put the Squeeze on Trip Data. They came up with a matrix of 10 encoding protocols (Thrift, Protocol Buffers, Avro, MessagePack, etc.) and 3 compression libraries (Snappy, zlib, Bzip2). The target environment was Python. Uber wanted an IDL approach to define and verify their JSON protocol, so they ended up only considering IDL solutions. 

The conclusion: MessagePack with zlib.  Encoding time: 4231 ms. Decoding: 715 ms. There was a 78% reduction in size relative to the JSON zlib combination.

The result: a 1 TB disk will now last almost a year (347 days), compared to a month (30 days) without compression. Uber now has enough space to last over 30 years, compared to just under 1 year before. That's a huge win for a relatively simple change. Hopefully there's a common library handling all the messaging, so the change could be completely transparent to developers. Uber also noted that smaller packet sizes mean less data transiting through the system, which means less processing time, which means less hardware is needed. Another big win.

Something to consider: don't use JSON for messaging. The compression and decompression times are still dog slow. If you are going to adopt an IDL, which every grown-up project eventually does for reliability and security reasons, go with a binary protocol from the start. The performance savings can be dramatic. A lot of the performance drain comes from serialization/deserialization churning through memory, and that can be reduced greatly by avoiding text-based protocols like JSON. JSON is convenient, but it's hugely wasteful at scale. 
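To get a feel for the trade-off Uber was measuring, here's a minimal sketch of that benchmark shape in Python (the article's target environment). It uses only the stdlib, so it compares raw JSON against JSON+zlib rather than Uber's winning MessagePack+zlib (a third-party package); the trip record is entirely made up, so the absolute numbers won't match Uber's:

```python
import json
import time
import zlib

# Hypothetical trip-like record; Uber's real blobs are ~20 KB of nested JSON.
trip = {
    "trip_id": "abc123",
    "waypoints": [{"lat": 37.77 + i * 1e-4, "lng": -122.41, "ts": i}
                  for i in range(400)],
    "fare": {"currency": "USD", "amount": 23.45},
}

def benchmark(name, encode, decode):
    """Time one encode/decode round trip and report the encoded size."""
    start = time.perf_counter()
    blob = encode(trip)
    encode_ms = (time.perf_counter() - start) * 1000
    start = time.perf_counter()
    decode(blob)
    decode_ms = (time.perf_counter() - start) * 1000
    print(f"{name:>10}: {len(blob):6d} bytes, "
          f"encode {encode_ms:.2f} ms, decode {decode_ms:.2f} ms")
    return len(blob)

raw = benchmark("raw JSON",
                lambda o: json.dumps(o).encode(),
                lambda b: json.loads(b.decode()))
packed = benchmark("JSON+zlib",
                   lambda o: zlib.compress(json.dumps(o).encode()),
                   lambda b: json.loads(zlib.decompress(b).decode()))
print(f"size reduction: {1 - packed / raw:.0%}")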


Stuff The Internet Says On Scalability For March 18th, 2016

We come in peace. 5,000 years of battles mapped from Wikipedia. Maybe not.


If you like this sort of Stuff then please consider offering your support on Patreon.


  • 500 petabytes: data stored in Dropbox; 8.5 kB: amount of drum memory in an IBM 650; JavaScript: most popular programming language in the world (OMG); $20+ billion: Twitch in 2020; Two years: time it took to fill the Mediterranean; 

  • Quotable Quotes:
    • Dark Territory: The other bit of luck was that the Serbs had recently given their phone system a software upgrade. The Swiss company that sold them the software gave U.S. intelligence the security codes.
    • Alec Ross~ The principal political binary of the 20th century was left versus right. In the 21st century the principal political binary is open versus closed. The real tension, both inside and outside countries, is between those that embrace more open economic, political, and cultural systems and those that are more closed. Looking forward to the next 20 years, the states and societies that are more open will compete and succeed more effectively in tomorrow's industries.
    • @chrismaddern: "Population size: 1. Facebook 2. China 3. India 4. Whatsapp 5. WeChat 6. Instagram 7. USA 8. Twitter 9. Indonesia 10. Snapchat" 
    • @grayj_: Assuming you've already got reasonable engineering infrastructure in place, "add RAM or servers" is waaay cheap vs. engineering hours.
    • @qntm: "I love stateless systems." "Don't they have drawbacks?" "Don't what have drawbacks?"
    • jamwt: [Dropbox] Performance is 3-5x better at tail latencies [than S3]. Cost savings is.. dramatic. I can't be more specific there. Stability? S3 is very reliable, Magic Pocket is very reliable. I don't know if we can claim to have exceeded anything there yet, just because the project is so young, and S3s track record is long. But so far so good. Size? Exabytes of raw storage. Migration? Moving the data online was very tricky!
    • Facebook: because we have a large code base and because each request accesses a large amount of data, the workload tends to be memory-bandwidth-bound and not memory-capacity-bound.
    • fidget: My guess is that it's pretty much just BigQuery. No one else seems to be able to compete, and that's a big deal. The companies moving their analytics stacks to BQ and thus GCP probably make up the majority (in terms of revenue) of customers for GCP
    • outside1234: There is no exodus [from AWS]. There are a lot of companies moving to multi-cloud, which makes sense from a disaster recovery perspective, a negotiating perspective, and possibly from cherry picking the best parts of each platform. This is what Apple is doing. They use AWS and Azure already in large volume. This move adds the #3 vendor in cloud to mix and isn't really a surprise.
    • @mjpt777: At last AMD might be getting back in the game with a 32 core chip and 8 channels of DDR4 memory. http://www.
    • phoboslab: Can someone explain to me why traffic is still so damn expensive with every cloud provider? A while back we managed a site that would serve ~700 TB/mo and paid about $2,000 for the servers in total (SQL, Web servers and caches, including traffic). At Google's $0.08/GB pricing we would've ended up with a whooping $56,000 for the traffic alone. How's that justifiable?
    • Joe Duffy: First and foremost, you really ought to understand what order of magnitude matters for each line of code you write.
    • mekanikal_keyboard: AWS has allowed generation of developers to focus on their ideas instead of visiting a colo at 3am
    • @susie_dent: 'Broadcast' first meant the widespread scattering of seeds rather than radio signals. And the 'aftermath' was the new grass after harvest.
    • @CodeWisdom: "Simplicity & elegance are unpopular because they require hard work & discipline to achieve & education to be appreciated." - Dijkstra
    • @vgr: In fiction, a 750 page book takes 5x as long to read as 150 page book. In nonfiction, 25x. Ergo: purpose of narrative is linear scaling.
    • @danudey: and yes, for us having physical servers for 2x burst capacity is cheaper than having AWS automatically scaling to our capacity.
    • @grayj_: "How do you serve 10M unique users per month" CDN, caching, minimize dynamics, horizontal scaling...also what's really hard is peak users.
    • @cutting: "Improve BigQuery ingestion times 10x by using Avro source format"
    • jamwt: Both companies control the technology that most impacts their business. Dropbox stores quite a bit more data than Netflix. Data storage is our business. Ergo, we control storage hardware. On the other hand, Netflix pushes quite a bit more bandwidth than almost everyone, including us. Ergo, Netflix tightly manages their CDN. 
    • james_cowling: We [Dropbox] were pushing over a terabit of data transfer at peak, so 4-5PB per day.
    • Dark Territory: communications between Milosevic and his cronies, many of them civilians. Again with the assistance of the NSA, the information warriors mapped this social network, learning as much as possible about the cronies themselves, including their financial holdings. As one way to pressure Milosevic and isolate him from his power base, they drew up a plan to freeze his cronies’ assets.

  • The Epic Story of Dropbox’s Exodus From the Amazon Cloud Empire. Why? Follow the money. Dropbox operates at a scale where they can get “substantial economic value” by moving. And it's an opportunity to specialize. Dropbox is big enough now that they can go the way of Facebook and Google and build their own custom most everything. Dropbox: designed a sweeping software system that would allow Dropbox to store hundreds of petabytes of data...and store it far more efficiently than the company ever did on Amazon S3...Dropbox has truly gone all-in...created its own software for its own needs...designed its own computers...each Diskotech box holds as much as a petabyte of data, or a million gigabytes...The company was installing forty to fifty racks of hardware a day...In the middle of this two-and-a-half-year project, they switched to Rust (away from Go). Excellent discussion on HackerNews.

  • So you are saying there's a chance? AlphaGo defeats Lee Sedol 4–1. From a battling-Skynet perspective it's comforting to remember any AI has an exploitable glitch somewhere in its training. Those neural nets are very complex functions, and we know complexity by its very nature is unstable. We just need to create AIs that can explore other AIs for patches in their neural net that can be exploited. Like bomb sniffing dogs.

  • Here's a cool game that doesn't even require a computer. Do These Liquids Look Alive? Be warned, it involves a lot of tension, but that's just touching the surface of all the fun you can have.

  • 10M Concurrent Websockets: Using a stock debian-8 image and a Go server you can handle 10M concurrent connections with low throughput and moderate jitter if the connections are mostly idle...This is a 32-core machine with 208GB of memory...At the full 10M connections, the server's CPUs are only at 10% load and memory is only half used.

  • Yes, hyperscalers are different than the rest of us. Facebook's new front-end server design delivers on performance without sucking up power: Given a finite power capacity, this trend was no longer scalable, and we needed a different solution. So we redesigned our web servers to pack more than twice the compute capacity in each rack while maintaining our rack power budget. We also worked closely with Intel on a new processor to be built into this design.
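The 10M-websockets item above implies a very tight per-connection memory budget. A quick back-of-the-envelope using the specs quoted in that item (208 GB of RAM, half used at the full 10M connections) shows why mostly-idle connections are the only way this works:

```python
# Per-connection memory budget implied by the 10M-connection benchmark.
# The machine specs (208 GB RAM, half used at 10M connections) come from
# the item above; the arithmetic is ours.
total_ram_bytes = 208 * 1024**3   # 208 GB machine
fraction_used = 0.5               # "memory is only half used"
connections = 10_000_000

bytes_per_conn = total_ram_bytes * fraction_used / connections
print(f"~{bytes_per_conn / 1024:.1f} KiB per connection")  # roughly 10.9 KiB
```

About 11 KiB per connection has to cover kernel socket buffers, TLS state, and the per-connection goroutine stack, which is why benchmarks like this only work when the connections are mostly idle.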

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Click to read more ...


Jeff Dean on Large-Scale Deep Learning at Google

If you can’t understand what’s in information then it’s going to be very difficult to organize it.


This quote is from Jeff Dean, currently a Wizard, er, Fellow in Google’s Systems Infrastructure Group. It’s taken from his recent talk: Large-Scale Deep Learning for Intelligent Computer Systems.

Since AlphaGo vs Lee Se-dol, the modern version of John Henry’s fatal race against a steam hammer, has captivated the world, as has the generalized fear of an AI apocalypse, it seems like an excellent time to gloss Jeff’s talk. And if you think AlphaGo is good now, just wait until it reaches beta.

Jeff is referring, of course, to Google’s infamous motto: organize the world’s information and make it universally accessible and useful.

Historically we might associate ‘organizing’ with gathering, cleaning, storing, indexing, reporting, and searching data. All the stuff early Google mastered. With that mission accomplished Google has moved on to the next challenge.

Now organizing means understanding.

Some highlights from the talk for me:

  • Real neural networks are composed of hundreds of millions of parameters. The skill that Google has is in how to build and rapidly train these huge models on large interesting datasets, apply them to real problems, and then quickly deploy the models into production across a wide variety of different platforms (phones, sensors, clouds, etc.).

  • The reason neural networks didn’t take off in the 90s was a lack of computational power and a lack of large interesting data sets. You can see how Google’s natural love of algorithms combined with their vast infrastructure and ever enlarging datasets created a perfect storm for AI at Google.

  • A critical difference between Google and other companies is that when they started the Google Brain project in 2011, they didn’t keep their research in the ivory tower of a separate research arm of the company. The project team worked closely with other teams like Android, Gmail, and photos to actually improve those properties and solve hard problems. That’s rare and a good lesson for every company. Apply research by working with your people.

  • This idea is powerful: they’ve learned they can take a whole bunch of subsystems, some of which may be machine learned, and replace them with a much more general end-to-end machine learning piece. When you have lots of complicated subsystems, there’s usually a lot of complicated code to stitch them all together. It’s nice if you can replace all that with data and very simple algorithms.

  • Machine learning will only get better, faster. A paraphrased quote from Jeff: The machine learning community moves really, really fast. People publish a paper and within a week lots of research groups throughout the world have downloaded it, read it, dissected it, understood it, implemented some extensions to it, and published their own extensions to it on arXiv. It’s different than a lot of other parts of computer science, where people would submit a paper, six months later a conference would decide to accept it or not, and then it would come out in the conference proceedings three months later. By then it’s a year. Getting that time down from a year to a week is amazing.

  • Techniques can be combined in magical ways. The Translate Team wrote an app using computer vision that recognizes text in a viewfinder. It translates the text and then superimposes the translated text on the image itself. Another example is writing image captions. It combines image recognition with the Sequence-to-Sequence neural network. You can only imagine how all these modular components will be strung together in the future.

  • Models with impressive functionality are small enough to run on smartphones. For technology to disappear, intelligence must move to the edge. It can’t be dependent on a network umbilical cord connected to a remote cloud brain. Since TensorFlow models can run on a phone, that might just be possible.

  • If you’re not considering how to use deep neural nets to solve your data understanding problems, you almost certainly should be. This line is taken directly from the talk, but its truth is abundantly clear after you watch hard problem after hard problem made tractable using deep neural nets.

Jeff always gives great talks and this one is no exception. It’s straightforward, interesting, in-depth, and relatively easy to understand. If you are trying to get a handle on Deep Learning or just want to see what Google is up to, then it's a must see.

There’s not a lot of fluff in the talk. It’s packed. So I’m not sure how much value this article will add. If you’d rather just watch the video, I’ll understand.

As often happens with Google talks there’s this feeling you get that we’ve only been invited into the lobby of Willy Wonka’s Chocolate factory. In front of us is a locked door and we're not invited in. What’s beyond that door must be full of wonders. But even Willy Wonka’s lobby is interesting.

So let’s learn what Jeff has to say about the future…it’s fascinating...

What is Meant by Understanding?

Click to read more ...


Sponsored Post: zanox Group, Varnish, LaunchDarkly, Swrve, Netflix, Aerospike, TrueSight Pulse, Redis Labs, InMemory.Net, VividCortex, MemSQL, Scalyr, AiScaler, AppDynamics, ManageEngine, Site24x7

Who's Hiring?

  • The zanox Group are looking for a Senior Architect. We're looking for someone smart and pragmatic to help our engineering teams build fast, scalable and reliable solutions for our industry leading affiliate marketing platform. The role will involve a healthy mixture of strategic thinking and hands-on work - there are no ivory towers here! Our stack is diverse and interesting. You can apply for the role in either London or Berlin.

  • Swrve -- In November we closed a $30m funding round, and we’re now expanding our engineering team based in Dublin (Ireland). Our mobile marketing platform is powered by 8bn+ events a day, processed in real time. We’re hiring intermediate and senior backend software developers to join the existing team of thirty engineers. Sound like fun? Come join us.

  • Senior Service Reliability Engineer (SRE): Drive improvements to help reduce both time-to-detect and time-to-resolve while concurrently improving availability through service team engagement.  Ability to analyze and triage production issues on a web-scale system a plus. Find details on the position here:

  • Manager - Performance Engineering: Lead the world-class performance team in charge of both optimizing the Netflix cloud stack and developing the performance observability capabilities which 3rd party vendors fail to provide.  Expert on both systems and web-scale application stack performance optimization. Find details on the position here

  • Software Engineer (DevOps). You are one of those rare engineers who loves to tinker with distributed systems at high scale. You know how to build these from scratch, and how to take a system that has reached a scalability limit and break through that barrier to new heights. You are a hands on doer, a code doctor, who loves to get something done the right way. You love designing clean APIs, data models, code structures and system architectures, but retain the humility to learn from others who see things differently. Apply to AppDynamics

  • Software Engineer (C++). You will be responsible for building everything from proof-of-concepts and usability prototypes to deployment- quality code. You should have at least 1+ years of experience developing C++ libraries and APIs, and be comfortable with daily code submissions, delivering projects in short time frames, multi-tasking, handling interrupts, and collaborating with team members. Apply to AppDynamics

Fun and Informative Events

  • Varnish Summits are a worldwide event series where Varnish customers, partners, open source users and other enthusiasts come together to network and learn.  At the summits Varnish Software's experts and core developers do a deep dive into technical best practices and offer workshops for both new and advanced Varnish users.

  • Database Trends and Applications (DBTA) hosts a 1-hour live roundtable webcast on Thursday, April 7 at 11:00am PST / 2:00pm EST on the topic of "Leveraging Big Data with Hadoop, NoSQL and RDBMS". Presenters include Brian Bulkowski, CTO and Co-founder of Aerospike; Kevin Petrie, Sr. Director & Technology Evangelist at Attunity; and Reiner Kappenberger, Global Product Manager at Hewlett Packard Enterprise Security. Sign up here to reserve your seat!

Cool Products and Services

  • Dev teams are using LaunchDarkly’s Feature Flags as a Service to get unprecedented control over feature launches. LaunchDarkly allows you to cleanly separate code deployment from rollout. We make it super easy to enable functionality for whoever you want, whenever you want. See how it works.

  • TrueSight Pulse is SaaS IT performance monitoring with one-second resolution, visualization and alerting. Monitor on-prem, cloud, VMs and containers with custom dashboards and alert on any metric. Start your free trial with no code or credit card.

  • Turn chaotic logs and metrics into actionable data. Scalyr is a tool your entire team will love. Get visibility into your production issues without juggling multiple tools and tabs. Loved and used by teams at Codecademy, ReturnPath, and InsideSales. Learn more today or see why Scalyr is a great alternative to Splunk.

  • InMemory.Net provides a Dot Net native in memory database for analysing large amounts of data. It runs natively on .Net, and provides native .Net, COM & ODBC APIs for integration. It also has an easy to use language for importing data, and supports standard SQL for querying data. http://InMemory.Net

  • VividCortex measures your database servers’ work (queries), not just global counters. If you’re not monitoring query performance at a deep level, you’re missing opportunities to boost availability, turbocharge performance, ship better code faster, and ultimately delight more customers. VividCortex is a next-generation SaaS platform that helps you find and eliminate database performance problems at scale.

  • MemSQL provides a distributed in-memory database for high value data. It's designed to handle extreme data ingest and store the data for real-time, streaming and historical analysis using SQL. MemSQL also cost effectively supports both application and ad-hoc queries concurrently across all data. Start a free 30 day trial here:

  • aiScaler, aiProtect, aiMobile Application Delivery Controller with integrated Dynamic Site Acceleration, Denial of Service Protection and Mobile Content Management. Also available on Amazon Web Services. Free instant trial, 2 hours of FREE deployment support, no sign-up required.

  • ManageEngine Applications Manager : Monitor physical, virtual and Cloud Applications.

  • Site24x7: Monitor End User Experience from a global monitoring network.

If any of these items interest you there's a full description of each sponsor below...

Click to read more ...


Snuggling Up to Papers We Love - What's Your Favorite Paper?

From a talk by @aysylu22 at QCon London on modern computer science applied to distributed systems in practice.



There has been a renaissance in the appreciation of computer science papers as a relevant source of wisdom for building today's complex systems. If you're having a problem there's likely some obscure paper written by a researcher twenty years ago that just might help. Which isn't to say there aren't problems with papers, but there's no doubt much of the technology we take for granted today had its start in a research paper. If you want to push the edge it helps to learn from primary research that has helped define the edge.

If you would like to share your love of papers, be proud, you are not alone:

What's Your Favorite Paper? 

If you ask your average person they'll have a favorite movie, book, song, or Marvel Universe character, but it's unlikely they'll have a favorite paper. If you've made it this far that's probably not you.

My favorite paper of all time is without a doubt SEDA: An Architecture for Well-Conditioned, Scalable Internet Services. After programming real-time distributed systems for a long time I was looking to solve a complex work scheduling problem in a resource constrained embedded system. I stumbled upon this paper and it blew my mind. While I determined that the task scheduling latency of SEDA wouldn't be appropriate for my problem, the paper gave me a whole new way to look at how programs were structured, and I used those insights on many later projects.

If you have another source of papers or a favorite paper please feel free to share.

Related Articles


Stuff The Internet Says On Scalability For March 11th, 2016

The circle of life. Traffic flow through microservices at Netflix (Rob Young)


If you like this sort of Stuff then please consider offering your support on Patreon.
  • 400Gbps: DDoS attack; 50,000: frames per second Mythbusters films in HD; 3,900: pages in Paul Klee’s Personal Notebooks; 1 terabit: satellites deliver in-flight Internet access at hundreds of megabits per second; 18%: overall mobile market revenue increase; 21 TB: amount of data the BBC writes daily to S3; $300 million: Snapchat revenue; 

  • Quotable Quotes:
    • Dark Territory:  Yes, he told them, the NORAD computer was supposed to be closed, but some officers wanted to work from home on the weekend, so they’d leave a port open.
    • @davefarley77: If heartbeat was a clock cycle, retrieving data from fastest SSD is equivalent to crossing whole of London on foot  @__Abigor__ #qconlondon
    • @fiddur: "Legacy is everything you wrote before lunch." - @russmiles #qconlondon
    • @BarryNL: Persistent memory could be the biggest change to computer architecture in 50 years. #qconlondon
    • @mpaluchowski: "You can tell which services are too big. That's the ones developers don't want to work with." #qconlondon @SteveGodwin
    • @danielbryantuk: "I'm not going to say how big microservices should be, but at the BBC we have converged on about 600 lines of Java" @SteveGodwin #qconlondon
    • Steve Kerr~ What we have to get back to is simple, simple, simple. That's good enough. That leads to the spectacular. You can't try the spectacular without doing the simple first. Our guys are trying to make the spectacular plays when we just have to make the easy ones. If we don't get that cleaned up we're in big trouble.
    • Dark Territory: a disturbing thought smacked a few analysts inside NSA: Anything we’re doing to them, they can do to us.
    • @andyhedges: ~100k TPS with JDK SSL, then ~500k TPS with netty equivalent on same box. Netty fully uses the server's CPU resources too. #qconlondon
    • Paul Marks: Humanoid robots can’t outsource their brains to the cloud due to network latency
    • @manumarchal: 0.5TB generated during each flight by jet engine sensors, used for optimising fuel consumption and accelerating repair #IoT #qconlondon
    • fhe: It's both exciting and eerie [AlphaGo]. It's like another intelligent species opening up a new way of looking at the world (at least for this very specific domain). And much to our surprise, it's a new way that's more powerful than ours.
    • @jaykreps: "Part of using Google's Cloud is convincing yourself that Google will invest 5+ years in really entering the market"
    • Dean Takahashi: With just 3 games, Supercell made $924M in profits on $2.3B in revenue in 2015.
    • @anne_e_currie: Even an anti-wrinkle cream liked my tweet about containers at #qconlondon. It's good to see #container appreciation has spread so wide.
    • @KingPrawnBalls: Failure is inevitable. What matters is that u learn from it. Never fail the same way twice! #qconlondon Josh Evans, Director Ops Eng Netflix
    • Quizlet: Everyone involved unanimously picked GCP. It came down to this: we believe the core technology is better.
    • @KevlinHenney: "I have to change the word 'compassion' to 'derisking the people problem' when dealing with upper management." — @kkirk #QConLondon
    • People have said so much good stuff this week it can't all fit in the summary. Please read the whole post to see all the Quotable Quotes.

  • Strange to think of the impact movies have had on national security policy. Dark Territory: The Secret History of Cyber War. After watching the movie WarGames, Ronald Reagan asked if someone could hack the military. The answer: yes, and the problem is much worse than you think. Did anything happen? Nope. People didn't understand computers back then, so they didn't see a threat (or an opportunity in war). A stance that wouldn't change for over a decade. Admiral John "Mike" McConnell watched Sneakers and drew an NSA mission statement from a soliloquy in the movie: The world isn’t run by weapons anymore, or energy, or money. It’s run by ones and zeroes, little bits of data. It’s all just electrons. . . . There’s a war out there, old friend, a world war. And it’s not about who’s got the most bullets. It’s about who controls the information: what we see and hear, how we work, what we think. It’s all about the information. 

  • Think about this: Amazon launched S3 on March 14, 2006 and with it they started the cloud revolution. That's just ten years ago! James Hamilton in A Decade of Innovation takes a little trip down memory lane. He lists year by year the major AWS product releases and it's impressive. Contributing to this speed may be how decisions are made: Another interesting aspect of AWS is how product or engineering debates are handled. These arguments come up frequently and are as actively debated at AWS as at any company. These decisions might even be argued with more fervor and conviction at AWS but it's data that closes the debates and decisions are made remarkably quickly. At AWS instead of having a “strategy” and convincing customers that is what they really need, we deliver features we find useful ourselves and we invest quickly in services that customers adopt broadly. Good services become great services fast.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Click to read more ...


The Simple Leads to the Spectacular


Steve Kerr, head coach of the record setting Golden State Warriors (my local Bay Area NBA basketball team), has this to say about what the team needs to do to get back on track (paraphrased):

What we have to get back to is simple, simple, simple. That's good enough. The simple leads to the spectacular. You can't try the spectacular without doing the simple first. Make the simple pass. Our guys are trying to make the spectacular plays when we just have to make the easy ones. If we don't get that cleaned up we're in big trouble. 

If you play the software game, doesn't this resonate somewhere deep down in your git repository?

If you don't like basketball or despise sports metaphors this is a good place to stop reading. The idea that "The simple leads to the spectacular" is probably the best TLDR of Keep it Simple Stupid I've ever heard.

Software development is fundamentally a team sport. It usually takes a while for this lesson to pound itself into the typical lone wolf developer brain. After experiencing a stack of failed projects I know it took an embarrassingly long time for me to notice this pattern. It's one of those truths that gradually reveals itself over time...

Click to read more ...


Performance Tuning Apache Storm at Keen IO

Hi, I'm Manu Mahajan and I'm a software engineer with Keen IO's Platform team. Over the past year I've focused on improving our query performance and scalability. I wanted to share some things we've learned from this experience in a series of posts.

Today, I'll describe how we're working to guarantee consistent performance in a multi-tenant environment built on top of Apache Storm.

tl;dr we were able to make query response times significantly more consistent and improve high-percentile query duration by 6x by making incremental changes that included isolating heterogeneous workloads, making I/O operations asynchronous, and using Storm's queueing more efficiently.
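The post doesn't include code, but the asynchronous-I/O change can be sketched generically: instead of blocking the tuple-processing thread on a slow read, hand the I/O to a small thread pool and finish the work (in Storm terms, emit and ack) from a completion callback. This is a minimal, hypothetical Python illustration of the pattern, not Keen IO's actual Storm bolt code:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical blocking read -- stands in for a slow backend fetch.
def blocking_read(key):
    time.sleep(0.05)              # simulated I/O latency
    return f"value-for-{key}"

results = {}

def on_complete(key, future):
    # Runs when the I/O finishes; in a Storm bolt this is where
    # you would emit downstream and ack the tuple.
    results[key] = future.result()

pool = ThreadPoolExecutor(max_workers=8)

start = time.time()
for key in range(16):
    fut = pool.submit(blocking_read, key)
    fut.add_done_callback(lambda f, k=key: on_complete(k, f))
pool.shutdown(wait=True)
elapsed = time.time() - start

# 16 blocking reads of 50ms each would take ~0.8s serially; with 8
# workers they overlap and finish in roughly two 50ms batches.
print(f"{len(results)} reads in {elapsed:.2f}s")
```

The processing thread only submits work and returns, so a single slow read no longer stalls every tuple queued behind it.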

High Query Performance Variability

Keen IO is an analytics API that allows customers to track and send event data to us and then query it in interesting ways. We have thousands of customers with varying data volumes that can range from a handful of events a day to upwards of 500 million events per day. We also support different analysis types like counts, percentiles, select-uniques, funnels, and more, some of which are more expensive to compute than others. All of this leads to a spectrum of query response times ranging from a few milliseconds to a few minutes.
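A quick way to see how a spread like this shows up in the metrics is to compare the median with a high percentile. The numbers below are synthetic, not Keen IO's: even when only 2% of queries are expensive, the 99th percentile lands far above the median.

```python
import random

random.seed(42)

# Synthetic latencies in ms: 980 cheap queries plus 20 expensive
# analyses that take orders of magnitude longer, mimicking a workload
# spanning milliseconds to minutes.
latencies = [random.uniform(5, 50) for _ in range(980)]
latencies += [random.uniform(10_000, 60_000) for _ in range(20)]

def percentile(values, p):
    """Return the value at the p-th percentile (nearest sorted index)."""
    ordered = sorted(values)
    index = int(p / 100 * (len(ordered) - 1))
    return ordered[index]

p50 = percentile(latencies, 50)
p99 = percentile(latencies, 99)

# The handful of expensive queries drags p99 far above the median.
print(f"p50={p50:.0f}ms p99={p99:.0f}ms")
```

This is why the post tracks high-percentile query duration rather than the average: the median looks healthy while the tail is what customers actually feel.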

The software stack that processes these queries is built on top of Apache Storm (along with Cassandra and many other layers). Queries run on a shared Storm cluster and share CPU, memory, I/O, and network resources. An expensive query can easily consume physical resources and slow down simpler queries that would otherwise be quick.
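One way to read "isolating heterogeneous workloads" is to route queries to separate worker pools based on their estimated cost, so an expensive funnel can't starve a cheap count. A hedged sketch of that idea, where the cost heuristic and pool sizes are invented for illustration rather than taken from Keen IO's system:

```python
from concurrent.futures import ThreadPoolExecutor

# Invented heuristic: analysis type and event volume decide which pool
# a query runs in. A real system would use measured cost.
EXPENSIVE_ANALYSES = {"funnel", "percentile", "select_unique"}

def classify(analysis_type, event_count):
    if analysis_type in EXPENSIVE_ANALYSES or event_count > 1_000_000:
        return "heavy"
    return "light"

pools = {
    "light": ThreadPoolExecutor(max_workers=16),  # many cheap queries
    "heavy": ThreadPoolExecutor(max_workers=4),   # capped expensive ones
}

def run_query(analysis_type, event_count, fn, *args):
    """Submit a query to the pool matching its estimated cost."""
    pool = pools[classify(analysis_type, event_count)]
    return pool.submit(fn, *args)

# A cheap count and an expensive funnel land in different pools, so
# heavy work can never occupy all the worker threads at once.
f1 = run_query("count", 10_000, lambda: "count-done")
f2 = run_query("funnel", 5_000_000, lambda: "funnel-done")
print(f1.result(), f2.result())
```

Capping the heavy pool trades some throughput on expensive queries for predictable latency on the simple ones, which matches the goal of consistent performance in a multi-tenant cluster.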

If a simple query takes a long time to execute it creates a really bad experience for our customers. Many of them use our service to power real-time dashboards for their teams and customers, and nobody likes to wait for a page while it's loading.

Measuring Query Performance Variability

Click to read more ...