Stuff The Internet Says On Scalability For November 20th, 2015

Hey, it's HighScalability time:

100 years ago people saw this as our future. We will be so laughably wrong about the future.
  • $24 billion: amount telcos make selling data about you; $500,000: cost of an iOS zero day exploit; 50%: a year's growth of internet users in India; 72: number of cores in Intel's new chip; 30,000: Docker containers started on 1,000 nodes; 1962: when the first Cathode Ray Tube entered interplanetary space; 2x: cognitive improvement with better indoor air quality; 1 million: Kubernetes requests per second

  • Quotable Quotes:
    • Zuckerberg: One of our goals for the next five to 10 years is to basically get better than human level at all of the primary human senses: vision, hearing, language, general cognition. 
    • Sawyer Hollenshead: I decided to do what any sane programmer would do: Devise an overly complex solution on AWS for a seemingly simple problem.
    • Marvin Minsky: Big companies and bad ideas don't mix very well.
    • @mathiasverraes: Events != hooks. Hooks allow you to reach into a procedure, change its state. Events communicate state change. Hooks couple, events decouple
    • @neil_conway: Lamport, trolling distributed systems engineers since 1998. 
    • @timoreilly: “Silicon Valley is the QA department for the rest of the world. It’s where you test out new business models.” @jamescham #NextEconomy
    • Henry Miller: It is my belief that the immature artist seldom thrives in idyllic surroundings. What he seems to need, though I am the last to advocate it, is more first-hand experience of life—more bitter experience, in other words. In short, more struggle, more privation, more anguish, more disillusionment.
    • @mollysf: "We save north of 30% when we move apps to cloud. Not in infrastructure; in operating model." @cdrum #structureconf
    • Alex Rampell: This is the flaw with looking at Square and Stripe and calling them commodity players. They have the distribution. They have the engineering talent. They can build their own TiVo. It doesn’t mean they will, but their success hinges on their own product and engineering prowess, not on an improbable deal with an oligopoly or utility.
    • @csoghoian: The Michigan Supreme Court, 1922: Cars are tools for robbery, rape, murder, enabling silent approach + swift escape.
    • @tomk_: Developers are kingmakers, driving technology adoption. They choose MongoDB for cost, agility, dev productivity. @dittycheria #structureconf
    • Andrea “Andy” Cunningham: You have to always foster an environment where people can stand up against the orthodoxy, otherwise you will never create anything new.
    • @joeweinman: Jay Parikh at #structureconf on moving Instagram to Facebook: only needed 1 FB server for every 3 AWS servers
    • amirmc: The other unikernel projects (i.e. MirageOS and HaLVM), take a clean-slate approach which means application code also has to be in the same language (OCaml and Haskell, respectively). However, there's also ongoing work to make pieces of the different implementations play nicely together too (but it's early days).

  • After a tragedy you can always expect the immediate fear-inspired reframing of agendas. Snowden responsible for Paris...really?

  • High finance in low places. The Hidden Wealth of Nations: In 2003, less than a year before its initial public offering in August 2004, Google US transferred its search and advertisement technologies to “Google Holdings,” a subsidiary incorporated in Ireland, but which for Irish tax purposes is a resident of Bermuda.

  • The entertaining True Tales of Engineering Scaling. Started with Rails and Postgres. Traffic jumped. High memory workers on Heroku broke the bank. Can't afford the time to move to AWS. Lots of connection issues. More traffic. More problems. More solutions. An interesting story with many twists. The lesson: Building and, more importantly, shipping software is about the constant trade off of forward movement and present stability.

  • 5 Tips to Increase Node.js Application Performance: Implement a Reverse Proxy Server; Cache Static Files; Implement a Node.js Load Balancer; Proxy WebSocket Connections; Implement SSL/TLS and HTTP/2.

  • Docker adoption is not that easy; Uber took months to get up and running with Docker. How Docker Turbocharged Uber’s Deployments: Everything just changes a bit, we need to think about stuff differently...You really need to rethink all of the parts of your infrastructure...Uber recognizes that Docker removed team dependencies, offering more freedom because members were no longer tied to specific frameworks or specific versions. Framework and service owners are now able to experiment with new technologies and to manage their own environments.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Click to read more ...


Free Book: Practical Scalability Analysis with the Universal Scalability Law

If you are very comfortable with math and modeling, Dr. Neil Gunther's Universal Scalability Law is a powerful way of predicting system performance and whittling down those bottlenecks. If not, the USL can be hard to wrap your head around.

There's a free eBook for that. Performance and scalability expert Baron Schwartz, founder of VividCortex, has written a wonderful exploration of scalability truths using the USL as a lens: Practical Scalability Analysis with the Universal Scalability Law

As a sample of what you'll learn, here are some of the key takeaways from the book:

  • Scalability is a formal concept that is best defined as a mathematical function.
  • Linear scalability means equal return on investment. Double down on workers and you’ll get twice as much work done; add twice as many nodes and you’ll increase the maximum capacity twofold. Linear scalability is oft claimed but seldom delivered.
  • Systems scale sublinearly because of contention, which adds queueing delay, and crosstalk, which inflates service times. The penalty for contention grows linearly and the crosstalk penalty grows quadratically. (An alternative to the crosstalk theory is that longer queues are more costly to manage.)
  • Contention causes throughput to asymptotically approach the reciprocal of the serialized fraction of the workload. If your workload is 5% serialized you’ll never grow the effective speedup by more than 20-fold.
  • Crosstalk causes the system to regress. The harder you try to push systems with crosstalk, the more time they spend fighting amongst themselves.
  • To build scalable systems, avoid contention (serialization) and crosstalk (synchronization). The contention and crosstalk penalties degrade system scalability and performance much faster than you’d think. Even tiny amounts of serialization or pairwise data synchronization cause big losses in efficiency.
  • If you can’t avoid crosstalk, partition (shard) into smaller systems that will lose less efficiency by avoiding the explosion of service times at larger sizes.
  • To model systems with the USL, obtain measurements of throughput at various levels of load or size, and use regression to estimate the parameters to Equation 3.
  • To forecast scalability beyond what’s observable, be pessimistic and treat the USL as a best-case scenario that won’t really happen. Use Equation 4 to forecast the maximum possible throughput, but don’t forecast too far out. Use Equation 6 to forecast response time.
  • Use your judgment to predict limitations that the USL can’t see, such as saturation of network bandwidth or changes in the system’s model when all of the CPUs become busy.
  • Use the USL to explain why systems aren’t scaling well. Too much queueing? Too much crosstalk? Treat the USL as a pessimistic model and demand that your systems scale at least as well as it does.
  • If you see superlinear scaling, check your measurements and how you’ve set up the system under test. In most cases σ should be positive, not negative. Make sure you’re not varying the system’s dimensions relative to each other and creating apparent superlinear efficiencies that don’t really exist.
  • It’s fun to fantasize about models that might match observed system behavior more closely than the USL, but the USL arises analytically from how we know queueing systems work. Invented models might not have any basis in reality. Besides, the USL usually models systems extremely well up to the point of inflection, and modeling what happens beyond that isn’t as interesting as knowing why it happens.
  • Never trust a scatterplot with an arbitrary curve fit through it unless you know why that’s the right curve. Don’t confuse the USL, hockey stick charts from queueing theory, or other charts that just happen to have similar shapes. Know what shape various plots should exhibit, and suspect bad measurements or other mistakes if you don’t see them.
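To make the measurement-and-regression workflow in the takeaways concrete, here is a minimal sketch (assuming SciPy is available; all numbers are synthetic, not from the book). The USL models throughput as X(N) = λN / (1 + σ(N−1) + κN(N−1)), and curve fitting recovers σ (contention) and κ (crosstalk) from load test data:

```python
import numpy as np
from scipy.optimize import curve_fit

def usl(n, lam, sigma, kappa):
    """Universal Scalability Law: throughput at concurrency n.

    lam   - throughput of one worker (the ideal linear slope)
    sigma - contention penalty (serialized fraction), grows linearly
    kappa - crosstalk penalty, grows quadratically
    """
    return lam * n / (1 + sigma * (n - 1) + kappa * n * (n - 1))

# Synthetic measurements: throughput at various load levels, generated
# with 5% serialization and a little crosstalk (illustrative only).
n = np.array([1, 2, 4, 8, 16, 32, 64, 128])
x = usl(n, 1000, 0.05, 0.0002)

# Regression recovers the parameters from the measurements.
(lam, sigma, kappa), _ = curve_fit(usl, n, x, p0=(x[0], 0.01, 0.001))

# With 5% serialization, speedup can never exceed 1/sigma = 20x,
# no matter how many workers you add.
print(f"sigma={sigma:.3f}, max speedup ~ {1 / sigma:.0f}x")
```

In practice the measurements are noisy, so sanity-check the fitted σ and κ against the scatterplot before forecasting, per the takeaways above.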

Note, the link to the eBook requires entering some data, but it's free, well written, and useful, so it's probably worth it.

Related Articles


9ish Low Latency Strategies for SaaS Companies

Achieving very low latencies takes special engineering, but if you are a SaaS company, latencies of a few hundred milliseconds are possible for complex business logic using standard technologies like load balancers, queues, JVMs, and REST APIs.

Itai Frenkel, a software engineer at Forter, which provides a Fraud Prevention Decision as a Service, shows how in an excellent article: 9.5 Low Latency Decision as a Service Design Patterns.

While any article on latency will have some familiar suggestions, Itai goes into some new territory you can really learn from. The full article is rich with detail, so you'll want to read it, but here's a short gloss:

Click to read more ...


How Facebook's Safety Check Works

I noticed on Facebook during this horrible tragedy in Paris that there was some worry because not everyone had checked in using Safety Check (video). So I thought people might want to know a little more about how Safety Check works.

If a friend or family member hasn't checked in yet, it doesn't mean anything bad has happened to them. Please keep that in mind. Safety Check is a good system, but not a perfect system, so keep your hopes up.

This is a really short version, there's a longer article if you are interested.

When is Safety Check Triggered?

  • Before the Paris attack Safety Check was only activated for natural disasters. Paris was the first time it was activated for human disasters and they will be doing it more in the future. As a sign of this policy change, Safety Check has been activated for the recent bombing in Nigeria.

How Does Safety Check Work?

  • If you are in an area impacted by a disaster Facebook will send you a push notification asking if you are OK. 

  • Tapping the “I’m Safe” button marks that you are safe.

  • All your friends are notified that you are safe.

  • Friends can also see a list of all the people impacted by the disaster and how they are doing.

How is the impacted area selected?

  • Since Facebook only has city-level location for most users, declaring the area isn't as hard as drawing on a map. Facebook usually selects a number of cities, regions, states, or countries that are affected by the crisis.

  • Facebook always allows people to declare themselves into the crisis (or out) in case the geolocation prediction is inaccurate. This means Facebook can be a bit more selective with the geographic area, since they want a pretty high signal with the notifications. Notification click-through and conversion rates are used as downstream signals on how well a launch went.

  • For something like Paris, Facebook selected the whole city and launched. Especially with the media reporting "Paris terror attacks," this seemed like a good fit.

How do you build the pool of people impacted by a disaster in a certain area?

  • Building a geoindex is the obvious solution, but it has weaknesses.

  • People are constantly moving so the index will be stale.

  • A geoindex of 1.5 billion people is huge and would take a lot of resources they didn’t have. Remember, this is a small team without a lot of resources trying to implement a solution.

  • Instead of keeping a data pipeline that’s rarely used active all of the time, the solution should work only when there is an incident. This requires being able to make a query that is dynamic and instant.

  • Facebook does not have GPS-level location information for the majority of its user base (only those that turn on the nearby friends feature), so they use the same IP2Geo prediction algorithms that Google and other web companies use -- essentially determining city level location based on IP address.

The solution leveraged the shape of the social graph and its properties:

  • When there’s a disaster, say an earthquake in Nepal, a hook for Safety Check is turned on in every single news feed load.

  • When people check their news feed the hook executes. If the person checking their news feed is not in Nepal then nothing happens.

  • When someone in Nepal checks their news feed, that's when the magic happens.

  • Safety Check fans out to all their friends on their social graph. If a friend is in the same area then a push notification is sent asking if they are OK.

  • The process keeps repeating recursively. For every friend found in the disaster area a job is spawned to check their friends. Notifications are sent as needed.
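The steps above can be sketched as a small graph traversal. This is a hypothetical single-process sketch (the function and callback names are mine, not Facebook's); the real system runs the same logic as distributed async jobs:

```python
def safety_check_fanout(seed_user, friends_of, in_area, notify):
    """Fan out from one user who loaded their news feed inside the
    disaster area.

    friends_of(u) - returns u's friends in the social graph
    in_area(u)    - whether u is (predicted to be) in the affected region
    notify(u)     - sends the "Are you OK?" push notification
    """
    seen = {seed_user}       # seen state, so no one is visited twice
    stack = [seed_user]      # DFS worklist
    while stack:
        user = stack.pop()
        for friend in friends_of(user):
            if friend in seen:
                continue
            seen.add(friend)
            if in_area(friend):      # selective exploration: only
                notify(friend)       # recurse into friends who are
                stack.append(friend) # inside the disaster area
    return seen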

In Practice this Solution Was Very Effective

  • At the end of the day it's really just DFS (Depth First Search) with seen state and selective exploration.

  • The product experience feels live and instant because the algorithm is so fast at finding people. Everyone in the same room, for example, will appear to get their notifications at the same time. Why?

  • Using the news feed gives a random sampling of users that is biased towards the most active users with the most friends. And it filters out inactive users, eliminating billions of rows of computation that need not be performed.

  • The graph is dense and interconnected. Six Degrees of Kevin Bacon is wrong, at least on Facebook. The average distance between any two of Facebook’s 1.5 billion users is 4.74 edges. Sorry Kevin. With 1.5 billion users the whole graph can be explored within 5 hops. Most people can be efficiently reached by following the social graph.

  • There’s a lot of parallelism for free using a social graph approach. Friends can be assigned to different machines and processed in parallel. As can their friends, and so on.

  • Isn't it possible to use something like Hadoop/Hive/Presto to simply get a list of all users in Paris on demand? Hive and Hadoop are offline. It can take ~45 minutes to execute a query on Facebook's entire user table (even longer if it involves joins), and at certain times of the day it's slower (during work hours, usually). Not only that, but once the query executes some engineer has to copy and paste the results into a script that would likely run on one machine. Doing this in a distributed async job fashion allowed for a lot more flexibility. Even better, it's possible to change the geographic area as the algorithm runs and those changes are reflected immediately.

  • The cost of searching for the users in the area directly correlates with the size of the crisis (geographically). A smaller crisis ends up being fairly cheap, whereas larger crises end up checking on a larger and larger portion of the user base until 100% of the user base is reached. For Nepal, a big disaster, ~1B profiles were checked. For some smaller launches only ~100k profiles were checked. Had an index been used, or an offline job that did joins and filters, the cost would be constant, no matter how small the crisis.

On HackerNews


Stuff The Internet Says On Scalability For November 13th, 2015

Hey, it's HighScalability time:

Gorgeous picture of where microbes live in species. Humans have the most. (M. Wardeh et al.)

  • 14.3 billion: Alibaba single day sales; 1.55 billion: Facebook monthly active users; 6 billion: Snapchat video views per day; unlimited: now defined as 300 GB by Comcast; 80km: circumference of China's proposed supercolider; 500: alien worlds visualized; 50: future sensors per acre on farms; 1 million: Instagram requests per second.

  • Quotable Quotes:
    • Adam Savage~ Lesson learned: do not test fire rockets indoors.
    • dave_sullivan: I'm going to say something unpopular, but horizontally-scaled deep learning is overkill for most applications. Can anyone here present a use case where they have personally needed horizontal scaling because a Titan X couldn't fit what they were trying to do? 
    • @bcantrill: Question I've been posing at #KubeCon: are we near Peak Confusion in the container space? Consensus: no -- confusion still accelerating!
    • @PeterGleick: When I was born, CO2 levels were  ~300 ppm. This week may be the last time anyone alive will see less than 400 ppm. 
    • @patio11: "So I'm clear on this: our business is to employ people who can't actually do worthwhile work, train them up, then hand to competition?"
    • Settlement-Size: This finding reveals that incipient forms of hierarchical settlement structure may have preceded socioeconomic complexity in human societies
    • wingolog: for a project to be technically cohesive, it needs to be socially cohesive as well; anything else is magical thinking.
    • @mjpt777: Damn! @toddlmontgomery has got Aeron C++ IPC to go at over 30m msg/sec. Java is struggling to keep up.
    • Tim O'Reilly: While technological unemployment is a real phenomenon, I think it's far more important to look at the financial incentives we've put in place for companies to cut workers and the cost of labor. If you're a public company whose management compensation is tied your stock price, it's easy to make short term decisions that are good for your pocketbook but bad long term for both the company and for society as a whole.
    • @RichardDawkins: Evolution is "Descent with modification". Languages, computers and fashions evolve. Solar systems, mountains and embryos don't. They develop
    • @Grady_Booch: Dispatches from a programmer in the year 2065: "How do you expect me to fit 'Hello, World' into only a terabyte of memory?" via Joe Marasco
    • @huntchr: I find #Zookeeper to be the Achilles Heal of a few otherwise interesting projects e.g. #kafka, #mesos.
    • Robert Scoble~ Facebook Live was bringing 10x more viewers than Twitter/Periscope
    • cryptoz: I've always wondered about this. Presumably the people leading big oil companies are not dumb idiots; so why wouldn't they take this knowledge and prepare in advance?

  • Waze is using data from sources you may not expect. Robert Scoble: How about Waze? I witnessed an accident one day on the highway near my house. Two lane road. The map turned red within 30 seconds of the accident. How did that happen? Well, it turns out cell phone companies (Verizon, in particular, in the United States) gather real time data from cell phones. Your phone knows how fast it’s going. In fact, today, Waze shows you that it knows. Verizon sells that data (anonymized) to Google, which then uses that data to put the red line on your map.

  • If email had been done really right in the early days, we wouldn't need half the social networks or messaging apps we have today. Almost everything we see is a reimplementation of email. Gmail, We Need To Talk.

  • Don Norman and Bruce Tognazzini, prophets from Apple's time in the wilderness, don't much like the new religion. They stand before the temple shaking fists at blasphemy. How Apple Is Giving Design A Bad Name: Apple is destroying design. Worse, it is revitalizing the old belief that design is only about making things look pretty. No, not so! Design is a way of thinking, of determining people’s true, underlying needs, and then delivering products and services that help them. Design combines an understanding of people, technology, society, and business. 

  • There's a new vision of the Internet out there and it's built around the idea of Named Data Networking (NDN). It's an evolution from today’s host-centric network architecture (IP) to a data-centric network architecture. Luminaries like Van Jacobson like the idea. Packet Pushers with good coverage in Show 262 – Future of Networking – Dave Ward. Dave Ward is the CTO of Engineering and Chief Architect at Cisco. For me, make the pipes dumb, fast, and secure. Everything else is emergent.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Click to read more ...


Sponsored Post: Digit, iStreamPlanet, Instrumental, Redis Labs, SignalFx, InMemory.Net, VividCortex, MemSQL, Scalyr, AiScaler, AppDynamics, ManageEngine, Site24x7

Who's Hiring?

  • Senior Devops Engineer - is looking for a senior devops engineer to help us in making the internet more transparent around downtime. Your mission: help us create a fast, scalable infrastructure that can be deployed to quickly and reliably.

  • Digit Game Studios, Ireland’s largest game development studio, is looking for game server engineers to work on existing and new mobile 3D MMO games. Our most recent project in development is based on an iconic AAA-IP and therefore we expect very high DAU & CCU numbers. If you are passionate about games and if you are experienced in creating low-latency architectures and/or highly scalable but consistent solutions then talk to us and apply here.

  • As a Networking & Systems Software Engineer at iStreamPlanet you’ll be driving the design and implementation of a high-throughput video distribution system. Our cloud-based approach to video streaming requires terabytes of high-definition video routed throughout the world. You will work in a highly-collaborative, agile environment that thrives on success and eats big challenges for lunch. Please apply here.

  • As a Scalable Storage Software Engineer at iStreamPlanet you’ll be driving the design and implementation of numerous storage systems including software services, analytics and video archival. Our cloud-based approach to world-wide video streaming requires performant, scalable, and reliable storage and processing of data. You will work on small, collaborative teams to solve big problems, where you can see the impact of your work on the business. Please apply here.

  • At Scalyr, we're analyzing multi-gigabyte server logs in a fraction of a second. That requires serious innovation in every part of the technology stack, from frontend to backend. Help us push the envelope on low-latency browser applications, high-speed data processing, and reliable distributed systems. Help extract meaningful data from live servers and present it to users in meaningful ways. At Scalyr, you’ll learn new things, and invent a few of your own. Learn more and apply.

  • UI Engineer: AppDynamics, founded in 2008 and led by proven innovators, is looking for a passionate UI Engineer to design, architect, and develop their user interface using the latest web and mobile technologies. Make the impossible possible and the hard easy. Apply here.

  • Software Engineer - Infrastructure & Big Data: AppDynamics, leader in next generation solutions for managing modern, distributed, and extremely complex applications residing in both the cloud and the data center, is looking for Software Engineers (all levels) to design and develop scalable software written in Java and MySQL for the backend component of software that manages application architectures. Apply here.

Fun and Informative Events

  • Your event could be here. How cool is that?

Cool Products and Services

  • Instrumental is a hosted real-time application monitoring platform. In the words of one of our customers: "Instrumental is the first place we look when an issue occurs. Graphite was always the last place we looked." - Dan M

  • Real-time correlation across your logs, metrics and events. just released its operations data hub into beta and we are already streaming in billions of log, metric and event data points each day. Using our streaming analytics platform, you can get real-time monitoring of your application performance, deep troubleshooting, and even product analytics. We allow you to easily aggregate logs and metrics by micro-service, calculate percentiles and moving window averages, forecast anomalies, and create interactive views for your whole organization. Try it for free, at any scale.

  • Turn chaotic logs and metrics into actionable data. Scalyr replaces all your tools for monitoring and analyzing logs and system metrics. Imagine being able to pinpoint and resolve operations issues without juggling multiple tools and tabs. Get visibility into your production systems: log aggregation, server metrics, monitoring, intelligent alerting, dashboards, and more. Trusted by companies like Codecademy and InsideSales. Learn more and get started with an easy 2-minute setup. Or see how Scalyr is different if you're looking for a Splunk alternative or Sumo Logic alternative.

  • SignalFx: just launched an advanced monitoring platform for modern applications that's already processing 10s of billions of data points per day. SignalFx lets you create custom analytics pipelines on metrics data collected from thousands or more sources to create meaningful aggregations--such as percentiles, moving averages and growth rates--within seconds of receiving data. Start a free 30-day trial!

  • InMemory.Net provides a .Net native in-memory database for analysing large amounts of data. It runs natively on .Net, and provides native .Net, COM & ODBC APIs for integration. It also has an easy to use language for importing data, and supports standard SQL for querying data. http://InMemory.Net

  • VividCortex goes beyond monitoring and measures the system's work on your servers, providing unparalleled insight and query-level analysis. This unique approach ultimately enables your team to work more effectively, ship more often, and delight more customers.

  • MemSQL provides a distributed in-memory database for high value data. It's designed to handle extreme data ingest and store the data for real-time, streaming and historical analysis using SQL. MemSQL also cost effectively supports both application and ad-hoc queries concurrently across all data. Start a free 30 day trial here:

  • aiScaler, aiProtect, aiMobile Application Delivery Controller with integrated Dynamic Site Acceleration, Denial of Service Protection and Mobile Content Management. Also available on Amazon Web Services. Free instant trial, 2 hours of FREE deployment support, no sign-up required.

  • ManageEngine Applications Manager: Monitor physical, virtual and Cloud Applications.

  • : Monitor End User Experience from a global monitoring network.

If any of these items interest you there's a full description of each sponsor below. Please click to read more...

Click to read more ...


A 360 Degree View of the Entire Netflix Stack

This is a guest repost by Chris Ueland, creator of Scale Scale, with a creative high level view of the Netflix stack.

As we research and dig deeper into scaling, we keep running into Netflix. They are very public with their stories. This post is a round up that we put together with Bryan’s help. We collected info from all over the internet. If you’d like to reach out with more info, we’ll append this post. Otherwise, please enjoy!

–Chris / ScaleScale / MaxCDN

A look at what we think is interesting about how Netflix Scales

Click to read more ...


Stuff The Internet Says On Scalability For November 6th, 2015

Hey, it's HighScalability time:

Cool genealogy of Relational Database Management Systems.

  • 9,000: Artifacts Uncovered in California Desert; 400 Million: LinkedIn members; 100: CEOs have more retirement assets than 41% of American families; $160B: worth of AWS; 12,000: potential age of oldest oral history; fungi: world's largest miners 

  • Quotable Quotes:
    • @jaykreps: Someone tell @TheEconomist that people claiming you can build Facebook on top of a p2p blockchain are totally high.
    • Larry Page: I think my job is to create a scale that we haven't quite seen from other companies. How we invest all that capital, and so on.
    • Tiquor: I like how one of the oldest concepts in programming, the ifdef, has now become (if you read the press) a "revolutionary idea" created by Facebook and apparently the core of a company's business. I'm only being a little sarcastic.
    • @DrQz: +1 Data comes from the Devil, only models come from God. 
    • @DakarMoto: Great talk by @adrianco today quote of the day "i'm getting bored with #microservices, and I’m getting very interested in #teraservices.”
    • @adrianco: Early #teraservices enablers - Diablo Memory1 DIMMs, 2TB AWS X1 instances, in-memory databases and analytics...
    • @PatrickMcFadin: Average DRAM Contract Price Sank Nearly 10% in Oct Due to Ongoing Supply Glut. How long before 1T memory is min?
    • @leftside: "Netflix is a monitoring system that sometimes shows people movies." --@adrianco #RICON15
    • Linus: So I really see no reason for this kind of complete idiotic crap.
    • Jeremy Hsu: In theory, the new architecture could pack about 25 million physical qubits within an array that’s 150 micrometers by 150 µm. 
    • @alexkishinevsky: Just done AWS API Gateway HTTPS API, AWS Lambda function to process data straight into AWS Kinesis. So cool, so different than ever before.
    • @highway_62: @GreatDismal Food physics and candy scaling is a real thing. Expectations and ratios get blown. Mouth feel changes.
    • @randybias:  #5 you can’t get automation scaling without relative homogeneity (homologous) and that’s why the webscale people succeeded
    • Brian Biles: Behind it all: VMs won.  The only thing that kept this [Server Centric Storage is Killing Arrays] from happening a long time ago was OS proliferation on physical servers in the “Open Systems” years.  Simplifying storage for old OS’s required consolidative arrays with arbitrated-standard protocols.
    • @paulcbetts: This disk is writing at almost 1GB/sec and reading at ~2.2GB/sec. I remember in 2005 when I thought my HD reading at 60MB/sec was hot shit.
    • @merv: One of computing’s biggest challenges for architects and designers: scaling is not distributed uniformly in time or space.

  • To Zookeeper or not to Zookeeper? This is one of the questions debated on an energetic mechanical-sympathy thread. Some say Zookeeper is unreliable and difficult to manage. Others say Zookeeper works great if carefully tended. If you need a gossip/discovery service there are alternatives: JGroups, Raft, Consul, Copycat.

  • Algorithms are as capable of tyranny as any other entity wielding power. Twins denied driver’s permit because DMV can’t tell them apart

  • Odd thought. What if Twitter took stock or options as payment for apps that want to use Twitter as a platform (not Fabric)? The current user caps would effectively be the free tier. If you want to go above that you can pay. Or you can exchange stock or options for service. This prevents the Yahoo problem of being King Makers, that is when Google becomes more valuable than you. It gives Twitter potential for growth. It aligns incentives because Twitter will be invested in the success of apps that use it. And it gives apps skin in the game. Although Twitter has to recognize the value of the stock they receive as revenue, they can offset that against previous losses.

  • One of the best stories ever told. Her Code Got Humans on the Moon—And Invented Software Itself: MARGARET HAMILTON WASN’T supposed to invent the modern concept of software and land men on the moon...But the Apollo space program came along. And Hamilton stayed in the lab to lead an epic feat of engineering that would help change the future of what was humanly—and digitally—possible. 



Strategy: Avoid Lots of Little Files

I've been bitten by this one. It happens when you quite naturally use the file system as a quick and dirty database. A directory is a lot like a table and a file name looks a lot like a key. You can store many-to-one relationships via subdirectories. And the path to a file makes a handy quick lookup key. 
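The pattern is seductive because it takes so little code. A minimal sketch of the idea, with illustrative names (`put`, `get`, the `users` table) that are not from any real system:

```python
import os

# A quick-and-dirty "database": a directory is the table,
# the file name is the key, and the file body is the row.

def put(table, key, value):
    os.makedirs(table, exist_ok=True)            # the directory is the "table"
    with open(os.path.join(table, key), "w") as f:
        f.write(value)                           # the file body is the "row"

def get(table, key):
    with open(os.path.join(table, key)) as f:    # the path is the lookup key
        return f.read()

put("users", "alice", "alice@example.com")
print(get("users", "alice"))                     # prints alice@example.com
```

Ten lines and it works, which is exactly why the lots-of-little-files trap is so easy to fall into.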

The problem is a file system isn't a database. That realization doesn't hit until you reach a threshold where there are actually lots of files. Everything works perfectly until then.

When the threshold is hit, iterating a directory becomes very slow because most file system directory data structures are not optimized for the lots-of-small-files case. Even opening a file becomes slow.

According to Steve Gibson on Security Now (@16:10), 1Password ran into this problem. 1Password stored every item in the vault in an individual file. This allowed standard file syncing technology to be used to update only the changed files. Updating a password changes just one file, so only that file is synced.

Steve thinks this is a design mistake, but the approach makes perfect sense. It's simple and robust, which is good design given what I assume was the original reasonable expectation of relatively small vaults.

The problem is the file approach doesn't scale to larger vaults with thousands of files for thousands of web sites. Interestingly, decrypting the files was not the bottleneck; the overhead of opening them was. The slowdown came from the elaborate security checks the OS makes to validate whether a process has the rights to open a file.

The new version of 1Password uses a UUID to shard items into one of 16 files based on the first hex digit of the UUID. Given good random number generation, the files should grow more or less equally as items are added. Problem solved. Would this be your first solution when first building a product? Probably not.
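The sharding scheme is simple enough to sketch. This is an illustration of the idea rather than 1Password's actual code, and the `vault_*.db` file names are made up:

```python
import uuid

# Route each vault item to one of 16 files keyed by the first
# hex digit of its UUID. With random UUIDs the shards fill evenly.

def shard_file(item_id: uuid.UUID) -> str:
    first_digit = item_id.hex[0]        # one of 0-9a-f, so 16 possible shards
    return f"vault_{first_digit}.db"

item = uuid.uuid4()
print(shard_file(item))                 # e.g. "vault_a.db"
```

Syncing still only has to move the one shard a change lands in, but the open-file overhead is now bounded at 16 files no matter how large the vault grows.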

Apologies to 1Password if this is not a correct characterization of their situation, but even if wrong, the lesson still remains.



Paper: Coordination Avoidance in Distributed Databases By Peter Bailis

Peter Bailis has released the work of a lifetime, his dissertation is now available online: Coordination Avoidance in Distributed Databases.

The topic Peter is addressing is summed up nicely by his thesis statement: 

Many semantic requirements of database-backed applications can be efficiently enforced without coordination, thus improving scalability, latency, and availability.

I'd like to say I've read the entire dissertation and can offer cogent, insightful analysis, but that would be a lie. Though I have watched several of Peter's videos (see Related Articles). He's doing important and interesting work that, like much university research, may change the future of what everyone is doing.

From the introduction:

The rise of Internet-scale geo-replicated services has led to upheaval in the design of modern data management systems. Given the availability, latency, and throughput penalties associated with classic mechanisms such as serializable transactions, a broad class of systems (e.g., “NoSQL”) has sought weaker alternatives that reduce the use of expensive coordination during system operation, often at the cost of application integrity. When can we safely forego the cost of this expensive coordination, and when must we pay the price?

In this thesis, we investigate the potential for coordination avoidance—the use of as little coordination as possible while ensuring application integrity—in several modern data-intensive domains. We demonstrate how to leverage the semantic requirements of applications in data serving, transaction processing, and web services to enable more efficient distributed algorithms and system designs. The resulting prototype systems demonstrate regular order-of-magnitude speedups compared to their traditional, coordinated counterparts on a variety of tasks, including referential integrity and index maintenance, transaction execution under common isolation models, and database constraint enforcement. A range of open source applications and systems exhibit similar results.
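To make the thesis concrete, here is a toy illustration (mine, not from the dissertation) of an operation that needs no coordination: a grow-only counter where replicas apply increments locally and merge state later. Because increments commute, every merge order yields the same total, so the "count equals sum of events" invariant holds without any locking or consensus:

```python
# Each replica tracks its own count; merge takes per-replica maxima,
# which makes merging idempotent, commutative, and coordination-free.

def merge(a: dict, b: dict) -> dict:
    return {r: max(a.get(r, 0), b.get(r, 0)) for r in a.keys() | b.keys()}

def total(counter: dict) -> int:
    return sum(counter.values())

r1 = {"replica1": 3}    # replica1 counted 3 events locally
r2 = {"replica2": 5}    # replica2 counted 5, never talking to replica1
assert total(merge(r1, r2)) == total(merge(r2, r1)) == 8
```

Serializable transactions would force the replicas to agree before each increment; here they never communicate until merge time, which is where the order-of-magnitude speedups come from.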

Related Articles