Peter Norvig's 9 Master Steps to Improving a Program


Inspired by an xkcd comic, Peter Norvig, Director of Research at Google and all-around interesting and nice guy, has created an above-par code kata involving a regex program that demonstrates the core inner loop of many successful systems profiled on HighScalability.

The original code is at xkcd 1313: Regex Golf, which comes up with an algorithm to find a short regex that matches the winners and not the losers from two arbitrary lists. The Python code is readable, the process is TDDish, and the problem, which sounds simple, soon explodes into regex weirdness, as most regex code does. If you find regular expressions confusing you'll definitely benefit from Peter's deliberate strategy for finding a regex.
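The heart of the approach is a greedy set cover over candidate regex parts: generate parts from the winners, throw away any part that matches a loser, then repeatedly pick the part that covers the most still-uncovered winners. Here is a simplified sketch of that idea (the candidate generation is deliberately minimal and the scoring is a guess at a reasonable heuristic; Norvig's actual code is richer):

```python
import re

def matches(pattern, strings):
    """Return the subset of strings that the pattern matches."""
    return {s for s in strings if re.search(pattern, s)}

def regex_golf(winners, losers):
    """Greedily build a short alternation that matches every winner
    and no loser."""
    # Candidate parts: each whole winner, plus its short substrings.
    pool = {re.escape(w) for w in winners}
    pool |= {re.escape(w[i:i + n]) for w in winners
             for n in (2, 3, 4) for i in range(len(w) - n + 1)}
    # Discard any part that matches a loser.
    pool = {p for p in pool if not matches(p, losers)}
    solution, uncovered = [], set(winners)
    while uncovered:
        # Greedy set cover: among parts that still cover something,
        # favor broad coverage relative to part length.
        best = max((p for p in pool if matches(p, uncovered)),
                   key=lambda p: 3 * len(matches(p, uncovered)) - len(p))
        solution.append(best)
        uncovered -= matches(best, uncovered)
    return '|'.join(solution)
```

Each loop iteration is guaranteed to cover at least one more winner, and the result matches all winners and no losers by construction, though it is not guaranteed to be the shortest possible regex (set cover is NP-hard; greedy is a good approximation).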

The post demonstrating the iterated improvement of the program is at xkcd 1313: Regex Golf (Part 2: Infinite Problems). As with most first solutions it wasn't optimal. To improve the program Peter recommends the following steps:



Stuff The Internet Says On Scalability For February 21st, 2014

Hey, it's HighScalability time (a particularly bountiful week):

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge...



Planetary-Scale Computing Architectures for Electronic Trading and How Algorithms Shape Our World

Algorithms are moving out of the Platonic realm and becoming dynamic, first-class players in real life. We've seen corporations become people. Algorithms will likely also follow that path to agency.

Kevin Slavin, in his intriguing TED talk How Algorithms Shape Our World, gives many and varied examples of how algorithms have penetrated real life.

One of his most interesting examples comes from a highly technical paper on relativistic statistical arbitrage, which says that to make money on markets you have to be where the people are (the red dots on the diagram below), which means you have to put servers where the blue dots are, many of which are in the ocean. Here's the diagram from the paper:

Mr. Slavin neatly sums this up by saying:

And it's not the money that's so interesting actually. It's what the money motivates, that we're actually terraforming the Earth itself with this kind of algorithmic efficiency. And in that light, you go back and you look at Michael Najjar's photographs, and you realize that they're not metaphor, they're prophecy. They're prophecy for the kind of seismic, terrestrial effects of the math that we're making. And the landscape was always made by this sort of weird, uneasy collaboration between nature and man. But now there's this third co-evolutionary force: algorithms -- the Boston Shuffler, the Carnival. And we will have to understand those as nature, and in a way, they are.
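The geometry behind the blue dots is simple in the paper's most basic symmetric case: the optimal relay between two trading centers sits at the great-circle midpoint between them. Computing such a point is a few lines of spherical trigonometry (a sketch assuming a spherical Earth; the New York/London coordinates are approximate and purely illustrative):

```python
from math import radians, degrees, sin, cos, atan2, sqrt

def midpoint(lat1, lon1, lat2, lon2):
    """Great-circle midpoint between two points, degrees in, degrees out,
    assuming a spherical Earth."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    # Project the second point into a frame aligned with the first.
    bx = cos(lat2) * cos(lon2 - lon1)
    by = cos(lat2) * sin(lon2 - lon1)
    lat_m = atan2(sin(lat1) + sin(lat2),
                  sqrt((cos(lat1) + bx) ** 2 + by ** 2))
    lon_m = lon1 + atan2(by, cos(lat1) + bx)
    return degrees(lat_m), degrees(lon_m)

# New York (~40.7N, 74.0W) to London (~51.5N, 0.1W): the optimal
# relay lands in the middle of the North Atlantic, i.e. the ocean.
relay = midpoint(40.7, -74.0, 51.5, -0.1)
```

Run this for enough exchange pairs and you get the scatter of mid-ocean server locations the paper's diagram shows.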

The introduction to the paper spells out why this is so:



Sponsored Post: Couchbase, Tokutek, Logentries, Booking, Apple, MongoDB, BlueStripe, AiScaler, Aerospike, LogicMonitor, AppDynamics, ManageEngine, Site24x7  

Who's Hiring?

  • Apple is hiring for multiple positions. Imagine what you could do here. At Apple, great ideas have a way of becoming great products, services, and customer experiences very quickly.
    • Sr Software Engineer. The Emerging Technology team is looking for a highly motivated, detail-oriented, energetic individual with experience in a variety of big data technologies. You will be part of a fast growing, cohesive team with many exciting responsibilities related to Big Data. Please apply here.
    • C++ Senior Developer and Architect- Maps. The Maps Team is looking for a senior developer and architect to support and grow some of the core backend services that support Apple Map's Front End Services. Please apply here.  
    • Senior Engineer. We are looking for a team player with focus on designing and developing WWDR’s web-based applications. The successful candidate must have the ability to take minimal business requirements and work pro-actively with cross functional teams to obtain clear objectives that drive projects forward to completion. Please apply here.
    • Software Engineer. We are looking for a team player with focus on designing and developing WWDR’s web-based applications. The successful candidate must have the ability to take minimal business requirements and work pro-actively with cross functional teams to obtain clear objectives that drive projects forward to completion. Please apply here.
    • Quality Assurance Engineer. The iOS Systems team is looking for a Quality Assurance engineer. In this role you will be expected to work hand-in-hand with the software engineering team to find and diagnose software defects. Please apply here.

  • We need awesome people @ - We want YOU! Come design next generation interfaces, solve critical scalability problems, and hack on one of the largest Perl codebases. Apply:

  • UI Engineer. AppDynamics, founded in 2008 and led by proven innovators, is looking for a passionate UI Engineer to design, architect, and develop their user interface using the latest web and mobile technologies. Make the impossible possible and the hard easy. Apply here.

  • Software Engineer - Infrastructure & Big Data. AppDynamics, a leader in next generation solutions for managing modern, distributed, and extremely complex applications residing in both the cloud and the data center, is looking for Software Engineers (all levels) to design and develop scalable software written in Java and MySQL for the backend component of software that manages application architectures. Apply here.

Fun and Informative Events

  • Which MongoDB Distribution Should You Use? AOL Benchmark Results - TokuMX vs. MongoDB. March 5th at 1pm ET. It may be easy to choose a NoSQL database, but do you know which distribution is best for you? Which will perform better? Which will scale further? Look before you leap.  Register now.

  • Aerospike Webinar: “Getting the Most out of Your Flash/SSDs”. Tune in to Aerospike's latest webinar, “Getting the Most Out of your Flash/SSDs” at 10am PST Tuesday, Feb. 18 to learn how to select, test and prepare your drives for maximum database performance. Register now. 

Cool Products and Services

  • As one of the fastest growing VoIP services in the world, Viber has replaced MongoDB with Couchbase Server, supporting 100,000+ operations per second in the short term and 1,000,000+ operations per second in the long term for their third generation architecture. See the full story on the Viber switch.

  • Log management made easy with Logentries. Billions of log events analyzed every day to unlock insights from the log data that matters to you. Simply powerful search, tagging, alerts, live tail and more for all of your log data. Automated AWS log collection and analytics, including CloudWatch events.

  • LogicMonitor is the cloud-based IT performance monitoring solution that enables companies to easily and cost-effectively monitor their entire IT infrastructure stack – storage, servers, networks, applications, virtualization, and websites – from the cloud. No firewall changes needed - start monitoring in only 15 minutes utilizing customized dashboards, trending graphs & alerting.

  • MongoDB Backup Free Usage Tier Announced. We're pleased to introduce the free usage tier to MongoDB Management Service (MMS). MMS Backup provides point-in-time recovery for replica sets and consistent snapshots for sharded systems with minimal performance impact. Start backing up today at

  • BlueStripe FactFinder Express is the ultimate tool for server monitoring and solving performance problems. Monitor URL response times and see if the problem is the application, a back-end call, a disk, or OS resources.

  • aiScaler, aiProtect, aiMobile Application Delivery Controller with integrated Dynamic Site Acceleration, Denial of Service Protection and Mobile Content Management. Cloud deployable. Free instant trial, no sign-up required.

  • ManageEngine Applications Manager: Monitor physical, virtual and Cloud Applications.

  • : Monitor End User Experience from a global monitoring network.

If any of these items interest you there's a full description of each sponsor below. Please click to read more...



How the Architecture Evolved to 99.999% Availability, 8 Million Visitors Per Day, and 200,000 Requests Per Second

This is a guest post by Dave Hagler, Systems Architect at AOL.

The AOL homepages receive more than 8 million visitors per day. That's more daily viewers than Good Morning America or the Today Show on television. Over a billion page views are served each month. The site has been a major internet destination since 1996, and still has a strong following of loyal users.

The architecture is in its 5th generation. It has essentially been rebuilt from scratch 5 times over two decades. The current architecture was designed 6 years ago. Pieces have been upgraded and new components have been added along the way, but the overall design remains largely intact. The code, tools, and development and deployment processes have been highly tuned through 6 years of continual improvement, making the architecture battle tested and very stable.

The engineering team is made up of developers, testers, and operations and totals around 25 people.  The majority are in Dulles, Virginia with a smaller team in Dublin, Ireland.

In general the technologies in use are Java, JavaServer Pages, Tomcat, Apache, CentOS 5, Git, Jenkins, Selenium, and jQuery. There are other technologies used outside that stack, but these are the main components.

Design Principles



Stuff The Internet Says On Scalability For February 14th, 2014

Hey, it's HighScalability time:

  • 5 billion: Number of phone records NSA collects per day; Facebook: 1.23 billion users, 201.6 billion friend connections, 400 billion shared photos, and 7.8 trillion messages sent since the start of 2012.
  • Quotable Quotes:
    • @ShrikanthSS: people repeatedly underestimate the cost of busy waits
    • @mcclure111: Learning today java.net.URL.equals is a blocking operation that hits the network shook me badly. I don't know if I can trust the world now.
    • @hui_kenneth: @randybias: “3 ways 2 be market leader - be 1st, be best, or be cheapest. #AWS was all 3. Now #googlecloud may be best & is the cheapest.”
    • @thijs: The nice thing about Paper is that we can point out to clients that it took 18 experienced designers and developers two years to build.
    • @neil_conway: My guess is that the split between Spanner and F1 is a great example of Conway's Law.
  • How Facebook built the real-time posts search feature of Graph search. It's a big problem: one billion new posts added every day, the posts index contains more than one trillion total posts, comprising hundreds of terabytes of data. 

  • Chartbeat Engineering shares some of their experiences in two excellent articles: Part 1, Part 2. Lessons: DNS is not a great means of load balancing traffic; modifying sysctl values from their defaults can be important for reliability; graphing metrics is your friend; through TCP tuning and AWS Elastic Load Balancer they decreased response time by 98.5% and shrank their front end server footprint by 20%; enabling cross-zone load balancing got their request count distribution extremely well balanced; and they plan to move from the m1.large instance type to the c3.large, which is almost 50% cheaper and provides more compute units, which in turn yields slightly better response times.

  • Creating a resilient organization is a little like getting an allergy shot: you have to take in a little of what ails you to boost your immune system. That's the idea behind DiRT, Google's Disaster Recovery Testing event. Weathering the Unexpected tells the story of how far Google goes to improve its corporate immune system with disaster scenarios. Disasters can range from a walk-through of a backup restore to a company wide zombie attack simulation. More here and here.

  • 37signals shows the power of focus by shedding all their products except Basecamp and even renaming themselves Basecamp. A company can grow wild unless pruned and shaped to let in the maximum amount of sunlight, growing the most and ripest fruit. While a hard prune is common in the orchard, it's not so common in an organization. A very brave move.

  • When I suggested this I was laughed at. So there! Patch Panels in the Sky: A Case for Free-Space Optics in Data Centers: We explore the vision of an all-wireless inter-rack datacenter fabric.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge...



Snabb Switch - Skip the OS and Get 40 million Requests Per Second in Lua

Snabb Switch is a toolkit for solving novel problems in networking. If you are building a new packet-processing network appliance, you can use Snabb Switch to get the job done more quickly.

Here's a great impassioned overview from erichocean:

Or, you could just avoid the OS altogether:

Our current engineering target is 1 million writes/sec and > 10 million reads/sec on top of an architecture similar to that, on a single box, to our fully transactional, MVCC database (writes do not block reads, and vice versa) that runs in the same process (a la SQLite), which we've also merged with our application code and our caching tier, so we're down to, literally, a single process for what would have been at least three separate tiers in a traditional setup.

The result is that we had to move to measuring request latency in microseconds exclusively. The architecture (without additional application-specific processing) supports a wire-to-wire messaging speed of 26 nanoseconds, or approx. 40 million requests per second. And that's written in Lua!

To put that in perspective, that kind of performance is about 1/3 of what you'd need to be able to do to handle Facebook's messaging load (on average, obviously, Facebook bursts higher than the average at times...).

Point being, the OS is just plain out-of-date for how to solve heavy data plane problems efficiently. The disparity between what the OS can do and what the hardware is capable of delivering is off by a few orders of magnitude right now. It's downright ridiculous how much performance we're giving up for supposed "convenience" today.
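The "writes do not block reads" property erichocean describes is the essence of MVCC, and can be illustrated with a toy copy-on-write store (a hypothetical sketch in Python for clarity; real MVCC engines, and Snabb's Lua code, use versioned structures rather than full copies):

```python
import threading

class MVCCStore:
    """Toy MVCC key-value store: each write publishes a new immutable
    snapshot, so readers never take a lock and never block writers."""

    def __init__(self):
        self._snapshot = {}                   # current published version
        self._write_lock = threading.Lock()   # serializes writers only

    def read(self, key, default=None):
        # Lock-free read: capture the current snapshot reference once,
        # then read from it; concurrent writers never mutate it.
        snapshot = self._snapshot
        return snapshot.get(key, default)

    def write(self, key, value):
        with self._write_lock:
            # Copy-on-write: build the next version on the side, then
            # publish it with a single atomic reference assignment.
            next_version = dict(self._snapshot)
            next_version[key] = value
            self._snapshot = next_version
```

A reader holding an old snapshot keeps seeing a consistent view even while writes land, which is why request latency stays flat under mixed read/write load.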


Paper: Network Stack Specialization for Performance 

In the scalability-is-specialization department, here is an interesting paper presented at HotNets '13 on high performance networking: Network Stack Specialization for Performance.

The idea is that generalizing a service so it fits in the kernel comes at a high performance cost, so move TCP into user space. The result is a web server with ~3.5x the throughput of Nginx "while experiencing low CPU utilization, linear scaling on multicore systems, and saturating current NIC hardware."

Here's a good description of the paper published on Layer 9:



13 Simple Tricks for Scaling Python and Django with Apache from HackerEarth

HackerEarth is a coding skill practice and testing service that, in a series of well written articles, describes the trials and tribulations of building their site and how they overcame them: Scaling Python/Django application with Apache and mod_wsgi, Programming challenges, uptime, and mistakes in 2013, Post-mortem: The big outage on January 25, 2014, The Robust Realtime Server, 100,000 strong - CodeFactory server, Scaling database with Django and HAProxy, Continuous Deployment System, HackerEarth Technology Stack.

What characterizes these articles and makes them especially helpful is a drive for improvement and an openness towards reporting what didn't work and how they figured out what would work.

As they say, mistakes happen when you are building a complex product with a team of just 3-4 engineers, but investing in infrastructure allowed them to take more breaks and roam the streets of Bangalore while their servers happily served thousands of requests every minute, all while reaching a 50,000 user base with ease.
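For context on the Apache/mod_wsgi deployment model those articles revolve around: mod_wsgi hands every request to a plain WSGI callable, so the entire contract between Apache and the Django application is a function like this (a generic sketch, not HackerEarth's actual code):

```python
def application(environ, start_response):
    """Minimal WSGI entry point of the kind mod_wsgi loads from a
    .wsgi file; Django generates an equivalent callable for you."""
    body = b"Hello from behind Apache/mod_wsgi"
    status = "200 OK"
    headers = [("Content-Type", "text/plain"),
               ("Content-Length", str(len(body)))]
    start_response(status, headers)
    return [body]
```

Django exposes the same kind of callable via `get_wsgi_application()` in its `wsgi.py`; everything the HackerEarth posts tune (worker processes, threads, daemon mode) sits between Apache and this function.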

Here's a gloss on how they did it:



Stuff The Internet Says On Scalability For February 7th, 2014

Hey, it's HighScalability time:

  • 5 billion requests per day: Heroku serves 60,000 requests per second; 500 Petabytes: Backblaze's New Data Center; 25,000 simultaneous connections: on a Percona Server

  • How algorithms help determine the shape of our world. First we encode the normative rules of an idealized world in algorithms. Second, those algorithms help enforce those expectations by nudging humans into acting accordingly. A fun example is the story of Ed Bolian's Record-Breaking Drive. Ed raced from New York to L.A. at speeds of up to 158 mph, "breaking countless laws – and the previous record, by more than two hours." His approach is one any nerd would love: three radar detectors, two laser jammers, two nav systems, a CB radio, a scanner, two iPhones and two iPads running applications like Waze, lookouts in the back seat scanning for cops, and someone scouting ahead. Awesome! For the moral of the story, they were going so fast that Ed said "AmEx froze my credit card. They didn't think I could've traveled from one station to another as fast as I did." A human didn't act as expected, so the human was denied access to the System. And that System is what will mediate all human interactions going forward. When we think of our AI mediated future, be aware that it will destroy as many degrees of freedom as it creates.

  • Not what you want to have happen when you've spent $4.5 million on a Super Bowl ad: Maserati's Ghibli Super Bowl Ad Crashes Maserati Website. They fell back to a YouTube video, which is a good strategy. Still, a nice looking car.

  • AWS cost savings alert: new prices for the M3 instances are now more cost-effective than the M1 instances. Take a look at rebalancing your instance portfolio.

  • Urs Hölzle with a little Google nostalgia: Back in 2000, the main ads database was on a single machine, f41. The ads group was five engineers back then, so we took turns carrying a pager. I had to abort a dinner date one night (it was Outback Steakhouse in Campbell) to come back to the Googleplex because f41 was wedged; Jeff Dean chips in with the story of his first visit to a datacenter

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge...
