
11 Uses For the Humble Presents Queue, er, Message Queue

It's a little known fact that Santa Claus was an early queue innovator. Faced with the problem of delivering a planet full of presents in one night, Santa, in his hacker's workshop, created a Present Distribution System using thousands of region-based priority present queues for continuous delivery by the Rudolphs. Rudolphs? You didn't think there was only one Rudolph, did you? Presents are delivered in parallel by a cluster of sleighs, each with redundant reindeer in a master-master configuration. Each Rudolph is a cluster leader, and they coordinate work using an early and more magical version of the ZooKeeper protocol.

Programmers have followed Santa's lead, and you can find a message queue in nearly every major architecture profile on HighScalability. Historically they were often introduced when a first-generation architecture needed to scale up from a two-tier system into something a little more capable (asynchronicity, work dispatch, load buffering, database offloading, etc.). If software architecture has anything like a standard structural component, the equivalent of an arch or a beam in building architecture, it's the message queue.

An article from Iron.io, Top 10 Uses For A Message Queue, has a nice summary of why message queues are so dang useful:

  1. Decoupling. Producers and consumers are independent and can evolve and innovate separately, at their own rates. 
  2. Redundancy. Queues can persist messages until they are fully processed.
  3. Scalability. Scaling is achieved simply by adding more queue processors. 
  4. Elasticity & Spikability. Queues soak up load until more resources can be brought online. 
  5. Resiliency. Decoupling implies that failures are not linked. Messages can still be queued even if there's a problem on the consumer side.
  6. Delivery Guarantees. Queues make sure a message will be consumed eventually, and can even implement higher-level properties like at-most-once delivery.
  7. Ordering Guarantees. Coupled with publish-and-subscribe mechanisms, queues can provide message ordering guarantees to consumers.
  8. Buffering. A queue acts as a buffer between writers and readers. Writers can write faster than readers can read, which helps control the flow of processing through the entire system. 
  9. Understanding Data Flow. By looking at the rate at which messages are processed, you can identify areas where performance may be improved. 
  10. Asynchronous Communication. Writers and readers are independent of each other, so writers can just fire and forget while readers process work at their leisure.
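Several of the properties above, notably decoupling, buffering, and asynchronous communication, show up even in a minimal sketch. Here's one using Python's standard-library `queue.Queue` as a stand-in for a real message broker (the doubling "work" is purely illustrative):

```python
import queue
import threading

def producer(q: queue.Queue, items):
    # Fire and forget: the producer enqueues work and moves on,
    # never waiting for the consumer (asynchronous communication).
    for item in items:
        q.put(item)
    q.put(None)  # sentinel: no more work

def consumer(q: queue.Queue, results):
    # The consumer drains the queue at its own pace; the queue
    # buffers any backlog between the two sides (decoupling).
    while True:
        item = q.get()
        if item is None:
            break
        results.append(item * 2)  # stand-in for real work

q = queue.Queue(maxsize=100)  # a bounded queue also gives back-pressure
results = []
t = threading.Thread(target=consumer, args=(q, results))
t.start()
producer(q, range(5))
t.join()
print(results)  # FIFO order preserved: [0, 2, 4, 6, 8]
```

A real broker adds persistence, delivery guarantees, and network transport on top, but the producer/consumer shape is the same.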

Here's a bonus use from praptak in a Hacker News thread:

  • Punch through. Ability to route through very restrictive network setups ("galvanically isolated" networks.) MQs, being latency insensitive, can even go over protocols like e-mail.
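Because queues are latency-insensitive, almost any store-and-forward medium can carry them across an isolated boundary. As a rough sketch (not how any particular MQ implements it), here is a spool-directory transport: one side drops messages as files, and the other side picks them up whenever it next polls, whether that's seconds or hours later. Function names and message fields are illustrative.

```python
import json
import os
import tempfile
import uuid

def enqueue(spool_dir: str, message: dict) -> None:
    # Write atomically: create under a temp name, then rename, so a
    # reader never observes a half-written message file.
    path = os.path.join(spool_dir, f"{uuid.uuid4().hex}.msg")
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        json.dump(message, f)
    os.rename(tmp, path)

def drain(spool_dir: str):
    # The consumer polls whenever it can reach the directory;
    # delivery latency is unbounded and the queue doesn't care.
    for name in sorted(os.listdir(spool_dir)):
        if not name.endswith(".msg"):
            continue
        path = os.path.join(spool_dir, name)
        with open(path) as f:
            yield json.load(f)
        os.remove(path)

spool = tempfile.mkdtemp()
enqueue(spool, {"task": "resize", "id": 1})
enqueue(spool, {"task": "email", "id": 2})
messages = list(drain(spool))
```

The same pattern works over any medium that can move a blob eventually, which is why e-mail, as the comment notes, can serve as a queue transport.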


Reader Comments (4)

Message queues are the best glue for systems of all sizes and shapes.

December 17, 2012 | Unregistered Commentereo

Interesting thanks. Since this was put together by Iron.io, has anyone used IronMQ? What are the advantages over say, Amazon's SQS?

December 18, 2012 | Unregistered CommenterSimos

[disclaimer: I work for Iron.io]

Thanks for the question. IronMQ has many advantages over SQS. The importance of each advantage depends on your use case, but here's a short list:

- FIFO delivery of messages
- Guaranteed one-time delivery
- Much lower latency (if you are on AWS or Rackspace)
- Multi-cloud (AWS, Rackspace, and more coming)
- Better dashboards and reports
- Built-in push capabilities (not a separate service like SNS)
- Queue POST Webhooks (put messages into queues using HTTP webhooks)
- Tight integration with IronWorker (scale-out processing worker system as a service)

We're also quickly innovating the product to add important features. Messaging and queueing is our thing and we plan to be/stay the best.

Thanks for your interest in Iron.io.

Chad Arimura

February 6, 2013 | Unregistered CommenterChad Arimura

Iron.io claims FIFO, but there are a number of edge cases that will deliver messages out of order. When I demonstrated it to them, they said, "yes, our FIFO is for normal cases only." Also, if you care about latency, a client can only enqueue about 35 messages/second.

July 13, 2014 | Unregistered Commenterbp
