
High Availability Principle: Concurrency Control

One important high availability principle is concurrency control. The idea is to admit only as much traffic into your system as it can handle successfully. For example, if your system is certified to handle a concurrency of 100, then the 101st request should either time out, be asked to retry later, or wait until one of the first 100 requests finishes. The 101st request must not degrade the experience of the other 100 users; only the 101st request should be affected. Read more here...
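One common way to enforce such a cap is a bounded semaphore guarding request admission. This is a minimal sketch, not the article's implementation; the limit of 100, the 2-second wait, and the `handle_request` / `process` names are assumptions chosen for illustration:

```python
import threading

MAX_CONCURRENT = 100  # the certified concurrency limit (assumed value)
WAIT_SECONDS = 2.0    # how long an excess request may wait before rejection (assumed)

# One slot per in-flight request; BoundedSemaphore raises if over-released.
slots = threading.BoundedSemaphore(MAX_CONCURRENT)

def handle_request(process):
    # Admit the request only if a slot frees up within WAIT_SECONDS;
    # otherwise reject it without disturbing the requests already in flight.
    if not slots.acquire(timeout=WAIT_SECONDS):
        return "503 Service Unavailable - please retry later"
    try:
        return process()
    finally:
        slots.release()
```

With this gate in place, the 101st concurrent request either waits briefly for a slot or receives a fast rejection, while the first 100 proceed unaffected.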


Reader Comments (1)

Thanks for the information. I'm sure this principle is relevant to any sphere; concurrency control can apply to business or production. I came to the conclusion that there are many principles that can be applied to everything.

September 6, 2010 | Unregistered CommenterPdfSE
