Thursday
Sep 9, 2010

How did Google Instant become Faster with 5-7X More Results Pages?

We don't have a lot of details on how Google pulled off their technically very impressive Google Instant release, but in Google Instant behind the scenes, they did share some interesting facts:

  • Google was serving more than a billion searches per day.
  • With Google Instant they served 5-7X more results pages than previously.
  • Typical search results were returned in less than a quarter of a second.
  • A team of 50+ worked on the project for an extended period of time.

Although Google is associated with muscular data centers, they didn't just throw more server capacity at the problem; they worked smarter too. What were their general strategies?

Click to read more ...

Thursday
Sep 9, 2010

6 Scalability Lessons

Jesper Söderlund not only put together a few interesting scalability patterns, he also came up with a few interesting scalability lessons:

  • Lesson #1. Put Smarty compile and template caches on an active-active DRBD cluster with high load and your servers will DIE!
  • Lesson #2. Don't use out-of-the-box configurations.
  • Lesson #3. Single points of contention will eventually become a bottleneck.
  • Lesson #4. Plan in advance. 
  • Lesson #5. Offload your databases as much as possible.
  • Lesson #6. File systems matter and can run out of space / inodes.

For more details and explanations see the original post.

Wednesday
Sep 8, 2010

4 General Core Scalability Patterns

Jesper Söderlund put together an excellent list of four general scalability patterns and four subpatterns in his post Scalability patterns and an interesting story:

  • Load distribution - Spread the system load across multiple processing units
    • Load balancing / load sharing - Spreading the load across many components with equal properties for handling the request
    • Partitioning - Spreading the load across many components by routing an individual request to the component that owns the specific data it needs
      • Vertical partitioning - Spreading the load across the functional boundaries of a problem space, separate functions being handled by different processing units
      • Horizontal partitioning - Spreading a single type of data element across many instances, according to some partitioning key, e.g. hashing the player id and taking a modulus. Quite often referred to as sharding (see the sketch after this list).
  • Queuing and batch - Achieve efficiencies of scale by processing batches of data, usually because the overhead of an operation is amortized across multiple requests
  • Relaxing of data constraints - Many different techniques and trade-offs with regard to the immediacy of processing / storing / accessing data fall under this strategy
  • Parallelization - Work on the same task in parallel on multiple processing units
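
To make the horizontal partitioning (sharding) idea concrete, here is a minimal sketch in Python of the hash-the-key-and-take-a-modulus routing mentioned above. The shard names and the player-id key format are hypothetical, invented only for illustration; they are not from Jesper's post.

```python
import hashlib

# Hypothetical shard pool; names and count are illustrative only.
SHARDS = ["db-shard-0", "db-shard-1", "db-shard-2", "db-shard-3"]

def shard_for(player_id: str) -> str:
    """Route a key to a shard: hash the player id, then take a modulus."""
    digest = hashlib.md5(player_id.encode("utf-8")).hexdigest()
    index = int(digest, 16) % len(SHARDS)
    return SHARDS[index]

# Example: all reads and writes for this player land on the same shard.
print(shard_for("player:42"))
```

The obvious weakness of plain modulus routing is that changing the number of shards remaps most keys, which is exactly the problem consistent hashing (covered in a later post below) tries to address.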

For more details and explanations see the original post.

Tuesday
Sep 7, 2010

Sponsored Post: deviantART, Okta, CloudSigma, ManageEngine, Site24x7

Who's Hiring?

  • deviantART is Hiring a Senior Software Engineer.
  • Okta is hiring! Okta provides a ground-breaking cloud adoption and management solution and they are looking for people in many different areas.

Cool Products and Services

Click to read more ...

Sunday
Sep 5, 2010

Hilarious Video: Relational Database vs NoSQL Fanbois

This is so funny I laughed until I cried! Definitely NSFW. OMG it's hilarious, but it's also not a bad overview of the issues. Especially loved: You read the latest post on HighScalability.com and think you are a f*cking Google architect and parrot slogans like Web Scale and Sharding but you have no idea what the f*ck you are talking about. There are so many more gems like that.

Thanks to Alex Popescu for posting this on MongoDB is Web Scale. Whoever made this deserves a Webby.

Friday
Sep 3, 2010

Hot Scalability Links For Sep 3, 2010

With summer almost gone, it's time to fall into some good links...

  • Hibari - a distributed, fault-tolerant, highly available key-value store written in Erlang. In this video Scott Lystig Fritchie gives a very good overview of the newest key-value store. 
  • Tweets of Gold
    • lenidot: with 12 staff, @tumblr serves 1.5billion pageviews/month and 25,000 signups/day. Now that's scalability!
    • jmtan24: Funny that whenever a high scalability article comes out, it always mention the shared nothing approach
    • mfeathers: When life gives you lemons, you can have decades-long conquest to convert lemons to oranges, or you can make lemonade.
    • OyvindIsene: Met an old man with mustache today, he had no opinion on #noSQL. Note to myself: Don't grow a mustache, now or later. 
    • vlad003: Isn't it interesting how P2P distributes data while Cloud Computing centralizes it? And they're both said to be the future.
  • You may be interested in a new DevOps Meetup organized by Dave Nielson, so you know it will be good.

Click to read more ...

Friday
Sep 3, 2010

Six Guiding Principles to Consolidate Your IT

The need for IT consolidation is most evident in two types of organizations. In the first group, IT grew organically with the business over the decades and survived changes of strategy, management, staff and vendor orientation. The second group of businesses, capital groups, is characterized by rapid growth through acquisitions (followed by attempts to integrate radically different IT environments). In both groups, their IT infrastructures have typically been pieced together over the past 20 (or more) years.

Read more on BigDataMatters.com

Thursday
Sep 2, 2010

Distributed Hashing Algorithms by Example: Consistent Hashing

Consistent Hashing is a specific implementation of hashing that is well suited for many of today’s web-scale load balancing problems. Specifically, it can be seen in use in various caching solutions like Memcached and is applicable to NoSQL solutions as well. Consistent Hashing is used particularly because it addresses the shortcomings of the typical “hashcode mod n” method of distributing keys across a series of servers: it allows servers to be added or removed without significantly upsetting the distribution of keys, and it does not require that all keys be rehashed to accommodate the change in the number of servers.
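
As a rough illustration of the idea (a toy sketch in Python, not the implementation used by Memcached clients or any particular NoSQL system; the class and method names are invented for the example), here is a minimal consistent hash ring with virtual nodes:

```python
import bisect
import hashlib

class ConsistentHashRing:
    """A minimal consistent hash ring using virtual nodes (replicas)."""

    def __init__(self, nodes=None, replicas=100):
        self.replicas = replicas   # virtual nodes per physical node
        self._ring = {}            # hash position -> node name
        self._sorted_keys = []     # sorted hash positions
        for node in nodes or []:
            self.add_node(node)

    def _hash(self, key: str) -> int:
        return int(hashlib.md5(key.encode("utf-8")).hexdigest(), 16)

    def add_node(self, node: str) -> None:
        # Place several virtual points for the node around the ring.
        for i in range(self.replicas):
            pos = self._hash(f"{node}#{i}")
            self._ring[pos] = node
            bisect.insort(self._sorted_keys, pos)

    def remove_node(self, node: str) -> None:
        for i in range(self.replicas):
            pos = self._hash(f"{node}#{i}")
            del self._ring[pos]
            self._sorted_keys.remove(pos)

    def get_node(self, key: str) -> str:
        """Walk clockwise from the key's position to the first node point."""
        if not self._sorted_keys:
            raise ValueError("ring is empty")
        pos = self._hash(key)
        idx = bisect.bisect(self._sorted_keys, pos) % len(self._sorted_keys)
        return self._ring[self._sorted_keys[idx]]

# Usage: when a node leaves, only the keys that mapped to its virtual
# points move; the rest keep their old assignment.
ring = ConsistentHashRing(["cache-a", "cache-b", "cache-c"])
print(ring.get_node("user:1234"))
ring.remove_node("cache-b")
print(ring.get_node("user:1234"))
```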

You can read the full story here.

Wednesday
Sep 1, 2010

Scale-out vs Scale-up

In this post I'll cover the difference between multi-core concurrency, often referred to as Scale-Up, and distributed computing, often referred to as Scale-Out.

more..

Wednesday
Sep 1, 2010

Paper: The Case for Determinism in Database Systems  

Can you have your ACID cake and eat your distributed database too? Yes, explains Daniel Abadi, Assistant Professor of Computer Science at Yale University, in an epic post, The problems with ACID, and how to fix them without going NoSQL, coauthored with Alexander Thomson, on their paper The Case for Determinism in Database Systems. We've already seen VoltDB offer the best of both worlds; this sounds like a completely different approach.

The solution, they propose, is: 

Click to read more ...