Friday
Apr 3, 2009

Collectl interface to Ganglia - any interest?

It's been a while since I've said anything about collectl, and I wanted to let this group know I'm currently working on an interface to ganglia, since I've seen a variety of posts about how much data to log, where to log it, and which tools/mechanisms to use for logging. From my perspective there are essentially two camps on the monitoring front. One says to have distributed agents all sending their data to a central point, but don't send too much or too often. The other camp (which is the one I'm in) says do it all locally with a highly efficient data collector, because you need a lot of data (I also read a post in here about logging everything) and you can't possibly monitor 100s or 1000s of nodes remotely at the granularity necessary to get anything meaningful.

Enter collectl and its evolving interface for ganglia. This will allow you to log lots of detailed data on local nodes at the usual 10-second interval (or more frequently if you prefer) at about 0.1% system overhead, while sending a subset at a lower rate to the ganglia gmonds. This would give you the best of both worlds, but I don't know if people are too married to the centralized concept to try something different. I don't know how many people who follow this forum have actually tried it - I know at least a few of you have - but to learn more just go to http://collectl.sourceforge.net/ and look at some of the documentation, or just download the rpm and type 'collectl'. -mark
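Since the collectl-to-ganglia interface was still in development when this was posted, here is only a rough sketch of the two-tier idea it describes - sample frequently and cheaply on the local node, forward a coarse summary to ganglia at a lower rate. The metric name and intervals are invented, and ganglia's stock gmetric CLI stands in for whatever forwarding mechanism collectl ends up using:

```python
# Hypothetical illustration, not collectl's implementation: keep a
# detailed local record every 10s, but only forward a 60s average
# to the local gmond via ganglia's gmetric utility.
import subprocess
import time

SAMPLE_SECS = 10      # fine-grained local sampling interval
FORWARD_EVERY = 6     # forward 1 summary per 6 samples (60s)

def read_loadavg() -> float:
    """Read the 1-minute load average from /proc."""
    with open("/proc/loadavg") as f:
        return float(f.read().split()[0])

samples = []
while True:
    samples.append(read_loadavg())          # detailed data stays local
    if len(samples) == FORWARD_EVERY:
        avg = sum(samples) / len(samples)   # coarse subset for ganglia
        subprocess.run(["gmetric", "--name", "loadavg_1min",
                        "--value", f"{avg:.2f}", "--type", "float"])
        samples.clear()
    time.sleep(SAMPLE_SECS)
```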


Wednesday
Apr 1, 2009

Art of scalability (1) - Scalability principles

Monday
Mar 30, 2009

Ebay history and architecture

eBay[1] started in 1995 under the initial name AuctionWeb (V1):

- very simple architecture
- based on Perl
- no database; plain files were used for data persistence

Because of rapid growth they needed to improve their architecture, and so V2 (clever name) was born:

- replaced Perl with C/C++
- started using a database in a master-slave configuration
- C++ back end
- XSLT front end

Any request leads to an XML file being created in C++, and the XSLT processor transforms that into HTML. *Pretty sophisticated architecture for the 90s - XSLT was cutting-edge back then.*

[diagram: eBay V2]

That held up pretty well for a while, but in the late 90s eBay experienced exponential growth. They started having some trouble with outages and needed improvements, so V3 was developed:

- based on Java
- the search engine still used C++
- proof that relational databases can scale (with aggressive caching)
- developed a messaging layer for making a lot of asynchronous calls; they actually ended up being sued because of the delay between an item being posted and its images appearing on the site :-)
- although they switched to Java, the basic principle of generating XML files for each request was still used

[diagram: eBay V3]

Combining the need for a multilingual website with the boom of AJAX technologies and Flash applications, they started to doubt their XSL system and moved on to what became, in 2006, V4 of eBay:

- replace everything that could be replaced with Java; eBay loves Java

Everything is Java (code):

- Image - Java class
- Link - Java class
- JavaScript - Java classes
- Content - Java classes

=> lots of code to write; they're using Eclipse for development.

[diagram: eBay V4]

For more details check: Eclipse at eBay and Tailoring Eclipse to the eBay architecture.

[1] Images and ideas/info are from the links above.
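As a side note, the V2/V3 request flow - back-end code emits an XML document, an XSLT stylesheet renders it to HTML - may look unusual today. Here is a minimal, self-contained sketch of that general pattern using Python's lxml; the item document and stylesheet are invented for illustration and have nothing to do with eBay's actual templates:

```python
# Minimal XML -> XSLT -> HTML pipeline (illustrative only).
from lxml import etree

# The "back end" produces an XML description of the page data.
item_xml = etree.XML(
    "<item><title>Vintage clock</title><price>12.50</price></item>")

# The "front end" is an XSLT stylesheet that renders XML to HTML.
stylesheet = etree.XML("""\
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/item">
    <html><body>
      <h1><xsl:value-of select="title"/></h1>
      <p>Price: $<xsl:value-of select="price"/></p>
    </body></html>
  </xsl:template>
</xsl:stylesheet>""")

transform = etree.XSLT(stylesheet)
print(str(transform(item_xml)))  # the rendered HTML page
```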


Monday
Mar 30, 2009

Lavabit Architecture - Creating a Scalable Email Service

Ladar Levison of Lavabit has written an incredible article on how they took a centralized off-the-shelf email server that could handle only a few thousand users and built their own custom distributed infrastructure for handling hundreds of thousands of email users. Lavabit processes 70 gigabytes of data per day, is made up of 26 servers, hosts 260,000 email addresses, and processes 600,000 emails a day. That's a lot of email.

Lavabit's mission has a little edge to it too:

Lavabit was founded as a direct reaction to the larger free e-mail services available. We felt it was possible to create an e-mail service that was fast, reliable, feature rich and didn't achieve profitability by prostituting its user base to marketers.

What I really like about this article is that Lavabit has some challenging elements in dealing with different email protocols while being able to scale to a lot of users. There's more going on than just trying to scale out a database. Many products contain complicated bits like this, so it's interesting to see how Ladar handled them. There are lots of useful details that will help anyone build their own system. Putting in this extra work is what Ladar thinks makes Lavabit different:

One of the ways to gain an advantage over your competition is to invest the time and money needed to build systems that are better than what is easily available to your competition. It is the custom platform we developed that has allowed us to thrive while many other free email companies either stopped offering their service for free, or shut down altogether.

Since Ladar was so thorough, I saved the article as a separate HTML file. Please select the visit link to read the entire article. I'd like to thank Ladar again for taking the time and making the effort to document their architecture for the benefit of the community at large to learn from.

Thursday
Mar 26, 2009

Performance - When do I start worrying?

A common problem for application designers is predicting when to start worrying about architectural and system improvements to their application. Do I need to add more resources? If yes, how long before I am compelled to do so? The question is not only when but also what. Should I plan to implement a true caching layer on top of my application, or do I need to shard my database? Do I need to move to a distributed search infrastructure, and if so, when? Essentially, we try to identify which parts of the application will become critical over time.


Wednesday
Mar 25, 2009

Advertising

Advertising Opportunities on High Scalability

Premium Advertisers:

High Scalability Premium Advertisements are the graphical advertisements located on the left side of the page. We currently have a limit of four premium spots for both the blog and forums. For current ad rates, please contact us via our contact page.

Premium Advertiser Rules:

* One of four advertisements on the left
* Ad will receive a 100% share of page impressions in either the blog or forum section
* Ad creative is 125 x 125 pixels
* 8K file size limit
* No animation or Flash ads
* We ask that ads not be too distracting from the content

Current Availability:

Ad slots are available at this time; feel free to contact us for more information on ad rates and traffic data.


Tuesday
Mar 24, 2009

Scalability Perspectives #6: Lew Tucker – Virtual Data Centers in the Open Cloud

Scalability Perspectives is a series of posts that highlights the ideas that will shape the next decade of IT architecture. Each post is dedicated to a thought leader of the information age and his vision of the future. Be warned though – the journey into the minds and perspectives of these people requires an open mind.

Lew Tucker

Lew Tucker is the Vice President and CTO of Sun Microsystems' Cloud Computing initiative. Lew's career has been focused on scalable computing and web development. Lew worked at Sun Microsystems through the 1990s. In 2002, Lew joined Salesforce.com and led the design and implementation of App Exchange. After Salesforce.com, Lew was CTO at Radar Networks, where he focused on the scalable design and build-out of its semantic web service.

The Sun Cloud API

Sun has recently announced its open RESTful API for creating and managing cloud resources, including compute, storage, and networking components. Lew and his team are busy implementing Sun's cloud offering. His background and experience from the creation of the Salesforce App Exchange have helped the team create a simple but flexible set of APIs. Lew envisions open, interoperable clouds based on community standards such as the Cloud API.

Your Own Virtual Data Center in the Cloud

Lew Tucker has demonstrated the use of the Sun Cloud by building a Virtual Data Center out of virtual servers, switches, and storage. He commented on the announcement in this interview. Check out the information sources below to understand Lew Tucker's perspective on Cloud Computing and learn more about the relationship between Sun and the Clouds. The TechCrunch roundtable is especially interesting.

Information Sources


Friday
Mar 20, 2009

Alternate strategy for database sharding

An alternate strategy for database sharding that avoids querying across different shards and merging the results: a central repository of data is maintained for some tables alongside the other shards. It can be used for calculating top users, recent users, most-read items, and so on.
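As a rough sketch of the idea (the schema, the shard_for() routing helper, and the use of SQLite here are all invented for illustration, not taken from the original post): writes go to the owning shard as usual, but a small summary table in the central repository is updated at the same time, so a global ranking comes from one query instead of a scatter/gather across every shard.

```python
# Sketch: write path that updates both the user's shard and a small
# central repository used for global aggregates (e.g. top users).
import sqlite3

NUM_SHARDS = 4
shards = [sqlite3.connect(f"shard{i}.db") for i in range(NUM_SHARDS)]
central = sqlite3.connect("central.db")

for db in shards:
    db.execute("CREATE TABLE IF NOT EXISTS posts (user_id INT, body TEXT)")
central.execute(
    "CREATE TABLE IF NOT EXISTS post_counts (user_id INT PRIMARY KEY, n INT)")

def shard_for(user_id: int) -> sqlite3.Connection:
    """Route a user to a shard (hypothetical modulo scheme)."""
    return shards[user_id % NUM_SHARDS]

def add_post(user_id: int, body: str) -> None:
    # Detail row lives on the shard; the central summary is bumped too.
    shard_for(user_id).execute(
        "INSERT INTO posts VALUES (?, ?)", (user_id, body))
    central.execute(
        "INSERT INTO post_counts VALUES (?, 1) "
        "ON CONFLICT(user_id) DO UPDATE SET n = n + 1", (user_id,))

def top_users(limit: int = 10):
    # One query against the central repository -- no cross-shard merge.
    return central.execute(
        "SELECT user_id, n FROM post_counts ORDER BY n DESC LIMIT ?",
        (limit,)).fetchall()
```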


Thursday
Mar 19, 2009

Product: Redis - Not Just Another Key-Value Store

With the introduction of Redis, your options in the key-value space just grew, and your choice of which to pick just got a lot harder. But when you think about it, that's not a bad position to be in at all. Redis (REmote DIctionary Server) is a key-value database. It's similar to memcached, but the dataset is not volatile, and values can be strings, exactly as in memcached, but also lists and sets with atomic operations to push/pop elements. The key points are:

- open source
- speed (benchmarked at 110,000 SET operations and 81,000 GETs per second)
- persistence, handled asynchronously while the whole dataset is kept in memory
- support for higher-level data structures and atomic operations

The home page is well organized, so I'll spare the excessive-copying-to-make-this-post-longer. For a good overview of Redis take a look at Antonio Cangiano's article: Introducing Redis: a fast key-value database. If you are looking for a way to understand how Redis is different from something like Tokyo Cabinet/Tyrant, Ezra Zygmuntowicz has done a good job explaining their different niches:

Redis has a different use case than Tokyo does. I think Tokyo is better for long-term persistent data storage. But Redis is much better as a state/data structure server. Redis is faster than Tokyo by quite a bit, but is not immediately durable, as the writes to disk happen in the background at certain trigger points, so it is possible to lose a little bit of data if the server crashes. But the power of Redis comes from its data types. Having LISTS and SETS as well as string values for keys means you can do O(1) push/pop/shift/unshift as well as indexing/slicing into lists. And with the SET data type you can do set intersection in the server. This allows for very cool things like storing a set of tags for each key and then querying for the intersection of the sets of tags of multiple keys to find the common set of tags. I'm using this in nanite (an agent-based messaging system) for the persistent storage of agent state and routing info. Using the SET data type makes this faster than keeping the state in memory in my Ruby processes, since I can do the SET intersection routing inside of Redis rather than iterating and comparing in Ruby: http://gist.github.com/77314.

You can also use Redis LISTS as a queue to distribute work between multiple processes. Since pushing and popping are atomic, you can use it as a shared tuple space. Also, you can use a LIST as a circular log buffer by pushing to the end of the list and then doing an LTRIM to trim the list to the max size: redis.list_push_tail('logs', 'some log line..'); redis.list_trim('logs', 0, 99). That will keep your circular log buffer at a max of 100 items.

With Tokyo you can store lists if you use Lua on the server, but to push or pop from a list you have to pull the entire list down, alter it, and then push it back up to the server. There is no atomic push/pop of just a single item, because Tokyo cannot store real lists as values, so you have to marshal your list to/from a string.
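Here is a minimal redis-py sketch of the two patterns from the quote - server-side tag intersection and a capped log list. The key names are invented, and the trim indices are chosen to keep the newest 100 entries (newest first), the usual way this pattern is written:

```python
# Illustrative sketch using the redis-py client; key names are made up.
import redis

r = redis.Redis(host="localhost", port=6379)

# Tag sets: store a set of tags per item, intersect them on the server.
r.sadd("tags:item:1", "ruby", "messaging", "queue")
r.sadd("tags:item:2", "ruby", "queue", "caching")
common = r.sinter("tags:item:1", "tags:item:2")
print(common)  # e.g. {b'ruby', b'queue'}

# Circular log buffer: push new entries to the head, then trim so only
# the 100 most recent remain (index 0 is always the newest line).
r.lpush("logs", "some log line..")
r.ltrim("logs", 0, 99)
recent = r.lrange("logs", 0, -1)
```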
A demo application called Retwis shows how to use Redis to make a scalable Twitter clone, at least its message-posting aspects. There's a lot of good low-level detail on how Redis is used in a real application.


Wednesday
Mar 18, 2009

QCon London 2009: Upgrading Twitter without service disruptions

Evan Weaver from Twitter presented a talk on Twitter software upgrades, titled Improving running components, as part of the Systems that never stop track at the QCon London 2009 conference last Friday. The talk focused on several upgrades performed since last May, while Twitter was experiencing serious performance problems.
