Entries from October 7, 2007 - October 13, 2007


How Flickr Handles Moving You to Another Shard

Colin Charles has a cool picture showing Flickr's message telling him they'll need about 15 minutes to move his 11,500 images to another shard. One, that's a lot of pictures! Two, it just goes to show you don't have to make this stuff complicated. Sure, it might be nice if their infrastructure could auto-balance shards with no downtime and no loss of performance, but do you really need all that extra complexity? The manual system works, and though Colin would probably have liked his service to stay up, I am sure his day will still be a pleasant one.



WAN Accelerate Your Way to Lightning Fast Transfers Between Data Centers

How do you keep a crescendo of data in sync between data centers over a slow WAN? That's the question Alberto posted a few weeks ago. Normally I'm not into boy bands, but I was frustrated there wasn't a really good answer to his problem. It occurred to me later that a WAN accelerator might help turn his slow WAN link into something more like a LAN, so the overhead of copying files across the WAN wouldn't be so limiting. Many might not consider a WAN accelerator in this situation, but since my friend Damon Ennis works at the WAN accelerator vendor Silver Peak, I thought I would ask him if their product would help. Not surprisingly, his answer is yes! Potentially by a lot, depending on the nature of your data. Here's a no-BS overview of their product:

  • What is it? - A scalable WAN accelerator appliance from Silver Peak.
  • What does it do? - You can send 5x-100x more data across your expensive, low-bandwidth WAN link.
  • Why should you care? - Your data centers become more like co-located real-time peers. - You can sync a lot more media and other large files across data centers: a 50x improvement in data replication performance over a WAN. - You may be able to operate on a remote database more like a local database: a 5x-20x improvement in SQL data manipulation and unique query performance. - A 2 hour database backup could take 4 minutes: a 10x-30x improvement in transferring large data sets over SQL. A good disaster planning feature.
  • How does it work? - You buy an accelerator appliance for both sides of your link. All your WAN traffic flows through these boxes. - The appliances then use various techniques to effectively decrease latency and increase bandwidth across the link: -- Traffic reduction. Accelerators look for patterns in data across a link, cache the data on either side of the link, and then don't resend data when similar patterns are seen again. This can lead to a 90% reduction in traffic. -- Compression. Data is compressed across the link. Compression ratios from none up to 2x-5x are seen, depending on the content type. -- TCP manipulation. The TCP/IP protocol is gamed to yield better performance. For example, a proxy on both sides is used to get a bigger window size. -- Application manipulation. Various application protocols, like CIFS, NFS, and Outlook, can be gamed to improve performance.
  • How much does it cost? - $10k to $130k per box: $10k for the 2Mbps appliance and $130k for the 500Mbps one. - They are the scale leaders and are specifically good at high-end (> 50Mbps) replication.
  • Who uses it? - Fidelity Bank, Ernst & Young, Panasonic.
  • Is it for real? - Yes. It works and is installed and running in many data centers.
  • How do you get it? - Contact sales at
  • Where do you go for more information? - Their White Paper Directory - Understanding WAN Acceleration Techniques
  • Is there anything else interesting you should know? - The appliance performs encryption and compression, so you don't need to perform those functions on your own CPUs. - The appliances fail-to-wire, so if a box fails traffic passes through unaccelerated. If you can't live with that, you need to buy 2 boxes per end of the link (4 boxes total).
  • How much will you benefit? - The more duplication in your data, the better job they can do. There's tons of duplicated data in a database feed, for example, so they can really help supercharge database performance. - Latency/time improvements depend on the link. The higher the link's latency, the less of its bandwidth you can use. For example, a 100ms link is limited to about 5Mbps of throughput per flow due to the TCP window size (64KB / 100ms ~ 5Mbps). They can take this to several hundred Mbps per flow. - Image files are often pre-compressed. As compression removes duplicate information, they can't be as efficient at de-duplication as in other scenarios, though they can still improve throughput. An interesting side-effect of speeding up the WAN link is that it often reveals bottlenecks in other parts of the system. A slow WAN might be hiding:
  • Underpowered servers. Servers that could process a trickle of data may be overwhelmed by a flood of data.
  • Slow applications. Apps that could pump data at slow WAN speeds may not be able to drive a faster WAN. You may need to take a look at your software architecture or storage network.
  • Underpowered server links. Accelerate a 2Mbps link to a 20Mbps link and the network infrastructure on the data center side may not be able to handle the truth.

Obviously the cost of the solution means it's targeted more at moderately sized companies or a service provider offering their customers a quality upsell. But if you are stuck wondering how the heck you are going to squeeze more bits between your data centers, it may be just the magic bullet you need.
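The traffic-reduction technique described above can be sketched in a few lines. This is a toy model of the idea, not Silver Peak's actual algorithm: each side of the link remembers fixed-size chunks it has already seen, and a repeated chunk is replaced by a short hash reference instead of being sent again.

```python
import hashlib

class DedupLink:
    """Toy model of WAN-accelerator traffic reduction: chunks already seen
    by the far side are sent as 20-byte SHA-1 references, not raw bytes."""

    def __init__(self, chunk_size=64):
        self.chunk_size = chunk_size
        self.seen = set()        # chunk digests the far side already has
        self.bytes_on_wire = 0

    def send(self, data: bytes) -> None:
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            digest = hashlib.sha1(chunk).digest()
            if digest in self.seen:
                self.bytes_on_wire += len(digest)  # reference only
            else:
                self.seen.add(digest)
                self.bytes_on_wire += len(chunk)   # full chunk

link = DedupLink()
link.send(b"A" * 6400)       # a highly repetitive payload
print(link.bytes_on_wire)    # prints 2044: roughly a 68% reduction
```

Real accelerators chunk adaptively and keep their dictionaries on disk, but the payoff is the same: the more repetition in the traffic, the fewer bytes cross the wire.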
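The 64KB / 100ms figure quoted above is just the standard window-size arithmetic: TCP can only keep one window of data in flight per round trip, so per-flow throughput is bounded by window size divided by RTT.

```python
def max_tcp_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Per-flow TCP ceiling: at most one window of data in flight per
    round trip, so throughput <= window / RTT."""
    bits_in_flight = window_bytes * 8
    return bits_in_flight / (rtt_ms / 1000) / 1e6

# A standard 64KB window on a 100ms link tops out around 5Mbps,
# matching the figure quoted above.
print(round(max_tcp_throughput_mbps(64 * 1024, 100), 2))  # prints 5.24
```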


  • Tuesday

    High Load on Production Webservers After Source Code Sync

    Hi everybody :) We have a bunch of webservers (about 14 at this time) running Apache. As our application framework we're using PHP with the APC cache installed to improve performance. For load balancing we're using a big F5 system with dynamic ratio (SNMP driven). To sync new/updated source code we're using Subversion to "automatically" update these servers with our latest software releases. After pushing new source to these production servers, the load on the machines goes through the roof. While updating, the servers are still in "production", serving webpages to users; otherwise the process of updating would take ages. Because of this issue we mostly update only in the morning hours while fewer users are online. My guess is that the load climbs so high because APC needs to recompile a bunch of new files each time, and before and while compiling, performance is simply "bad". My goal is to find a better solution: we want to sync code no matter how many users are online (in case of emergency) without taking the whole site down. How are you handling these issues? What do you think about the process above? Can you spot the "problem"? Do you have similar issues? Feedback is highly welcome :) Greetings, Stephan Tijink Head of Web Development | fotocommunity GmbH & Co. KG | Rheinwerkallee 2 | 53227 Bonn
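One common way to attack the "half-updated tree while serving traffic" half of this problem (a generic pattern, not something from the post; the function and path names are made up) is to check each release out into its own directory and then flip a symlink, so the docroot changes in one atomic step:

```python
import os

def atomic_swap(current_link: str, new_release_dir: str) -> None:
    """Repoint the docroot symlink at a freshly checked-out release in a
    single rename, so Apache never serves a half-updated source tree."""
    tmp = current_link + ".tmp"
    if os.path.lexists(tmp):
        os.remove(tmp)                 # clear any leftover from a crash
    os.symlink(new_release_dir, tmp)
    os.replace(tmp, current_link)      # rename(2) is atomic on POSIX
```

Each svn export lands in its own releases directory first, so the swap itself takes microseconds. The APC recompile spike is a separate problem; a common mitigation is to stagger the swap across the 14 servers a few at a time behind the F5, so the whole pool never runs with a cold cache at once.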



    Paper: Understanding and Building High Availability/Load Balanced Clusters

    A superb explanation by Theo Schlossnagle of how to deploy a highly available, load balanced system using mod_backhand and Wackamole. The idea is that you don't need to buy expensive redundant hardware load balancers; you can use the hosts you already have to the same effect. The discussion of peer-based HA solutions versus a single front-end HA device is well worth the read. Another interesting perspective in the document is viewing load balancing as a resource allocation problem. There's also a nice discussion of the negative effect of keep-alives on performance.



    Lessons from Pownce - The Early Years

    Pownce is a new social messaging application competing micromessage to micromessage with the likes of Twitter and Jaiku. Still in closed beta, Pownce has generously shared some of what they've learned so far. Like going to a barrel tasting of a young wine and then tasting the same wine after some aging, I think what will be really interesting is to follow Pownce and compare the Pownce of today with the Pownce of tomorrow, after a few years spent in the barrel. What lessons lie in wait for Pownce as they grow? Site:

    Information Sources

  • Pownce Lessons Learned - FOWA 2007
  • Scoble on Twitter vs Pownce
  • Founder Leah Culver's Blog

    The Platform

  • Python
  • Django for the website framework
  • Amazon's S3 for file storage.
  • Adobe AIR (Adobe Integrated Runtime) for desktop application
  • Memcached
  • Available on Facebook
  • Timeplot for charts and graphs.

    The Stats

  • Developed in 4 months and went to an invite-only launch in June.
  • Began as Leah's hobby project and then it snowballed into a real horse with the addition of Digg's Daniel Burka and Kevin Rose.
  • Small 4 person team with one website developer.
  • Self funded.
  • One MySQL database.
  • Features include: - Short messaging, invites for events, links, file sharing (you can attach mp3s to messages, for example). - You can limit usage to a specific subset of friends and friends can be grouped in sets. So you can send your mp3 to a specific group of friends. - It does not have an SMS gateway, IM gateway, or an API.

    The Architecture

  • Chose Django because it had an active community, good documentation, good readability, room to grow, and auto-generated administration.
  • Chose S3 because it minimized maintenance and was inexpensive. It has been reliable for them.
  • Chose AIR because it had a lot of good buzz, ease of development, creates a nice UI, and is cross platform.
  • Database has been the main bottleneck. Attack and fix slow queries.
  • Static pages, objects, and lists are cached using memcached.
  • Queuing is used to defer more complex work, like sending notes, until later.
  • Use pagination and a good UI to limit the amount of work performed.
  • Good indexing helped improve the performance for friend searching.
  • In a social site: - Make it easy to create and destroy relationships. - Friend relationships are the most important information to display correctly because people really care about it. - Friends in the online world have real-world effects.
  • Features are "biased" for scalability. - You must get an invite from someone already on Pownce. - Invites are limited by their data center's ability to keep up with the added load. Blindly uploading address books can bring in new users exponentially. Limiting that unnatural growth is a good idea.
  • Their feature set will expand but they aren't ready to commit to an API yet.
  • Revenue model: ads between posts.
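The "cache static pages, objects, and lists using memcached" point above is the classic cache-aside pattern. Here's a minimal sketch, with a plain dict standing in for the memcached client and made-up key and function names:

```python
cache = {}  # stands in for a memcached client

def get_friend_list(user_id, load_from_db):
    """Cache-aside: serve from cache when possible, otherwise run the
    expensive database query once and remember the answer."""
    key = f"friends:{user_id}"
    if key in cache:
        return cache[key]
    friends = load_from_db(user_id)   # the expensive MySQL query
    cache[key] = friends
    return friends

def invalidate(user_id):
    """Call when a friendship changes so readers don't see stale data."""
    cache.pop(f"friends:{user_id}", None)

calls = []
def slow_db_lookup(uid):              # pretend MySQL query
    calls.append(uid)
    return ["alice", "bob"]

print(get_friend_list(42, slow_db_lookup))  # miss: hits the "database"
print(get_friend_list(42, slow_db_lookup))  # hit: served from cache
print(len(calls))                           # prints 1
```

With a real memcached client you'd also set a TTL on each entry as a backstop against missed invalidations.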

    Lessons Learned

  • The four big lessons they've experienced so far are: - Think about technology choices. - Do a lot with a little. - Be kind to your database. - Expect anything.
  • Have a small dedicated team where people handle multiple jobs.
  • Use open source. There's lots of it, it's free, and there's a lot of good help.
  • Use your resources. Learn from website documentation, use IRC, network, and participate in communities and knowledge exchange.
  • Shed work off the database by making sure that complex features are really needed before implementing them.
  • Cultivate a prepared mind. Expect the unexpected and respond quickly to the inevitable problems.
  • Use version control and make backups.
  • Maintain a lot of performance related stats.
  • Don't promise users a deadline because you just might not make it.
  • Commune with your community. I especially like this one and I wish it were done more often. I hope this attitude can survive growth. - Let them know what you are working on, and about new features and bug fixes. - Respond personally to individual bug reporters.
  • Take a look at your framework's automatically generated queries. They might suck.
  • A sexy UI and a good buzz marketing campaign can get you a lot of users.
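The "auto-generated queries might suck" lesson most often shows up as the classic N+1 query pattern: an ORM lazily loads a related row for every item in a list. A self-contained illustration using sqlite3 (the table names are stand-ins, not Pownce's schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE post (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO author VALUES (1, 'leah'), (2, 'daniel');
    INSERT INTO post VALUES (1, 1, 'a'), (2, 2, 'b'), (3, 1, 'c');
""")

def naive(conn):
    """What a lazy ORM often generates: 1 query for the list, then one
    more query per row -- N+1 queries in total."""
    queries = 1
    posts = conn.execute("SELECT id, author_id, title FROM post").fetchall()
    result = []
    for _, author_id, title in posts:
        row = conn.execute("SELECT name FROM author WHERE id = ?",
                           (author_id,)).fetchone()
        queries += 1
        result.append((title, row[0]))
    return result, queries

def joined(conn):
    """The same data in a single query with an explicit join."""
    rows = conn.execute("""SELECT p.title, a.name FROM post p
                           JOIN author a ON a.id = p.author_id""").fetchall()
    return rows, 1

print(naive(conn)[1], joined(conn)[1])  # prints 4 1
```

Same result set, a quarter of the round trips; on a loaded database that difference dominates page latency.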

    Related Articles

  • Scaling Twitter: Making Twitter 10000 Percent Faster.


  • Sunday

    Paper: Architecture of a Highly Scalable NIO-Based Server

    The article describes the basic architecture of a connection-oriented, NIO-based Java server. It looks at a preferred threading model and Java's non-blocking I/O, and discusses the basic components of such a server.
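The paper's subject is Java NIO, but the selector-driven pattern it describes translates directly to other languages. As an illustration only, here is the same reactor idea sketched with Python's selectors module: one thread multiplexes all connections instead of dedicating a thread to each.

```python
import selectors
import socket

sel = selectors.DefaultSelector()

def accept(server):
    """Readiness on the listening socket means a new connection."""
    conn, _ = server.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, echo)

def echo(conn):
    """Readiness on a client socket means data (or EOF) to handle."""
    data = conn.recv(4096)
    if data:
        conn.sendall(data)          # echo back; fine for small payloads
    else:
        sel.unregister(conn)
        conn.close()

def make_server(host="127.0.0.1", port=0):
    server = socket.socket()
    server.bind((host, port))       # port 0: let the OS pick a free port
    server.listen()
    server.setblocking(False)
    sel.register(server, selectors.EVENT_READ, accept)
    return server

def event_loop():
    """One thread services every ready socket in turn: the reactor."""
    while True:
        for key, _ in sel.select():
            key.data(key.fileobj)   # dispatch to the registered handler
```

A production server would also register for write readiness and buffer partial writes, which is exactly the bookkeeping the paper's component breakdown covers.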



    Using ThreadLocal to pass context information around in web applications

    Hi, In Java web servers, each HTTP request is handled by a thread from a thread pool, so a thread is assigned to the Servlet handling the request. It is tempting (and very convenient) to keep context information in a ThreadLocal variable. I recently had a requirement where we needed to attach the logged-in user id and a timestamp to requests sent to web services. Because we already had the code in place, it was extremely difficult to change the method signatures to pass the user id everywhere. The solution I came up with is:

        class ReferenceIdGenerator {
            private static final ThreadLocal<String> threadLocal = new ThreadLocal<String>();

            public static void setReferenceId(String login) {
                threadLocal.set(login + System.currentTimeMillis());
            }

            public static String getReferenceId() {
                return threadLocal.get();
            }

            public static void remove() {
                threadLocal.remove();
            }
        }

        class MyServlet extends HttpServlet {
            protected void service(HttpServletRequest request, HttpServletResponse response) {
                HttpSession session = request.getSession(false);
                String userId = (String) session.getAttribute("userId");
                ReferenceIdGenerator.setReferenceId(userId);
                try {
                    doSomething();
                } finally {
                    ReferenceIdGenerator.remove();
                }
            }
        }

    This method is also discussed at Is this a reasonable approach to pass context information in a web application? Can it ever happen that while an HTTP request is being processed, its thread is suddenly assigned to some other task? I hope this can never happen, because app servers themselves rely heavily on ThreadLocals to keep transaction-related information around. What do you think? Thanks, Unmesh



    Product: Wackamole

    Wackamole is an application that helps make a cluster highly available. It manages a set of virtual IPs that should be available to the outside world at all times, and ensures that exactly one machine within the cluster is listening on each virtual IP address it manages. If it discovers that particular machines within the cluster are not alive, it will almost immediately ensure that other machines acquire their public IPs; at no time will more than one machine listen on any virtual IP. Wackamole also works toward a balanced distribution of the virtual IPs across the machines in the cluster it manages. Wackamole is quite unique in that it operates in a completely peer-to-peer mode within the cluster, where other products providing the same high-availability guarantees use a "VIP" method. It runs as root and uses the membership notifications provided by the Spread toolkit to generate a consistent state that is agreed upon among all of the connected Wackamole instances. Wackamole is released under the CNDS Open Source License. Note: This post has been adapted from the linked-to web site.

    Related Articles

  • White paper on building HA/LB Clusters by Theo Schlossnagle.
