Saturday
Oct 20, 2007

Strategy: Send XHR Request on Lost Focus Instead of For Every Character

Robert Stewart shared this useful Ajax-related scalability strategy: We avoided XMLHttpRequests for individual keystrokes, choosing to go back to the server only when a field lost focus. Google can afford all the servers to handle the load for that, but we didn't want to. Do you have a scalability strategy to share? Then share it!
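Here's a minimal TypeScript sketch of the strategy (the field id and the /validate endpoint are illustrative, not from Robert's system): one request fires when the field loses focus instead of one per keystroke.

```typescript
// Hypothetical example: validate a "city" field against a /validate endpoint.
const field = document.getElementById("city") as HTMLInputElement;

// Listen for blur (lost focus) rather than input/keyup, so the server sees
// one request per completed field instead of one per character typed.
field.addEventListener("blur", () => {
  const xhr = new XMLHttpRequest();
  xhr.open("GET", "/validate?city=" + encodeURIComponent(field.value));
  xhr.onload = () => {
    // React to the server's verdict, e.g. show an inline validation message.
    console.log("validation result:", xhr.responseText);
  };
  xhr.send();
});
```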


Thursday
Oct 18, 2007

Another Approach to Replication

File replication based on erasure codes can reduce total replica size by a factor of two or more.
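To see where the savings come from, here's the storage arithmetic as a quick TypeScript sketch (the (k, m) parameters are illustrative): a (k, m) erasure code stores k data fragments plus m parity fragments, and any k of the k + m fragments can reconstruct the file.

```typescript
// Bytes stored per byte of original data for a (k, m) erasure code.
function storageOverhead(k: number, m: number): number {
  return (k + m) / k;
}

const replication = 3;                    // three full copies, survives 2 losses
const erasure = storageOverhead(10, 4);   // 1.4x, survives any 4 lost fragments
console.log(replication / erasure);       // => ~2.14x less raw storage
```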


Tuesday
Oct 16, 2007

How Scalable are Single Page Ajax Apps?

I've been using GWT for an application and I get the same feeling using it that I first got using HTML. I've always sucked at building UIs. Starting with programming HP terminals, moving on to the Apple Lisa, then X Windows, and Microsoft Windows, I just never had IT, whatever IT is. On the Beauty and the Geek scale my interfaces are definitely horn-rimmed and pocket protector friendly. HTML helped free me from all that to just build stuff that worked, but didn't have to look all that great. Expectations were pretty low and I eagerly fulfilled them. With Ajax, expectations have risen again and I find myself once more easily identifiable as a styleless geek. Using GWT I have some hope I can suck a little less.

In working with GWT I was so focused on its tasty, easily digestible Ajaxy goodness that I didn't stop to think about the topic of this site: scalability. When I finally brought my distracted mind around to consider the scalability of the single-page web site I was building, I became a bit concerned. Many of the strategies that are typically used to achieve scalability don't seem to apply in single-page land. Here are the issues I see. Maybe you can tell me where I am off in my analysis?

  • Plus: a lot of state is maintained in the client. You don't need to keep session state on the server side. This is a win because you aren't slamming the database to reconstitute state; it's cached on the client. After more consideration, it seems this is not always the case. Take your typical shopping cart scenario. You have the old problem of not being able to store prices in the client, lest some evil Mallory attack your system by changing them. And my shopping cart must outlast my browser session so it's still there when I return. I would be heartbroken if my carefully crafted Amazon cart disappeared every time Firefox went away. So server-side state is often still necessary. Yet a lot of state is kept on the client side, and that's still a win.
  • Plus: a lot of business logic is on the client. The client can do a lot of the work, which saves making calls to the server. An interesting comparison of the effects of Ajax on business logic partitioning is Google Calendar: Not As Fat as Other Ajax Apps by Dietrich Kappe.
  • Minus: Can't offload searching. The lack of a proper link structure means your site can't be spidered, which means it can't be searched. One useful scalability strategy is to offload search to something like Google's Custom Search Engine, not for the ad revenue (because there's little), but because it means I don't have to devote any resources to searching. That's a huge win.
  • Minus: SEO problems suck up developer time. The common responses to the previously mentioned search engine optimization (SEO) problems are to make a shadow text site or to insert hidden divs. But that's a lot of pretty useless effort. I would like to spend my time elsewhere.
  • Minus: Can't load balance static content from the client. RPC is used to slurp up data from the server and these requests must go back to the originating domain. This counters one common strategy of using a CDN and/or multiple host names for serving content so you can trick your browser into starting multiple simultaneous connections to different hosts when loading page content. This speeds up your site and spreads the load across different servers. Using RPC to serve content seems to lose this advantage.
  • Minus: Ajax calls add server load. You buy into that with Ajax, but it's still a concern, especially if you have to poll frequently for updates. Dietrich found that Ajax requests may not be that much smaller than before, so you can't depend on smaller workloads to make up for the increased number of calls. See Yahoo Mail, Ajax and Your Server.
  • Minus: Lack of monetary scalability with AdSense. Without a page to parse AdSense can't figure out which ads to display on your site. So one common monetization strategy isn't open to you.
  • Unsure: When using a caching proxy like Squid, a major scalability strategy, is my cacheable content effectively cached when using RPC? I couldn't find a resolution to this issue.

One way around many of these problems is to use a combination of REST and JSONP (see the sketch below). This turns your client into a big mashup, even if all the parts you are mashing are your own. That approach makes a lot of sense to me, but then I don't really see the purpose of having an RPC mechanism. There are surely issues I've missed and misunderstood, but it seems single-page apps present some distinct scalability challenges. Your thoughts would be appreciated.
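For reference, here's a rough TypeScript sketch of the JSONP technique mentioned above (the endpoint and callback names are invented): a dynamically added script tag is exempt from the same-origin restriction that applies to XMLHttpRequest, so data can be pulled from any host, including a CDN.

```typescript
// Load JSON from any domain by letting the server wrap it in a callback call.
function jsonp(url: string, callbackName: string, onData: (data: unknown) => void): void {
  (window as any)[callbackName] = onData; // server responds with callbackName({...})
  const script = document.createElement("script");
  script.src = url + "?callback=" + callbackName;
  document.body.appendChild(script);
}

// Requests can now fan out to other hostnames instead of the originating domain.
jsonp("http://static.example-cdn.com/cart", "onCart", (cart) => {
  console.log("cart data:", cart);
});
```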


    Monday
    Oct 15, 2007

    Olympic Site Architecture

    Hello everybody. I'm planning to build a new website like 2008.sina.com.cn or 2008.sohu.com. The site's content will include picture news, text news, video news, and user blogs. I have a question for everybody, and I hope I can get useful information here. The current status: 100,000,000 visitors per day, with 50,000 at peak times; more than 200 servers; OpenBSD/openSUSE; Apache with FastCGI modules; lighttpd for pictures; MySQL; Varnish; LVS; and Lucene for search. Do you have any good ideas for it? Thanks, everybody!


    Sunday
    Oct 14, 2007

    Newbie in scalability design issues

    I have 3 years' experience building websites using Java. I work on only a single server, and the website is not very scalable. I always wonder how eBay, YouTube, and Monster handle traffic, giving responses within seconds. From Google I found this site, and I hope I can also build a very scalable website. I need guidelines on where to start and what things are needed. I know that scalability comes through the use of distributed applications, but I don't know how to implement it. I see many websites built in languages other than Java, so is Java a good choice for building a highly scalable website? Thanks


    Sunday
    Oct 14, 2007

    Product: The Spread Toolkit

    Complex applications coordinating work across a lot of machines often need a high-performance, fault-tolerant messaging layer. Though a blast to write, it's probably a better use of your time to use an off-the-shelf solution. And that's where Spread comes in. Flickr, for example, uses Spread to create real-time event feeds from their web server logs. What exactly is Spread? From the Spread website:

    Spread is an open source toolkit that provides a high performance messaging service that is resilient to faults across local and wide area networks. Spread functions as a unified message bus for distributed applications, and provides highly tuned application-level multicast, group communication, and point to point support. Spread services range from reliable messaging to fully ordered messages with delivery guarantees. Spread can be used in many distributed applications that require high reliability, high performance, and robust communication among various subsets of members. The toolkit is designed to encapsulate the challenging aspects of asynchronous networks and enable the construction of reliable and scalable distributed applications. Some of the services and benefits provided by Spread:
  • Reliable and scalable messaging and group communication.
  • A very powerful but simple API simplifies the construction of distributed architectures.
  • Easy to use, deploy and maintain.
  • Highly scalable from one local area network to complex wide area networks.
  • Supports thousands of groups with different sets of members.
  • Enables message reliability in the presence of machine failures, process crashes and recoveries, and network partitions and merges.
  • Provides a range of reliability, ordering and stability guarantees for messages.
  • Emphasis on robustness and high performance.
  • Completely distributed algorithms with no central point of failure.
    In Building Scalable Web Sites Cal Henderson describes how Flickr uses Spread to create a log of real-time events, like photos uploaded and discussions started, as they happen. Spread is connected to their web servers. As photos are uploaded, these web server events are messaged in real time to agents consuming the feed. The advantage of this architecture is that it sheds load away from the database. Otherwise the database would have to be continuously polled for new events by each agent.
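As a rough illustration of why this sheds load (this is not Spread's actual API; a real deployment would publish through Spread's client library), here's the push-versus-poll pattern in TypeScript using an in-process emitter as a stand-in for the message bus:

```typescript
import { EventEmitter } from "node:events";

// Stand-in for the group communication bus.
const bus = new EventEmitter();

// An agent subscribes to the event feed once...
bus.on("photo-uploaded", (event) => {
  console.log("agent saw:", event);
});

// ...and web servers push events as they happen, so no agent ever has to
// poll the database asking "anything new since I last checked?"
bus.emit("photo-uploaded", { user: "colin", photoId: 12345, at: Date.now() });
```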

    Related Articles

  • LAMP and the Spread Toolkit
  • The Spread Toolkit: Architecture and Performance


    Thursday
    Oct 11, 2007

    How Flickr Handles Moving You to Another Shard

    Colin Charles has a cool picture showing Flickr's message telling him they'll need about 15 minutes to move his 11,500 images to another shard. One, that's a lot of pictures! Two, it just goes to show you don't have to make this stuff complicated. Sure, it might be nice if their infrastructure could auto-balance shards with no downtime and no loss of performance, but do you really need all that extra complexity? The manual system works, and though Colin would probably have liked his service to stay up, I am sure his day will still be a pleasant one.


    Wednesday
    Oct 10, 2007

    WAN Accelerate Your Way to Lightning Fast Transfers Between Data Centers

    How do you keep in sync a crescendo of data between data centers over a slow WAN? That's the question Alberto posted a few weeks ago. Normally I'm not into all boy bands, but I was frustrated there wasn't a really good answer for his problem. It occurred to me later a WAN accelerator might help turn his slow WAN link into more of a LAN, so the overhead of copying files across the WAN wouldn't be so limiting. Many might not consider a WAN accelerator in this situation, but since my friend Damon Ennis works at the WAN accelerator vendor Silver Peak, I thought I would ask him if their product would help. Not surprisingly his answer is yes! Potentially a lot, depending on the nature of your data. Here's a no BS overview of their product:

  • What is it? - Scalable WAN Accelerator from Silver Peak (http://www.silver-peak.com)
  • What does it do? - You can send 5x-100x more data across your expensive, low-bandwidth WAN link.
  • Why should you care? - Your data centers become more like co-located real-time peers. - You can sync a lot more media and other large files across data centers: a 50x improvement in data replication performance over a WAN. - You may be able to operate on a remote database more like a local database: a 5x-20x improvement in SQL data manipulation and unique query performance. - A 2-hour database backup could take 4 minutes: a 10x-30x improvement in transferring large data sets over SQL. A good disaster planning feature.
  • How does it work? - You buy an accelerator appliance for both sides of your link. All your WAN traffic flows through these boxes. - The appliances then use various techniques to effectively decrease latency and increase bandwidth across the link: -- Traffic reduction. Accelerators look for patterns in data across a link, caching the data on either side of the link, and then not resending data when similar patterns are seen again (see the sketch after this list). This can lead to a 90% reduction in traffic. -- Compression. Data are compressed across the link. Compression ratios range from nothing to 2x-5x, depending on the content type. -- TCP manipulation. The TCP/IP protocol is gamed to yield better performance. For example, a proxy on both sides is used to get a bigger window size. -- Application manipulation. Various application protocols, like CIFS, NFS, and Outlook, can be gamed to improve performance.
  • How much does it cost? - $10k to $130k per box. $10k for the 2Mbps appliance and $130k for the 500Mbps. - They are the scale leaders and are specifically good at "high-end" (> 50Mbps) replication.
  • Who uses it? - Fidelity Bank, Ernst & Young, Panasonic.
  • Is it for real? - Yes. It works and is installed and running in many data centers.
  • How do you get it? - Contact sales at http://www.silver-peak.com/Contact/contact.asp.
  • Where do you go for more information? - White paper Directory - http://www.silver-peak.com/InfoCenter/index.htm#whitepapers - Understanding WAN Acceleration Techniques - http://www.silver-peak.com/assets/download/pdf/technologydescriptions.pdf
  • Is there anything else interesting you should know? - The appliance performs encryption and compression, so you don't need to perform those functions on your own CPUs. - The appliances fail-to-wire, so if a box fails, traffic passes through unaccelerated. If you can't live with that, you need to buy 2 boxes per end of the link (4 boxes total).
  • How much will you benefit? - The more duplication in your data, the better job they can do. There's tons of duplicated data in a database feed, for example, so they can really help supercharge database performance. - Latency/time improvements depend on the link: the higher the link's latency, the less bandwidth you can use. For example, a 100ms link is limited to 5Mbps throughput per flow due to the TCP window size (64KB/100ms ~ 5Mbps). They can take this to several hundred Mbps per flow. - Image files are often pre-compressed. As compression removes duplicate information, they can't be as efficient at de-duplication as in other scenarios, though they can still improve throughput.

An interesting side effect of speeding up the WAN link is that it often reveals bottlenecks in other parts of the system. A slow WAN might be hiding:
  • Underpowered servers. Servers that could process a trickle of data may be overwhelmed by a flood of data.
  • Slow applications. Apps that could pump data at slow WAN speeds may not be able to drive a faster WAN. You may need to take a look at your software architecture or storage network.
  • Underpowered server links. Accelerate a 2Mbps link to a 20Mbps link and your network infrastructure on the data center side may not be able to handle the truth.

Obviously the cost of the solution means it's targeted more at moderate-sized companies or a service provider offering their customers a quality upsell. But if you are stuck wondering how the heck you are going to squeeze more bits between your data centers, it may be just the magic bullet you need.
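To make the traffic reduction idea concrete, here's a toy TypeScript sketch of block-level de-duplication. Fixed-size blocks and SHA-1 keys are my simplification; real accelerators use far more sophisticated pattern matching.

```typescript
import { createHash } from "node:crypto";

type Chunk = { ref: string } | { data: Buffer; ref: string };

// Replace blocks the far side has already seen with short references.
function dedupe(stream: Buffer, blockSize: number, seen: Set<string>): Chunk[] {
  const out: Chunk[] = [];
  for (let off = 0; off < stream.length; off += blockSize) {
    const block = stream.subarray(off, off + blockSize);
    const ref = createHash("sha1").update(block).digest("hex");
    if (seen.has(ref)) {
      out.push({ ref });              // cache hit: send ~20 bytes, not the block
    } else {
      seen.add(ref);                  // both appliances remember the block
      out.push({ data: block, ref }); // cache miss: send the literal data once
    }
  }
  return out;
}
```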


    Tuesday
    Oct 9, 2007

    High Load on Production Webservers After Sourcecode Sync

    Hi everybody :) We have a bunch of web servers (about 14 at this time) running Apache. As our application framework we're using PHP with the APC cache installed to improve performance. For load balancing we're using a big F5 system with dynamic ratio (SNMP driven). To sync new/updated source code we're using Subversion to "automatically" update these servers with our latest software releases. After pushing new source to these production servers, the load of the machines rises through the roof. While updating, the servers are still in "production", serving web pages to users; otherwise the process of updating would take ages. Most of the time we only update in the morning hours while fewer users are online, because of the above issue. My guess is that the load rises that high because APC needs to recompile a bunch of new files each time, and performance is simply "bad" before and while compiling. My goal is to find a better solution. We want to "sync" code no matter how many users are online (in case of emergency) without taking the whole site down. How are you handling these issues? What do you think about the process above? Can you spot the "problem"? Do you have similar issues? Feedback is highly welcome :) Greetings, Stephan Tijink Head of Web Development | fotocommunity GmbH & Co. KG | Rheinwerkallee 2 | 53227 Bonn


    Monday
    Oct 8, 2007

    Paper: Understanding and Building High Availability/Load Balanced Clusters

    A superb explanation by Theo Schlossnagle of how to deploy a high-availability load-balanced system using mod_backhand and Wackamole. The idea is that you don't need to buy expensive redundant hardware load balancers; you can use the hosts you already have to the same effect. The discussion of peer-based HA solutions versus a single front-end HA device is well worth the read. Another interesting perspective in the document is viewing load balancing as a resource allocation problem. There's also a nice discussion of the negative effect of keep-alives on performance.
