Tuesday, May 27, 2008

eBay Architecture

Update 2: eBay's Randy Shoup spills the secrets of how to service hundreds of millions of users and over two billion page views a day in Scalability Best Practices: Lessons from eBay on InfoQ. The practices: Partition by Function, Split Horizontally, Avoid Distributed Transactions, Decouple Functions Asynchronously, Move Processing to Asynchronous Flows, Virtualize at All Levels, Cache Appropriately.

Update: eBay Serves 5 Billion API Calls Each Month. Aren't we seeing more and more traffic driven by mashups composed on top of open APIs? APIs are no longer a bolt-on; they are your application. Architecturally that argues for implementing your own application around the same APIs developers and users employ.

Who hasn't wondered how eBay does their business? As one of the largest, most loaded websites in the world, it can't be easy. And the subtitle of the presentation hints at how creating such a monster system requires true engineering: striking a balance between site stability, feature velocity, performance, and cost. You may not be able to emulate how eBay scales their system, but the issues and possible solutions are worth learning from.

Site: http://ebay.com

Information Sources

  • The eBay Architecture - Striking a balance between site stability, feature velocity, performance, and cost.
  • Podcast: eBay’s Transactions on a Massive Scale
  • Dan Pritchett on Architecture at eBay interview by InfoQ

    Platform

  • Java
  • Oracle
  • WebSphere, servlets
  • Horizontal Scaling
  • Sharding
  • Mix of Windows and Unix

    What's Inside?

    This information was adapted from Johannes Ernst's Blog

    The Stats

  • On an average day, it runs through 26 billion SQL queries and keeps tabs on 100 million items available for purchase.
  • 212 million registered users, 1 billion photos
  • 1 billion page views a day, 105 million listings, 2 petabytes of data, 3 billion API calls a month
  • Growth of something like a factor of 35 in page views, e-mails sent, and bandwidth from June 1999 to Q3 2006.
  • 99.94% availability, measured as "all parts of the site functional to everybody," vs. at least one part of the site not functional to some users somewhere.
  • The database is virtualized and spans 600 production instances residing in more than 100 server clusters.
  • 15,000 application servers, all J2EE. About 100 groups of functionality aka "apps". Notion of a "pool": "all the machines that deal with selling"...

    The Architecture

  • Everything is planned with the question "what if load increases by 10x". Scaling only horizontal, not vertical: many parallel boxes.
  • The architecture is strictly divided into layers: data tier, application tier, search, and operations.
  • Leverages the MSXML framework for the presentation layer (even in Java).
  • Oracle databases, WebSphere Java (still 1.3.1)
  • Split databases by primary access path, modulo on a key (see the routing sketch after this list).
  • Every database has at least 3 online copies, distributed over 8 data centers.
  • Some database copies run 15 minutes behind, others 4 hours behind.
  • Databases are segmented by function: user, item, account, feedback, transaction; over 70 in all.
  • No stored procedures are used. There are some very simple triggers.
  • CPU-intensive work is moved out of the database layer into the application layer: referential integrity, joins, and sorting are done in the application layer! Reasoning: app servers are cheap, databases are the bottleneck.
  • No client-side transactions, no distributed transactions.
  • J2EE: use servlets, JDBC, connection pools (with rewrite). Not much else.
  • No state information in application tier. Transient state maintained in cookie or scratch database.
  • App servers do not talk to each other -- strict layering of architecture
  • Search, in 2002: 9 hours to update the index running on the largest Sun box available -- not keeping up.
  • The average item on the site changes its search data 5 times before it is sold (e.g. price), so real-time search results are extremely important.
  • "Voyager": real-time feeder infrastructure built by eBay. Uses reliable multicast from the primary database to search nodes, an in-memory search index, horizontal segmentation into N slices, load-balanced over M instances, and query caching.

    Lessons Learned

  • Scale Out, Not Up – Horizontal scaling at every tier. – Functional decomposition.
  • Prefer Asynchronous Integration – Minimize availability coupling. – Improve scaling options.
  • Virtualize Components – Reduce physical dependencies. – Improve deployment flexibility.
  • Design for Failure – Automated failure detection and notification. – “Limp mode” operation of business features.
  • Move work out of the database into the applications because the database is the bottleneck. eBay does this in the extreme. We see it in other architectures using caching and the file system, but eBay even does a lot of traditional database operations in the application tier, like joins (see the sketch after this list).
  • Use what you like and toss what you don't need. eBay didn't feel compelled to use the full-blown J2EE stack. They liked Java and servlets so that's all they used. You don't have to buy into any framework completely. Just use what works for you.
  • Don't be afraid to build solutions that meet and evolve with your needs. Every off the shelf solution will fail you at some point. You have to go the rest of the way on your own.
  • Operational controls become a larger and larger part of scalability as you grow. How do you upgrade, configure, and monitor thousands of machines while running a live system?
  • Architectures evolve. You need to be able to change, refine, and develop your new system while keeping your existing site running. That's the primary challenge of any growing website.
  • It's a mistake to worry too much about scalability from the start. Don't suffer from paralysis by analysis and worrying about traffic that may never come.
  • It's also a mistake not to worry about scalability at all. You need to develop an organization capable of dealing with architecture evolution. Understand you are never done. Your system will always evolve and change. Build those expectations and capabilities into your business from the start. Don't let people and organizations be the reason your site fails. Many people will think the system should be perfect from the start. It doesn't work that way. A good system is developed over time in response to real issues and concerns. Expect change and adapt to change.
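
    As an illustration of moving traditional database work into the application tier, here is a hypothetical Java sketch (not eBay's code) of an application-level join: rows come from two functionally segmented databases and are stitched together with an in-memory hash join, exactly the work a single database would otherwise do:

      import java.util.HashMap;
      import java.util.List;
      import java.util.Map;

      // Hypothetical sketch of a join done in the application tier: items and
      // users live in separately segmented databases, so the app joins them.
      record Item(long itemId, long sellerId, String title) {}
      record User(long userId, String name) {}
      record ItemWithSeller(Item item, User seller) {}

      public class AppLevelJoin {
          public static List<ItemWithSeller> join(List<Item> items, List<User> users) {
              // Build a hash table on the join key from the smaller side.
              Map<Long, User> byId = new HashMap<>();
              for (User u : users) byId.put(u.userId(), u);
              // Probe it for every item -- what the database's hash join would do.
              return items.stream()
                      .map(i -> new ItemWithSeller(i, byId.get(i.sellerId())))
                      .toList();
          }
      }

    The trade, as the lesson says, is deliberate: this burns cheap app-server CPU and memory to spare the scarce, hard-to-scale database.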


    Tuesday, May 27, 2008

    Secure Remote Administration for Large-Scale Networks

    This website has been a great resource for helping me understand the successful (and failed) scalable network designs from organizations that have actually done it, but I haven't seen any explicit explanations about secure remote administration of these systems. I understand that the *nix people love SSH, and the Windows gang has their RDP, but how does one go about creating a network architecture that both allows one to manage one's systems and does its best to avoid hacker interest? As I imagine it, no big website will have the SSH/RDP/FTP ports open on the web server, so how do they go about remotely administering their geographically diverse groups of servers securely?
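
    One common answer to the question (a sketch under assumed names, not a description of any particular site): no admin ports are open on the web servers at all. A single hardened bastion (jump) host is the only machine that accepts SSH from the internet, and the internal machines accept SSH only from the bastion's internal address. A minimal OpenSSH client config, with all hostnames hypothetical:

      # ~/.ssh/config -- all hostnames are hypothetical
      # The bastion is the only machine exposed to the internet:
      # key-only logins, locked-down port, source-IP whitelist.
      Host bastion
          HostName bastion.example.com
          User admin
          IdentityFile ~/.ssh/admin_key

      # Internal servers reject direct internet SSH; the client
      # tunnels through the bastion with ProxyCommand + netcat.
      Host web01 db01
          User admin
          ProxyCommand ssh bastion nc %h %p

    Firewall rules then need only one inbound admin port on one machine, and the bastion becomes the single choke point to harden and audit.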


    Tuesday, May 27, 2008

    Should Twitter be an All-You-Can-Eat Buffet or a Vending Machine?

    Om proposes one solution to the Twitter Problem: limit followers to three square meals a day. The reasonable idea being that lower limits should mean fewer scaling problems. And as a kicker, raising those limits is a good way to raise much needed revenue. Scoble thinks users should consume without limit and will drive to another buffet if all-you-can-eat privileges are revoked. The reasonable idea being that if an internet service can't solve internet-scale problems then there's not much use for it. Dave says comp power users a top-floor suite and shower them with free passes to the buffet. Let the good times roll! The reasonable idea being that power users help create popular restaurants, er, services in the first place, and limiting them starves users, and starved users won't come back.

    So, should web services like Twitter be a buffet, a fixed eight-course fine dining experience, a small-plate restaurant, a family-style joint, or a vending machine? Or something else entirely?

    In a distant, barely remembered past I actually worked at an all-you-can-eat buffet. The food was very good and most customers didn't overindulge. If they did the place wouldn't stay in business long. But some customers did. They were called stackers. Stackers were so named because a large stack of plates would pile up on their table throughout the meal. Stackers followed a power law distribution. Few customers at any one time were stackers, but their effect could be devastating. How devastating depended on their favorite foods...

    A stacker who loved potato salad was manageable. We had plenty of potato salad and it was cheap and quick to make. No problem. Stacking itself was not frowned upon and never discouraged. It's an all-you-can-eat buffet after all! But if a stacker's favorite food was roast beef, that was trouble. Not only is roast beef expensive, it comes in a limited supply because it has to be prepared ahead of time. Once you ran out there was no more roast beef for the rest of the night. Good roast beef takes hours to prepare; it must be planned for. Management's job was to carefully balance projected demand against waste. The goal was to prepare enough meat to meet demand, yet not have a lot of leftovers. Stackers blow apart the finely balanced calculation of how much roast beef to make, and the carving station is left trying to push the ham while apologizing for an embarrassing lack of roast beef. An ugly, ugly scene. As a carver you are armed with a long, scary-looking knife and shielded by a Medieval-looking chain-mail glove, but hungry customers are mean and fast. You never see it coming.

    Unfortunately the distribution of stackers on any given night is unpredictable. You can't always cook a maximum amount of meat or you'll go broke. And if you make too little everyone is unhappy. It needs to be just right. As a person with serious stacker tendencies I try to remember the cost of things and keep a reasonable balance. The only way to make Goldilocks happy and have just the right balance is to place limits. Eventually the restaurant had to limit the number of trips to the roast beef station to three a meal. Enough that you get value for your dollar, but not so much that the restaurant goes under. Everyone happy? Of course not. The world doesn't work like that. It's all-you-can-eat, some would say, so I should be able to eat all I can eat! But there are always limits. Would it be fair to back a truck up to the restaurant and start loading up because that's part of your meal? No. Is it fair to stuff your backpack with food on the way out? No. So there are always limits. The question is: what are fair limits?

    It has been said FriendFeed has no problem handling 10,000 friends, so neither should Twitter. Now, let's imagine I spun up 1000 EC2 servers whose only task was to add more friends to the feed. Would FriendFeed limit me then? Of course. It's basic web site self-defense, a right guaranteed under the constitution and long recognized by the courts in certain situations. But still, what are fair limits? How much roast beef should you be able to eat?

    Limit setting is a strategy we've talked about many times as a way of protecting sites from complete devastation. My favorite example is Mailinator, whose prime directive is surviving attacks, and they've deployed many clever practices in their own defense. And most every large web site on earth is busy watching your every move so they can bounce you at the first sign of DDOS Armageddon. Limits aren't inherently bad. But limits don't make you scale; they simply stop you from unscaling (a token-bucket version of the three-trip rule is sketched below). An adequate scalable infrastructure must still be put in place.

    In the end I agree with Scoble in that the power of the internet is having interesting conversations with interesting people about interesting topics. For interesting conversations to happen you must be able to freely create relationships. If you or they have to pay for relationships they simply won't form. Would Google's PageRank algorithm work so well if it could only analyze paid relationships? A web formed under a paid relationship model would look totally different and be decidedly less valuable. Similarly, a social network that can't grow naturally through preferential attachment would have much less value. Scaling relationships is a core social network competency. Relationships should be subject to DDOS-type limits, but not limits artificially out of proportion with a user's internet audience. I doubt Twitter would disagree, but they are going through a tough time right now.

    I also agree with Om. The Freemium model is a great idea, and linking that to site-protecting prophylactics is even better. But limiting a core competency may not be the right target. Fotolog is an example of a service that puts Freemium ideas to good use. They charge extra for adding more photos a day, more comments a day, custom profile abilities, and social status add-ons. What is the equivalent in Twitter? I don't know, but I would try to treat relationships more like potato salad than roast beef.

    And I also agree with Dave. It's hard to get noticed on the web. Those who help you storm the attention barrier shouldn't be punished. They should be rewarded with a tasty, appropriately sized meal.
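
    The three-trips-per-meal rule maps directly onto the classic token bucket. Here is a minimal, hypothetical Java sketch (not anything Twitter or FriendFeed actually runs): each user gets a small budget of actions that refills slowly, so normal users never notice the limit while stackers get throttled:

      // Hypothetical token-bucket limiter: one instance per user/resource.
      public class TokenBucket {
          private final double capacity;     // max trips you can save up
          private final double refillPerSec; // how fast trips come back
          private double tokens;
          private long lastRefillNanos;

          public TokenBucket(double capacity, double refillPerSec) {
              this.capacity = capacity;
              this.refillPerSec = refillPerSec;
              this.tokens = capacity;
              this.lastRefillNanos = System.nanoTime();
          }

          public synchronized boolean tryAcquire() {
              long now = System.nanoTime();
              // Credit tokens for the time elapsed, capped at capacity.
              tokens = Math.min(capacity,
                      tokens + (now - lastRefillNanos) / 1e9 * refillPerSec);
              lastRefillNanos = now;
              if (tokens >= 1.0) {
                  tokens -= 1.0;
                  return true;
              }
              return false; // over the limit: serve ham, not roast beef
          }
      }

    Note what this does and doesn't do: it stops one stacker from eating all the roast beef, but it adds no capacity; the kitchen still has to scale.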


    Sunday, May 25, 2008

    Product: Condor - Compute Intensive Workload Management

    From their website: Condor is a specialized workload management system for compute-intensive jobs. Like other full-featured batch systems, Condor provides a job queueing mechanism, scheduling policy, priority scheme, resource monitoring, and resource management. Users submit their serial or parallel jobs to Condor, Condor places them into a queue, chooses when and where to run the jobs based upon a policy, carefully monitors their progress, and ultimately informs the user upon completion.

    While providing functionality similar to that of a more traditional batch queueing system, Condor's novel architecture allows it to succeed in areas where traditional scheduling systems fail. Condor can be used to manage a cluster of dedicated compute nodes (such as a "Beowulf" cluster). In addition, unique mechanisms enable Condor to effectively harness wasted CPU power from otherwise idle desktop workstations. For instance, Condor can be configured to only use desktop machines where the keyboard and mouse are idle. Should Condor detect that a machine is no longer available (such as a key press detected), in many circumstances Condor is able to transparently produce a checkpoint and migrate a job to a different machine which would otherwise be idle.

    Condor does not require a shared file system across machines - if no shared file system is available, Condor can transfer the job's data files on behalf of the user, or Condor may be able to transparently redirect all the job's I/O requests back to the submit machine. As a result, Condor can be used to seamlessly combine all of an organization's computational power into one resource.
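
    The submission model described above -- write a small description of the job, hand it to the queue -- is easy to picture with a sketch. The following is a minimal, hypothetical submit description file (program and file names invented) of the general form condor_submit consumes:

      # sim.submit -- hypothetical Condor submit description file
      universe   = vanilla            # plain serial job
      executable = simulate           # the program to run
      arguments  = --trials 1000
      output     = sim.$(Process).out # stdout captured per queued copy
      error      = sim.$(Process).err
      log        = sim.log            # Condor's event log for these jobs
      queue 10                        # submit 10 independent copies

    Submitted with "condor_submit sim.submit", the ten copies would be matched to idle machines and, per the description above, checkpointed and migrated if a desktop's owner comes back.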

    Related Articles

  • High Throughput Computing by Miron Livny
  • Condor Presentations
  • (my) Principles of Distributed Computing by Miron Livny


    Sunday, May 25, 2008

    How do you explain cloud computing to your grandma?

    Update 2: Nice introductory New York Times article Cloud Computing: So You Don't Have to Stand Still. Good example of how Animoto used RightScale and Amazon to meet a Facebook-driven demand of 25,000 test drives an hour. Update: Peter Laird in Understanding the Cloud Computing/SaaS/PaaS markets: a Map of the Players in the Industry paints a very cool visual map of all the cloud service players. It's a larger industry than you might think.

    Once upon a time I worked at an Asynchronous Transfer Mode (ATM) switch startup. Over a delicious Christmas punch my grandma asked me what I did for a living that I could afford such extravagantly inexpensive gifts. Always so subtle. I explained I worked on an ATM switch. Mistake. She sniffed, said that's nice, and asked me why the Automated Teller Machine ate her bank card that morning. No matter how hard I tried I couldn't convince her I didn't work on bank ATMs. To all future job interrogations I waxed off, protesting that I do boring software stuff nobody cares about.

    Not put off in the least, grandma asked me last night to explain this cloud computing thing she keeps hearing about at her church club. Afraid of being another victim of the distortion field surrounding cloud computing, I instead referred her to Kent Langley's excellent overview of the subject in Cloud Computing: Get Your Head in the Clouds. It does a good job demystifying the very confusing concept of cloud computing. It has nice diagrams, definitions, and examples, and is a great place to start. She agreed that she had learned a lot, but one thing still troubled her: what's the difference between cloud computing and utility computing? They seemed the same to her. Always so perceptive. She felt sure that if she could drive this point home she would score big points with her church group. Oh, the pressure.

    I steadied myself and explained 3Tera's take: cloud computing is for service users and utility computing is for service builders. Cloud computing is essentially about the surrender of control. Users of a service like Salesforce.com don't care how the site is implemented. They don't care about how it scales, deals with failure, or any of the thousands of little details you have to care about when running a complicated operation. Users just want their service to work when they need it. Utility computing customers, on the other hand, require fine control over their resources because they are the builders of services like Salesforce.com. Cloud computing is built on utility computing. You couldn't build a Salesforce.com on Google, whereas you could build it on top of 3Tera or Amazon.

    StorageMojo thinks all this cloud/utility nonsense is just foggy thinking. Real computing will stay local because the cost of network access is too high. Memory and CPU are plentiful and cheap while bandwidth is neither. Distributed computing 1990s-style will still rule the day. Mike Nygard thinks there's A Cloud for Everyone in the future. Latency matters and "keeping your endpoints on your own network at least lets you control your own latency." Security matters and pushing your precious data into the hands of strangers isn't secure. Yet we see Salesforce, Google Docs, Basecamp, SugarCRM, and hosted email all flourishing, so is privacy really a concern for newer generations trying to get stuff done?

    HP's Patrick Eitenbichler thinks "utility computing refers to a business model, while cloud computing describes the underlying IT architecture," with the real decision point being "utility/cloud computing vs. purchasing your own IT assets." Geva Perry, writing for GigaOM, essentially agrees with Mr. Eitenbichler, saying: utility computing relates to the business model in which application infrastructure resources -- hardware and/or software -- are delivered, while cloud computing relates to the way we design, build, deploy, and run applications that operate in a virtualized environment, sharing resources and boasting the ability to dynamically grow, shrink, and self-heal.

    Krish tries to condense that down to: cloud computing is software as a service (where companies run their own software) and utility computing is hardware as a service (where you can run your own software). Margaret Rouse makes a good case for cloud computing being just a better marketing concept for utility/grid/cluster/distributed/parallel computing. Bits or Pieces smartly avoids saying the word cloud, but my impression is they think providing Software as a Service on a utility computing basis is the game-changing innovation. James Urquhart defines the cloud to include SaaS, PaaS (e.g. force.com), and HaaS (e.g. Amazon, Mosso, etc.). SaaS is clearly in play today, HaaS is being experimented with, but PaaS may be the most interesting facet of the cloud in the long term. Keystones and Rivets finds that "The Cloud" is grid computing wrapped up in a service offered by data centers.

    Confident I must have answered her original question, I asked, "Now, doesn't that clear things up, grandma?" Grandma sniffed, said that's all very nice, but she still wanted to know why the ATM ate her bank card! I groaned and said, "Goodnight grandma. I'll call again next week." "Excellent," she Cheshire-smiled, "next week my church group is going to tackle whether social networks are really monetizable."


    Monday, May 19, 2008

    UK Based CDN

    Hi, I was wondering if I could borrow the collective minds of you all to draw up a list of the CDNs that you'd use, or do use, in the UK. If they're outside the UK but have decent support then include them too. The service must be cheap and not require a huge setup fee; it's really only for a small-time business. It shares video & high-res pics, so mass cheap storage is a must, and I wondered whether you guys had any ideas, and also what the costs are? Mass storage isn't cheap in the UK compared to the States, for example, unless I go colo, but as I say, it's a small setup that happens to require a fair bit of space. Would S3 be a good starting point? What is the service like? I hear mixed reviews about it. Many thanks, Jim


    Monday, May 19, 2008

    Conference: Infoscale 2008 in Italy (June 4-6)

    The Third International Conference on Scalable Information Systems will focus on a wide array of scalability issues and investigate new approaches to tackle problems arising from the ever-growing size and complexity of information of all kinds. Looking at their technical program, a lot of interesting topics will be covered. I see sensor networks, a subject I'm really interested in, has a number of sessions. That's unusual. And it's in Italy!


    Monday, May 19, 2008

    Twitter as a scalability case study

    A lot has been said already about Twitter's scalability issues. Many have held Twitter up as an anti-pattern of how not to deal with scalability and have suggested different solutions for scaling it. As Twitter is famously a Ruby on Rails deployment, this case has also been used as a weapon in the language/platform wars between the RoR and Java camps, and to a lesser degree, with the LAMP (PHP) camp as well.


    Saturday, May 17, 2008

    DB2 Express-C

    Searching around the HS website I noticed that there are no articles regarding DB2, which has an express edition, free of charge and, as far as I know, without restrictions. Being a powerful database system, I thought it could be an alternative to the MySQL and PostgreSQL databases. Here is the IBM statement: "DB2 Express Edition for Community (DB2 Express-C) is a no-charge data server for use in development and deployment. DB2 Express-C supports a full range of APIs, drivers, and interfaces for application development including PHP, C/C++, and .NET. In addition, DB2 Express-C V9 contains advanced XML features. DB2 Express-C provides ISVs an ideal starting database server for Web, enterprise, and eBusiness applications. This IBM Redbook provides fundamentals of DB2 application development with DB2 Express-C. It covers the DB2 Express-C installation and configuration for application development and skills and techniques for building DB2 applications with XML, PHP, C/C++, Java, and .NET. Code examples are used to demonstrate how to develop a DB2 application in a different language. By following the examples provided, you will be able to learn DB2 application development with XML, PHP, C/C++, Java, and .NET in a short time." Download the redbook about DB2 Express-C.
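
    Since Java is among the APIs the Redbook covers, here is a hedged sketch of about the smallest possible DB2 JDBC program. The database name, credentials, and port are assumptions for illustration (50000 is a common default), and IBM's JCC driver jar must be on the classpath:

      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.ResultSet;
      import java.sql.Statement;

      // Hedged sketch: connect to a local DB2 Express-C instance over JDBC.
      // Database name, user, password, and port are illustrative assumptions.
      public class Db2Hello {
          public static void main(String[] args) throws Exception {
              String url = "jdbc:db2://localhost:50000/SAMPLE";
              try (Connection con = DriverManager.getConnection(url, "db2admin", "secret");
                   Statement st = con.createStatement();
                   // SYSIBM.SYSDUMMY1 is DB2's one-row dummy table.
                   ResultSet rs = st.executeQuery("SELECT 1 FROM SYSIBM.SYSDUMMY1")) {
                  while (rs.next()) {
                      System.out.println("Connected, got: " + rs.getInt(1));
                  }
              }
          }
      }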


    Saturday, May 17, 2008

    WebSphere Commerce High Availability and Performance Configurations

    Nobody has come up with an example of a website powered by a WebSphere product (which has a community edition) and backed by a DB2 database. I guess you all know about usopen.org, so here's the story:

    While the re-emergence of 35-year-old Andre Agassi and the continued dominance of wunderkind Maria Sharapova have highlighted the on-court headlines at this year's U.S. Open Tennis Championships in Flushing Meadows, N.Y., IBM is hoping its new Power5 chip-based IT support for USOpen.org can make news among those more interested in .NET than tennis nets. Big Blue has partnered with the U.S. Tennis Association and the U.S. Open -- the most prestigious tennis tournament in the U.S. -- since 1992. Together, they launched USOpen.org in 1995 so racket heads could follow the matches online.

    The iSeries' role this year is in powering a Web-based end-user application called "Point Tracker," a graphics tool using autonomic technology that recreates the trajectory of every shot. On-court cameras capture and record ball position data for every forehand, ace, and volley. Once that data is integrated with the scoring data, the shot data is pushed to the Web site to enable visitors to follow the action online. IBM is running the Web site on an eServer pSeries system, a Power5-based server. Two pSeries systems, models p550 -- released two weeks ago -- and p570, replaced Web and application servers to help automate the infrastructure that supports the Web site. The 2005 U.S. Open Web site traffic will be managed by Big Blue from a "virtualized" server environment at one of the three hosting locations. According to IBM, the pSeries systems allow IBM to consolidate several servers onto two larger boxes. The pSeries p5 systems handle USOpen.org workloads from Web serving to fan polling, feedback, and player search applications, which are managed from each pSeries p5 server as a virtualized environment using Power-based virtualization technologies such as Micro-Partitioning, Virtual I/O Server, and Partition Load Manager, which consolidate AIX 5L and multiple Linux operating environments onto a single system. Approximately 2.8 million fans visited USOpen.org during the two-week tournament in 2004.

    More information on these technologies can be found here. Quote from IBM redbooks: "Building a high-performance and high-availability commerce site is not a trivial task -- from having the right capacity hardware to handle the workload to properly testing code changes before deploying them to the production site. This redbook covers several major areas that need to be considered when using WebSphere Commerce Server and provides solutions for how to address them. Here are some of the topics:

    1. How to build a Commerce site that deals with various kinds of unplanned outages. Topics include utilizing WebSphere Application Server Network Deployment 6.0 and IBM DB2 High Availability Disaster Recovery (HADR) in a Commerce environment (a sketch of the HADR commands follows below).
    2. How to build a Commerce site that deals with planned outages such as software fixes and operational updates. Topics include the use of WebSphere Application Server's rolling update feature and of Commerce's Staging Server and Content Management.
    3. How to proactively monitor the Commerce site to prevent potential problems from happening. Various tools are discussed, including WebSphere Application Server's built-in tools and Tivoli's Composite Application Management.
    4. How to utilize DynaCache to further enhance your Commerce site's performance. Topics include the additional Commerce command caching introduced in the Commerce Fix Pack, and e-spot caching.
    5. The methodology of doing performance and scalability testing on a Commerce site. Tools that may be covered include Tivoli Performance Tester.
    6. Techniques for migrating a high-volume Commerce site to a newer Commerce release." End quote.

    Maybe some of us can find this useful. WebSphere Community Edition is a free Java EE 5 server for building and managing Java applications. Download this Redbook
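
    The HADR setup in item 1 is worth a concrete sketch. Below is roughly what pairing a primary and standby looks like from the DB2 command line; the database and host names are hypothetical, and in practice the standby is first initialized from a restored backup of the primary:

      # All names hypothetical. Run the matching cfg updates on both
      # machines, with local/remote hosts swapped on the standby.
      db2 update db cfg for MYSTORE using HADR_LOCAL_HOST  db-primary.example.com
      db2 update db cfg for MYSTORE using HADR_REMOTE_HOST db-standby.example.com
      db2 update db cfg for MYSTORE using HADR_SYNCMODE    NEARSYNC

      # Start the standby first, then the primary.
      db2 start hadr on db MYSTORE as standby     # on the standby machine
      db2 start hadr on db MYSTORE as primary     # on the primary machine

      # During an outage, promote the standby.
      db2 takeover hadr on db MYSTORE by force

    The log stream then ships continuously from primary to standby, which is what lets the Commerce site survive the "unplanned outage" cases the redbook describes.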
