Job queue and search engine

Hi, I want to implement a search engine with Lucene. To be scalable, I would like to execute search jobs asynchronously (with a job queuing system). But I don't know if it is a good design... Why? Search results can be large! (e.g. 100+ pages with 25 documents per page.) With an asynchronous system, I need to store the results for each search job. I can set a short expiration time (~5 min) for each search result, but it's still large. What do you think about it? Which design would you use for that? Thanks, Mat
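One way to keep those stored results small is to cache per page rather than per job: the async worker runs the query for a single page and stores just those 25 documents under a short TTL. A minimal Python sketch, assuming a memcached-style cache client and a searcher wrapper around Lucene (both hypothetical interfaces, not real APIs):

    import hashlib
    import json

    RESULT_TTL = 300   # seconds; the ~5 min expiration suggested above
    PAGE_SIZE = 25

    def result_key(query, page):
        # Stable cache key per (query, page) pair.
        return 'search:' + hashlib.md5('%s:%d' % (query, page)).hexdigest()

    def run_search_job(job, searcher, cache):
        # Fetch one page of hits from Lucene, not the whole result set,
        # so the cached payload stays small even for 100+ page results.
        hits = searcher.search(job['query'],
                               offset=job['page'] * PAGE_SIZE,
                               limit=PAGE_SIZE)
        cache.set(result_key(job['query'], job['page']),
                  json.dumps(hits), time=RESULT_TTL)

A repeated search for the same page then becomes a cache hit instead of a new job, which also deduplicates work across users.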



Webinar: Designing and Implementing Scalable Applications with Memcached and MySQL

The following technical webinar could be of interest to the community.

WHO:

  • Farhan "Frank" Mashraqi, Director of Business Operations and Technical Strategy, Fotolog Inc
  • Monty Taylor, Senior Consultant, Sun Microsystems
  • Jimmy Guerrero, Sr Product Marketing Manager, Sun Microsystems - Database Group

WHAT:

  • Designing and Implementing Scalable Applications with Memcached and MySQL web presentation.

WHEN:

  • Thursday, May 29, 2008, 10:00 am PST, 1:00 pm EST, 18:00 GMT
  • The presentation will be approximately 45 minutes long, followed by Q&A.
Check out the details here!



Scalable virus scanning for web-applications

Hi, We're looking for a highly scalable way of scanning documents being uploaded to and downloaded from our web application. I believe services like Gmail and Hotmail are using bespoke solutions from companies like Trend, but are there quality "off the shelf" products out there that can easily be scaled out and have a "loose" API (HTTP based) for application integration? Once again, thanks for any input.
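With an HTTP-style "loose" API, integration usually amounts to POSTing the file bytes to a scanning service and reading back a verdict, which makes it easy to put a load balancer in front of a pool of scanner boxes. A minimal Python sketch; the endpoint URL and the response format are hypothetical:

    import urllib2

    SCAN_URL = 'http://scanner.internal/scan'   # hypothetical endpoint

    def is_clean(data):
        # POST the raw bytes and let the scanning pool return a verdict.
        request = urllib2.Request(SCAN_URL, data,
                                  {'Content-Type': 'application/octet-stream'})
        return urllib2.urlopen(request).read() == 'CLEAN'  # hypothetical format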



How I Learned to Stop Worrying and Love Using a Lot of Disk Space to Scale

Update 3: ReadWriteWeb says Google App Engine Announces New Pricing Plans, APIs, Open Access. Pricing is specified but I'm not sure what to make of it yet. An image manipulation library is added (thus the need to pay for more CPU :-) and memcached support has been added. Memcached will help resolve the can't-write-for-every-read problem that pops up when keeping counters.

Update 2: We threw a GAE load party and a lot of people came. The results are at Load test: Google App Engine = 1, Community = 0. GAE handled a peak of 35 requests/second and a sustained 10 requests/second. Some think performance was good, others not so good. My GMT watch broke and I was late to arrive. Maybe next time. Also added a few new design rules from the post.

Update: Added a few new rules gleaned from the GAE Meetup: Design By Explicit Cost Model and Puts are Precious.

How do you structure your database using a distributed hash table like BigTable? The answer isn't what you might expect. If you were thinking of translating relational models directly to BigTable then think again. The best way to implement joins with BigTable is: don't. You--pause for dramatic effect--duplicate data instead of normalizing it. *shudder*

Flickr anticipated this design in their architecture when they chose to duplicate comments in both the commentor and the commentee user shards rather than create a separate comment relation. I don't know how that decision was made, but it must have gone against every fiber in their relational bones... But Flickr’s reasoning was genius. To scale you need to partition. User data must spread across the shards. So where do comments belong in a scalable architecture? From one world view comments logically belong to a relation binding comments and users together. But if your unit of scalability is the user shard there is no separate relation space. So you go against all your training and decide to duplicate the comments. Nerd heroism at its best. Let inductive rules derived from observation guide you rather than deductions from arbitrarily chosen first principles. Very Enlightenment era thinking. Voltaire would be proud.

In a relational world duplication is removed in order to prevent update anomalies. Error prevention is the driving force in relational modeling. Normalization is a kind of ethical system for data. What happens, for example, if a comment changes? Both copies of the comment must be updated. That leads to errors, because who can remember where all the data is stored? A severe ethical violation may happen. Go directly to relational jail :-)

BigTable data ethics are more Mardi Gras than dinner with the in-laws. Data just wants to have fun. BigTable won’t stop you from hurting yourself. And to get the best results you may have to engage in some conventionally risky behaviors. But if those are the glass bead necklaces you have to give for a peek at scalability, why not take a walk on the wild side?

For a more modern post-relational discussion of data ethics I’m using as my primary source a thread of conversations from JA Robson, Ben the Indefatigable, Michael Brunton-Spall, and especially Brett Morgan. According to our new Voltaire, Locke, Bacon, and Newton, here’s what it takes to act ethically in a BigTable world:
  • Don’t bother with BigTable unless your goal is to create a web site that scales to millions of users. The techniques for building scalable read-mostly web applications are difficult and require a radical mindset change. Standard relational techniques work very well until you scale to huge numbers of users. It is at that point you need to break the rules and do something counter-intuitively different. More of the same will not work. If you don’t plan to get to that point it may not be worth the effort to change. BigTable is targeted at building web applications; its nature makes it a poor match for OLAP, data warehousing, data mining, and other applications performing complex data manipulations.
  • Assume slower random data access rather than fast sequential access. Every get of an entity could be from a different disk block on a different machine in a cluster. Calculating, for example, the average over a column in SQL can be efficient because data is stored together on disk. In BigTable data can be anywhere, so iterating over every value in a column is expensive. Each read is potentially a random block from anywhere, which means the average retrieval time can be relatively high. The implication is that to use BigTable you must adopt some unfamiliar and unintuitive strategies in order to deal with such a different performance profile. Using relational databases we are used to writing applications against fast, highly performant databases. With BigTable you have to become familiar with the rules for developing against a slower but more scalable database. Neither approach is better for all purposes, but BigTable has the edge for high scalability.
  • Group data for concurrent reads. Given the high cost of reading data from BigTable your application will not scale if every page requires a large number of reads. The solution: denormalize. Store data in the same entity based on what data needs to be read concurrently. Relational modeling groups data together based on the “minimize problems” rule. BigTable’s new rule is “maximize concurrent reads” which implies denormalization. Store entities so they can be read in one access rather than performing a join requiring multiple reads. Instead of storing attributes in separate entities in order to remove duplication, duplicate the attributes and store them where they need to be used. Following this rule minimizes the number of reads required to return an entity.
  • Disk and CPU are cheap so stop worrying about them and scale. A criticism of denormalization is storing duplicate data wastes disk space. Google’s architecture trades disk space for better performance. Disk is (relatively) cheap, so don’t fight it. On the CPU front a data center’s worth of CPU is at your service. As long as you structure your application in the way GAE forces you to, your application can scale as large as it needs to simply by running on more machines. All scalability bottlenecks have been removed.
  • Structure data around how it will be used. Trade SQL sets for application-based entities. Queries are slow, so the closer data is to the format in which it will be used, the faster pages will render. It’s like the database model becomes the model previously used at the caching layer. Complete entities tend to be cached, not low-level detail rows. That’s what BigTable models should look like because that’s how concurrent reads are maximized. This isn’t the same as an object-oriented database because the behavior is provided by applications; behavior is not bound to the entity, so multiple applications can read the same entities yet implement very different behaviors.
  • Compute attributes at write time. Since looping over large columns of data is inefficient with BigTable, the idea is to calculate values at write time instead of read time. For example, instead of calculating an average by reading an entire column at read time, track the total number and the total value at write time so the average can be calculated with one read on page display. Programmer effort is made up front at write time to minimize the work needed at read time. Preventing applications from iterating over huge data is key for making applications scale. Given the limitations of GAE transactions and quotas, GAE may not be appropriate for business applications that need exact summary statistics. Warning: if the summary stat is written on every read request then this approach will not scale, as writes don't scale. A sketch of this rule follows the list.
  • Create large entities with optional fields. Normalization creates lots of small entities. Instead, create larger entities with optional parts so you can do one read and then determine what’s present at run time. This shifts work from the database to the CPU while minimizing the number of joins.
  • Define schemas in models. Denormalization requires user-developed code to properly keep data consistent across multiple entities. The database won’t do it for you anymore. Schemas are really defined in code because it’s only code that can track all the relationships and maintain correctness. All database access must go through the models, or the much-feared inconsistency problems will result.
  • Hide updates using Ajax. Updates are slow, so big bang updates of many entities will appear slow to users. Instead, use Ajax to update the database in little increments. As a user enters form data, update the database so the update cost is amortized over many calls rather than one big call at the end. The result is a good user experience and a more scalable app.
  • Puts are Precious. Updating entities in large batches, say even 200 at a time, isn't part of the BigTable model. Entity attributes are automatically and synchronously indexed on writes. Indexing is an expensive operation that accumulates a lot of CPU time, so the number of updates that can be performed in one query is quite limited. The workaround is to perform updates in smaller batches driven by an external CPU. Even when GAE provides the ability to run batches within GAE, the programming model for writes needs to be accounted for in a design.
  • Design By Explicit Cost Model. If you are going to be charged for an operation GAE wants you to explicitly ask for it. This is why some automatic navigation between objects isn't provided because that will force an explicit query to be written. Writing an explicit query is a sort of EULA for being charged. Click OK in the form of a query and you've indicated that you are prepared to pay for a database operation.
  • Place a many-to-many relation in the entity with the fewest number of elements. One way to create a many-to-many relationship is to have a list property that contains keys to the other related entities. A Company entity, for example, could contain a list of keys to Contact entities, or a Contact entity could contain a list of keys to Company entities. Since it's likely a Contact is associated with fewer Companies, the list should be contained in the Contact. The reasoning is that maintaining large lists is relatively inefficient, so you want to minimize the number of items in a list as much as possible. A sketch of this rule follows the list.
  • Avoid unbounded queries. Large queries don't scale. Consider showing only the most recent 10 or so values from an attribute.
  • Avoid contention on datastore entities. If every request to your app reads or writes a particular entity, latency will increase as your traffic goes up because reads and writes on a given entity are sequential. One example construct you should avoid at all costs is the global counter, i.e. an entity that keeps track of a count and is updated or read on every request.
  • Avoid large entity groups. Any two entities that share a common ancestor belong to the same entity group. All writes to an entity group are sequential, so large entity groups can bog down popular apps quickly if there are a lot of writes to that group. Instead, use small, localized groups in your design.
  • Shard counters. Increment one of N counters and sum those N counters on the read side. This avoids the dreaded write bottleneck. See Efficient Global Counters by App Engine Fan for more details; a sketch also follows the list. An excellent example showing some of these principles in action can be found in this GQL thread. Take this nicely normalized schema:
    - Customer:
        - Name
        - Country
    - Product:
        - Code
        - Name
        - Description
    - Purchase:
        - Reference to Product Entity
        - Reference to Customer Entity
        - Date of order
    Anyone from a relational background would look at this schema and give it a big thumbs up. With a little effort we can imagine the original physical purchase order that has now been normalized into three different tables. To recreate the original purchase order a join on purchase, product, and customer is needed. Read speed is not optimized, safety is optimized. Here’s what the same schema looks like optimized for reading:
    - Customer Name 
    - Customer Country 
    - Product Code 
    - Product Name 
    - Purchase Order Number 
    - Date Of Order
    The three original tables have been folded into one entity. Now a purchase order can be read in one get operation. No join necessary. Notice how the entity looks more like an original purchase order. It is also what would probably be cached and is what our model would probably look like. But what if you want to update a product name or a customer name? Those attributes are duplicated in all entities. Here’s where the protection offered by the relational model comes in. Only one entity needs updating in a normalized model. In BigTable you have to remember everywhere a customer name and product name are stored and change every instance to the new values. It’s not a simple, safe, or reliable approach. But it does optimize for read speed and scalability. For an application with a high proportion of updates to reads this approach wouldn’t make sense. But on the web reads usually dominate. How often do you really change a customer name or a product name? Seldom. How often do you read them? All the time. Designing to scale for reads and taking the pain on writes takes some getting used to. It’s a massive change from standard relational tactics. But this is what it takes to scale web applications, even if it feels a little strange at first.
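    Here is what those two schemas might look like using the GAE Python datastore API. This is a sketch, not code from the thread; the model and property names simply follow the example above:

        from google.appengine.ext import db

        # Normalized: rendering a purchase order takes three gets (a "join").
        class Customer(db.Model):
            name    = db.StringProperty()
            country = db.StringProperty()

        class Product(db.Model):
            code        = db.StringProperty()
            name        = db.StringProperty()
            description = db.TextProperty()

        class Purchase(db.Model):
            product       = db.ReferenceProperty(Product)
            customer      = db.ReferenceProperty(Customer)
            date_of_order = db.DateTimeProperty(auto_now_add=True)

        # Denormalized: everything needed to render the order in one get.
        class PurchaseOrder(db.Model):
            customer_name    = db.StringProperty()
            customer_country = db.StringProperty()
            product_code     = db.StringProperty()
            product_name     = db.StringProperty()
            order_number     = db.StringProperty()
            date_of_order    = db.DateTimeProperty(auto_now_add=True)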
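    The compute-attributes-at-write-time rule referenced above might look like the following: keep a running count and total on every write so an average is a single read. A minimal sketch; the names are illustrative:

        from google.appengine.ext import db

        class RatingSummary(db.Model):
            total_count = db.IntegerProperty(default=0)
            total_value = db.IntegerProperty(default=0)

        def add_rating(summary_key, value):
            # Pay the cost at write time: update the running totals
            # transactionally instead of storing each rating for later scans.
            def txn():
                summary = db.get(summary_key)
                summary.total_count += 1
                summary.total_value += value
                summary.put()
            db.run_in_transaction(txn)

        def average(summary_key):
            # One read at page-display time, no iteration over a column.
            summary = db.get(summary_key)
            return float(summary.total_value) / max(summary.total_count, 1)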
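    The many-to-many rule can be sketched as a list property of keys kept on the side of the relation with fewer elements (again, the names are illustrative):

        from google.appengine.ext import db

        class Company(db.Model):
            name = db.StringProperty()

        class Contact(db.Model):
            name = db.StringProperty()
            # The list lives on the Contact because a Contact usually
            # belongs to fewer Companies than a Company has Contacts.
            companies = db.ListProperty(db.Key)

        def companies_for(contact):
            return db.get(contact.companies)

        def contacts_for(company):
            # Equality against a list property matches any element.
            return Contact.gql("WHERE companies = :1", company.key())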
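    And the sharded counter rule: increment one of N shards on the write side and sum them on the read side. A minimal sketch in the spirit of the App Engine Fan recipe linked above:

        import random
        from google.appengine.ext import db

        NUM_SHARDS = 20   # more shards = more write throughput

        class CounterShard(db.Model):
            name  = db.StringProperty(required=True)
            count = db.IntegerProperty(default=0)

        def increment(name):
            # Writing to a random shard means concurrent increments
            # rarely contend on the same entity.
            key_name = '%s-%d' % (name, random.randint(0, NUM_SHARDS - 1))
            def txn():
                shard = CounterShard.get_by_key_name(key_name)
                if shard is None:
                    shard = CounterShard(key_name=key_name, name=name)
                shard.count += 1
                shard.put()
            db.run_in_transaction(txn)

        def get_count(name):
            # Reads sum the shards; cache this in memcached in practice.
            return sum(s.count for s in CounterShard.gql("WHERE name = :1", name))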

    Related Articles

  • ER-Modeling with Google App Engine (updated)
  • Tips on writing scalable apps

    eBay Architecture

    Update 2: eBay's Randy Shoup spills the secrets of how to service hundreds of millions of users and over two billion page views a day in Scalability Best Practices: Lessons from eBay on InfoQ. The practices: Partition by Function, Split Horizontally, Avoid Distributed Transactions, Decouple Functions Asynchronously, Move Processing To Asynchronous Flows, Virtualize At All Levels, Cache Appropriately.

    Update: eBay Serves 5 Billion API Calls Each Month. Aren't we seeing more and more traffic driven by mashups composed on top of open APIs? APIs are no longer a bolt-on; they are your application. Architecturally that argues for implementing your own application around the same APIs developers and users employ.

    Who hasn't wondered how eBay does their business? As one of the largest, most loaded websites in the world, it can't be easy. And the subtitle of the presentation hints at how creating such a monster system requires true engineering: striking a balance between site stability, feature velocity, performance, and cost. You may not be able to emulate how eBay scales their system, but the issues and possible solutions are worth learning from.

    Information Sources

  • The eBay Architecture - Striking a balance between site stability, feature velocity, performance, and cost.
  • Podcast: eBay’s Transactions on a Massive Scale
  • Dan Pritchett on Architecture at eBay interview by InfoQ


    Platform

  • Java
  • Oracle
  • WebSphere, servlets
  • Horizontal Scaling
  • Sharding
  • Mix of Windows and Unix

    What's Inside?

    This information was adapted from Johannes Ernst's Blog

    The Stats

  • On an average day, it runs through 26 billion SQL queries and keeps tabs on 100 million items available for purchase.
  • 212 million registered users, 1 billion photos
  • 1 billion page views a day, 105 million listings, 2 petabytes of data, 3 billion API calls a month
  • Growth of something like a factor of 35 in page views, e-mails sent, and bandwidth from June 1999 to Q3/2006.
  • 99.94% availability, measured as "all parts of site functional to everybody" vs. at least one part of a site not functional to some users somewhere
  • The database is virtualized and spans 600 production instances residing in more than 100 server clusters.
  • 15,000 application servers, all J2EE. About 100 groups of functionality aka "apps". Notion of a "pool": "all the machines that deal with selling"...

    The Architecture

  • Everything is planned with the question "what if load increases by 10x". Scaling only horizontal, not vertical: many parallel boxes.
  • The architecture is strictly divided into layers: data tier, application tier, search, operations.
  • Leverages MSXML framework for presentation layer (even in Java)
  • Oracle databases, WebSphere Java (still 1.3.1)
  • Split databases by primary access path, modulo on a key (a sketch follows this list).
  • Every logical database has at least 3 online copies, distributed over 8 data centers.
  • Some database copies run 15 minutes behind, others 4 hours behind.
  • Databases are segmented by function: user, item, account, feedback, transaction; over 70 in all.
  • No stored procedures are used. There are some very simple triggers.
  • CPU-intensive work is moved out of the database layer to the application layer: referential integrity, joins, and sorting are done in the application layer! Reasoning: app servers are cheap, databases are the bottleneck.
  • No client-side transactions, no distributed transactions.
  • J2EE: use servlets, JDBC, connection pools (with rewrite). Not much else.
  • No state information in application tier. Transient state maintained in cookie or scratch database.
  • App servers do not talk to each other -- strict layering of architecture
  • Search, in 2002: 9 hours to update the index running on largest Sun box available -- not keeping up.
  • Average item on site changes its search data 5 times before it is sold (e.g. price), so real-time search results are extremely important.
  • "Voyager": real-time feeder infrastructure built by eBay.. Uses reliable multicast from primary database to search nodes, in-memory search index, horizontal segmentation, N slices, load-balances over M instances, cache queries.

    Lessons Learned

  • Scale Out, Not Up – Horizontal scaling at every tier. – Functional decomposition.
  • Prefer Asynchronous Integration – Minimize availability coupling. – Improve scaling options.
  • Virtualize Components – Reduce physical dependencies. – Improve deployment flexibility.
  • Design for Failure – Automated failure detection and notification. – “Limp mode” operation of business features.
  • Move work out of the database into the applications because the database is the bottleneck. eBay does this in the extreme. We see it in other architectures using caching and the file system, but eBay even does a lot of traditional database operations in the application (like joins).
  • Use what you like and toss what you don't need. eBay didn't feel compelled to use the full-blown J2EE stack. They liked Java and Servlets so that's all they used. You don't have to buy into any framework completely. Just use what works for you.
  • Don't be afraid to build solutions that meet and evolve with your needs. Every off the shelf solution will fail you at some point. You have to go the rest of the way on your own.
  • Operational controls become a larger and larger part of scalability as you grow. How do you upgrade, configure, and monitor thousands of machines while running a live system?
  • Architectures evolve. You need to be able to change, refine, and develop your new system while keeping your existing site running. That's the primary challenge of any growing website.
  • It's a mistake to worry too much about scalability from the start. Don't suffer from paralysis by analysis and worrying about traffic that may never come.
  • It's also a mistake not to worry about scalability at all. You need to develop an organization capable of dealing with architecture evolution. Understand you are never done. Your system will always evolve and change. Build those expectations and capabilities into your business from the start. Don't let people and organizations be the reason your site fails. Many people will think the system should be perfect from the start. It doesn't work that way. A good system is developed over time in response to real issues and concerns. Expect change and adapt to change.

    Secure Remote Administration for Large-Scale Networks

    This website has been a great resource for helping me to understand the successful (and failed) scalable network designs from organizations that have actually done it, but I haven't seen any explicit explanations about secure remote administration of these systems. I understand that the *nix people love to SSH, and the Windows gang has their RDP, but how does one go about creating a network architecture that both allows one to manage their systems and does its best to avoid hacker interest? As I imagine, no big website will have the SSH/RDP/FTP ports open on the web server, so how is it that they go about remotely administering their geographically diverse groups of servers securely?



    Should Twitter be an All-You-Can-Eat Buffet or a Vending Machine?

    Om proposes one solution to the Twitter Problem is to limit followers to three square meals a day. The reasonable idea being that lower limits should mean fewer scaling problems. And as a kicker, raising those limits is a good way to raise much needed revenue. Scoble thinks users should consume without limit and will drive to another buffet if all-you-can-eat privileges are revoked. The reasonable idea being that if an internet service can't solve internet scale problems then there's not much use for it. Dave says comp power users a top floor suite and shower them with free passes to the buffet. Let the good times roll! The reasonable idea being that power users help create popular restaurants, er, services in the first place, and limiting them starves users, and starved users won't come back.

    So, should web services like Twitter be a buffet, a fixed eight course fine dining experience, a small plate restaurant, a family style joint, or a vending machine? Or something else entirely?

    In a distant barely remembered past I actually worked at an all-you-can-eat buffet. The food was very good and most customers didn't overindulge. If they did the place wouldn't stay in business long. But some customers did. They were called stackers. Stackers were so named because a large stack of plates would pile up on their table throughout the meal. Stackers followed a power law distribution. Few customers at any one time were stackers, but their effect could be devastating. How devastating depended on their favorite foods...

    A stacker who loved potato salad was manageable. We had plenty of potato salad and it was cheap and quick to make. No problem. Stacking itself was not frowned upon and never discouraged. It's an all-you-can-eat buffet after all! But if a stacker's favorite food was roast beef, that was trouble. Not only is roast beef expensive, it comes in a limited supply because it has to be prepared ahead of time. Once you ran out there was no more roast beef for the rest of the night. Good roast beef takes hours to prepare, it must be planned for. Management's job was to carefully balance projected demand against waste. The goal was to prepare enough meat to meet demand, yet not have a lot of left-overs. Stackers blow apart the finely balanced calculation of how much roast beef to make, and the carving station is left trying to push the ham while apologizing for an embarrassing lack of roast beef. An ugly ugly scene. As a carver you are armed with a long scary looking knife and you are shielded by a Medieval chain-mail looking glove, but hungry customers are mean and fast. You never see it coming.

    Unfortunately the distribution of stackers on any given night is unpredictable. You can't always cook a maximum amount of meat or you'll go broke. And if you make too little everyone is unhappy. It needs to be just right. As a person with serious stacker tendencies I try to remember the cost of things and keep a reasonable balance. The only way to make Goldilocks happy and have just the right balance is to place limits. Eventually the restaurant had to limit the number of trips to the roast beef station to three a meal. Enough that you get value for your dollar, but not so much that the restaurant goes under. Everyone happy? Of course not. The world doesn't work like that. It's all-you-can-eat some would say, so I should be able to eat all I can eat! But there are always limits. Would it be fair to back a truck up to the restaurant and start loading up because that's part of your meal? No. Is it fair to stuff your backpack with food on the way out? No. So there are always limits. The question is: what are fair limits?

    It has been said FriendFeed has no problems handling 10,000 friends so neither should Twitter. Now, let's imagine if I spun up 1000 EC2 servers whose only task was to add more friends to feed. Would FriendFeed limit me then? Of course. It's basic web site self-defense, a right guaranteed under the constitution and long recognized by the courts in certain situations. But still, what are fair limits? How much roast beef should you be able to eat?

    Limit setting is a strategy we've talked about many times as a way of protecting sites from complete devastation. My favorite example is Mailinator, whose prime directive is surviving attacks, and they've deployed many clever practices in their own defense. And most every large web site on earth is busy watching your every move so they can bounce you at the first sign of DDOS Armageddon. Limits aren't inherently bad. But limits don't make you scale, they simply stop you from unscaling. An adequate scalable infrastructure must still be put in place.

    In the end I agree with Scoble in that the power of the internet is having interesting conversations with interesting people about interesting topics. For interesting conversations to happen you must be able to freely create relationships. If you or they have to pay for relationships they simply won't form. Would Google's PageRank algorithm work so well if it could only analyze paid relationships? A web formed under a paid relationship model would look totally different and be decidedly less valuable. Similarly, a social network that can't grow naturally through preferential attachment would have much less value. Scaling relationships is a core social network competency. Relationships should be subject to DDOS-type limits, but not limits artificially out of proportion with a user's internet audience. I doubt Twitter would disagree, but they are going through a tough time right now.

    I also agree with Om. The Freemium model is a great idea and linking that to site-protecting prophylactics is even better. But limiting a core competency may not be the right target. Fotolog is an example of a service that puts Freemium ideas to good use. They charge extra for adding more photos a day, more comments a day, custom profile abilities, and social status add-ons. What is the equivalent in Twitter? I don't know, but I would try to treat relationships more like potato salad than roast beef.

    And I also agree with Dave. It's hard to get noticed on the web. Those who help you storm the attention barrier shouldn't be punished. They should be rewarded with a tasty, appropriately sized meal.
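    A DDOS-style limit of the kind described above is often implemented as a token bucket per user: allow bursts, but cap the sustained rate. A minimal Python sketch; the rates are illustrative:

        import time

        class TokenBucket(object):
            """Allow 'capacity' actions in a burst, 'rate' per second sustained."""
            def __init__(self, rate, capacity):
                self.rate = rate
                self.capacity = capacity
                self.tokens = capacity
                self.stamp = time.time()

            def allow(self):
                now = time.time()
                # Refill proportionally to the time elapsed, up to capacity.
                self.tokens = min(self.capacity,
                                  self.tokens + (now - self.stamp) * self.rate)
                self.stamp = now
                if self.tokens >= 1:
                    self.tokens -= 1
                    return True
                return False

    Under a scheme like this a normal user adding friends never notices the limit, while 1000 EC2 servers hammering the API are quickly throttled.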



    Product: Condor - Compute Intensive Workload Management

    From their website: Condor is a specialized workload management system for compute-intensive jobs. Like other full-featured batch systems, Condor provides a job queueing mechanism, scheduling policy, priority scheme, resource monitoring, and resource management. Users submit their serial or parallel jobs to Condor, Condor places them into a queue, chooses when and where to run the jobs based upon a policy, carefully monitors their progress, and ultimately informs the user upon completion. While providing functionality similar to that of a more traditional batch queueing system, Condor's novel architecture allows it to succeed in areas where traditional scheduling systems fail.

    Condor can be used to manage a cluster of dedicated compute nodes (such as a "Beowulf" cluster). In addition, unique mechanisms enable Condor to effectively harness wasted CPU power from otherwise idle desktop workstations. For instance, Condor can be configured to only use desktop machines where the keyboard and mouse are idle. Should Condor detect that a machine is no longer available (such as a key press detected), in many circumstances Condor is able to transparently produce a checkpoint and migrate a job to a different machine which would otherwise be idle.

    Condor does not require a shared file system across machines - if no shared file system is available, Condor can transfer the job's data files on behalf of the user, or Condor may be able to transparently redirect all the job's I/O requests back to the submit machine. As a result, Condor can be used to seamlessly combine all of an organization's computational power into one resource.
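    Submission is driven by a plain-text submit description file handed to the condor_submit command. A minimal sketch of driving that from Python; the keywords are standard Condor, but the job, executable, and file names are made up:

        import subprocess
        import textwrap

        # A minimal "vanilla universe" submit description for one job.
        SUBMIT = textwrap.dedent("""\
            universe   = vanilla
            executable = analyze
            arguments  = input.dat
            output     = job.out
            error      = job.err
            log        = job.log
            queue
            """)

        def submit():
            with open('analyze.sub', 'w') as f:
                f.write(SUBMIT)
            subprocess.check_call(['condor_submit', 'analyze.sub'])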

    Related Articles

  • High Throughput Computing by Miron Livny
  • Condor Presentations
  • (my) Principles of Distributed Computing by Miron Livny

    How do you explain cloud computing to your grandma?

    Update 2: Nice introductory New York Times article Cloud Computing: So You Don’t Have to Stand Still. Good example of how Animoto used RightScale and Amazon to meet a Facebook driven demand of 25,000 test drives an hour. Update: Peter Laird in Understanding the Cloud Computing/SaaS/PaaS markets: a Map of the Players in the Industry paints a very cool visual map of all the cloud service players. It's a larger industry than you might think.

    Once upon a time I worked at an Asynchronous Transfer Mode (ATM) switch startup. Over a delicious Christmas punch my grandma asked me what I did for a living that I could afford such extravagantly inexpensive gifts. Always so subtle. I explained I worked on an ATM switch. Mistake. She sniffed, said that's nice, and asked me why the Automated Teller Machine ate her bank card that morning. No matter how hard I tried I couldn't convince her I didn't work on bank ATMs. To all future job interrogations I waxed off, protesting I do boring software stuff that nobody cares about.

    Not put off in the least, grandma asked me last night to explain this cloud computing thing she keeps hearing about at her church club. Afraid of being another victim of the distortion field surrounding cloud computing, I instead referred her to Kent Langley's excellent overview of the subject in Cloud Computing: Get Your Head in the Clouds. It does a good job demystifying the very confusing concept of cloud computing. It has nice diagrams, definitions, examples and is a great place to start. She agreed that she had learned a lot, but one thing still troubled her: what's the difference between cloud computing and utility computing? They seem the same to her. Always so perceptive. She felt sure if she could drive this point home she would score big points with her church group. Oh the pressure.

    I steadied myself and explained 3Tera’s take is that cloud computing is for service users and utility computing is for service builders. Cloud computing is essentially about the surrender of control. Users of a service don’t care how the site is implemented. They don’t care about how it scales, deals with failure, or any of the other 1000s of little details you have to care about when running a complicated operation. Users just want their service to work when they need it. Utility computing customers on the other hand require fine control over their resources because they are the builders of such services. Cloud computing is built on utility computing. You couldn’t build such a service on Google, whereas you could build it on top of 3Tera or Amazon.

    StorageMojo thinks all this cloud/utility nonsense is just foggy thinking. Real computing will stay local because the cost of network access is too high. Memory and CPU are plentiful and cheap while bandwidth is neither. Distributed computing 1990s style will still rule the day.

    Mike Nygard thinks there’s A Cloud for Everyone in the future. Latency matters and “Keeping your endpoints on your own network at least lets you control your own latency.” Security matters and pushing your precious data into the hands of strangers isn’t secure. Yet we see SalesForce, Google Docs, Basecamp, SugarCRM, and hosted email all flourishing, so is privacy really a concern for newer generations trying to get stuff done?

    HP’s Patrick Eitenbichler thinks “utility computing refers to a business model, while cloud computing describes the underlying IT architecture” with the real decision point being “utility/cloud computing vs. purchasing your own IT assets.” Geva Perry writing for GigaOM essentially agrees with Mr. Eitenbichler, saying: utility computing relates to the business model in which application infrastructure resources — hardware and/or software — are delivered, while cloud computing relates to the way we design, build, deploy and run applications that operate in a virtualized environment, sharing resources and boasting the ability to dynamically grow, shrink and self-heal.

    Krish tries to condense that down to: cloud computing is software as a service (where companies run their own software) and utility computing is hardware as a service (where you can run your own software). Margaret Rouse makes a good case for cloud computing being just a better marketing concept for utility/grid/cluster/distributed/parallel computing. Bits or Pieces smartly ignores saying the word cloud, but my impression is they think providing Software as a Service on a utility computing basis is the game changing innovation. James Urquhart defines the cloud to include: SaaS, PaaS, and HaaS (e.g. Amazon, Mosso, etc.). SaaS is clearly in play today, HaaS is being experimented with, but PaaS may be the most interesting facet of the cloud in the long term. Keystones and Rivets finds that “The Cloud” is grid computing wrapped up in a service offered by data centers.

    Confident I must have answered her original question, I asked “Now, doesn’t that clear things up grandma?” Grandma sniffed, said that's all very nice, but she still wanted to know why the ATM ate her bank card! I groaned and said “Goodnight grandma. I’ll call again next week.” “Excellent,” she Cheshire smiled, “next week my church group is going to tackle whether social networks are really monetizable.”



    UK Based CDN

    Hi, I was wondering if I could borrow the collective minds of you all to draw up a list of the CDNs that you'd use or do use in the UK. If they're outside the UK but have decent support then also include them. The service must be cheap and not require a huge setup fee; it's really only for a small-time business. It shares video and high-res pics, so cheap mass storage is a must, and I wondered whether you guys had any ideas, and also about costs? Mass storage isn't cheap in the UK compared to the States, for example, unless I go colo, but as I say, it's a small setup that happens to require a fair bit of space. Would S3 be a good starting point? What is the service like? I hear mixed reviews about it. Many thanks, Jim
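    If S3 does turn out to be a fit, getting files in is only a few lines with the boto Python library. A minimal sketch; the bucket and key names are made up:

        from boto.s3.connection import S3Connection
        from boto.s3.key import Key

        conn = S3Connection('ACCESS_KEY', 'SECRET_KEY')   # your AWS credentials
        bucket = conn.create_bucket('smallbiz-media')     # hypothetical bucket

        key = Key(bucket)
        key.key = 'videos/clip001.mp4'                    # hypothetical object name
        key.set_contents_from_filename('clip001.mp4')
        key.set_acl('public-read')   # make it publicly downloadable

    Storage and bandwidth are pay-as-you-go with no setup fee, which matches the small-business constraint; the main open question is latency to UK users.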
