Kevin Burton calculates that Blekko, one of the barbarian horde storming Google's search fortress, would need to spend $5 million just to buy enough weapons, er, storage. Kevin estimates storing a deep crawl of the internet would take about 5 petabytes. At a projected $1 million per petabyte that's a paltry $5 million. Less than expected. Imagine in days of old an ambitious noble itching to raise an army to conquer a land and become its new prince. For a fine land, and the search market is one of the richest, that would be a smart investment for a VC to make. In these situations I always ask: What would Machiavelli do?

Machiavelli taught that some lands are hard to conquer and easy to keep, and some are easy to conquer and hard to keep. A land like France was easy to conquer because it was filled with nobles. You can turn nobles on each other because they always hate each other for some reason or another. But it's hard to keep a land of nobles because they all think they are as good as you are and will continually plot your downfall. The Ottoman empire was hard to conquer because it was led by a single ruler. Everyone owes their wealth and prosperity to that ruler, so subjects, assuming the prince has not turned the people against him, will fight to the death for the existing structure because their future depends on it. To conquer takes an all-out war. But once victorious, the Ottoman empire would be easy to rule because there are no loyalties to drive resistance. It was always a marriage of convenience.

Google is the Ottoman empire. Allegiance is given to Google because people are getting paid. Defeating Google will take total war, assuming Google has not turned the people against it, but once defeated, ruling will be easy. How might Google keep strengthening the ties that bind to make it harder for a prospective prince? One way might be to prevent subjects from cavorting with potentially corrupting influences outside the land. What if Google were to give greater rewards to websites that changed their robots.txt to reject all other search engines? That would deny all routes into the principality and strengthen ties considerably. A new prince would find it very difficult to break in. Machiavelli might like that.
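For illustration, such an exclusionary robots.txt might look like this (a hypothetical sketch of the scheme described above, not a recommendation):

    # Hypothetical robots.txt that admits only Google's crawler.
    # An empty Disallow means Googlebot may crawl everything.
    User-agent: Googlebot
    Disallow:

    # Every other crawler is shut out of the principality.
    User-agent: *
    Disallow: /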
Hello, I am new to the back end side of things. Love this web site. I've read all the comments about Amazon hosting. I really like Amazon S3, but I'm concerned it may not be sufficient for my computing needs, and I'm just not too sure about EC2. What about hosting sites like HostMonster? Their prices seem amazing. Are they too good to be true? What are the cons, and what are the things I should be considering? I am concerned about costs, but I want the user experience to be world class. I am creating a media sharing site. Any help will be great. Thanks, Fahad
Hi all, Has anyone got any experience with using Amazon S3 as an uploaded photo store? I'm writing a website that I need to keep as low budget as possible, and I'm investigating solutions for storing photos uploaded by users - not too many, probably in the low thousands. The site is commercial, so I'm steering clear of the Flickrs of the world. S3 seems to offer a solution, but I'd like to hear from those who have used it before. Thanks, Andy
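For anyone weighing the same option, here is a minimal sketch of what storing an uploaded photo in S3 can look like, using Python's boto3 client (the bucket name and key scheme are placeholders):

    import uuid
    import boto3

    s3 = boto3.client("s3")  # credentials are read from the environment

    def store_photo(local_path, user_id):
        """Upload a photo and return the key it was stored under."""
        key = f"photos/{user_id}/{uuid.uuid4().hex}.jpg"
        s3.upload_file(
            local_path,
            "my-photo-bucket",  # placeholder bucket name
            key,
            ExtraArgs={"ContentType": "image/jpeg"},
        )
        return key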
All, I'm new to this and have only a basic understanding of how a CDN works. My questions are:
1. How does a CDN sync data with web servers for video/images? If a user uploads a video to my site, is it stored directly in the CDN, or does it come to my web server first and then get synced to the cache servers?
2. How can I have only the video/images delivered through the CDN while the rest is served by my web server?
3. How does the sync happen, and who pays for the bandwidth it uses?
I'd appreciate it if someone could explain this. Regards, Janakan Rajendran
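On question 2, the common pattern is origin pull: media URLs point at a CDN hostname, the CDN fetches each file from your web server on the first request and caches it, and you pay origin bandwidth only for those cache fills. A hypothetical sketch of the URL routing (hostnames are placeholders):

    CDN_HOST = "https://cdn.example.com"      # placeholder CDN hostname
    ORIGIN_HOST = "https://www.example.com"   # placeholder web server

    MEDIA_EXTENSIONS = (".jpg", ".png", ".gif", ".flv", ".mp4")

    def asset_url(path):
        """Route media through the CDN; serve everything else from the origin."""
        host = CDN_HOST if path.lower().endswith(MEDIA_EXTENSIONS) else ORIGIN_HOST
        return host + path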
From http://directory.fsf.org/project/collectd/ : 'collectd' is a small daemon which collects system information every 10 seconds and writes the results to an RRD file. The statistics gathered include CPU and memory usage, system load, network latency (ping), network interface traffic, system temperatures (using lm-sensors), and disk usage. 'collectd' is not a script; it is written in C for performance and portability. It stays in memory, so there is no need to start up a heavy interpreter every time new values should be logged.

From the collectd website: Collectd gathers information about the system it is running on and stores this information. The information can then be used to find current performance bottlenecks (i.e. performance analysis) and predict future system load (i.e. capacity planning). Or if you just want pretty graphs of your private server and are fed up with some homegrown solution, you're at the right place, too ;). While collectd can do a lot for you and your administrative needs, there are limits to what it does:
* It does not generate graphs. It can write to RRD files, but it cannot generate graphs from these files. There's a tiny sample script included in contrib/, though. Also you can have a look at drraw for a generic solution to generate graphs from RRD files.
* It does not do monitoring. The data is collected and stored, but not interpreted and acted upon. There's a plugin for Nagios, though, so Nagios can use the values collected by collectd.

It's reportedly a reliable product that doesn't put a lot of load on your system. This enables you to collect data at a faster rate so you can detect problems earlier.
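A minimal collectd.conf covering the metrics described above might look like this (the paths and plugin set are illustrative; check your distribution's defaults):

    # Collect every 10 seconds, as described above
    Interval 10

    LoadPlugin cpu
    LoadPlugin memory
    LoadPlugin load
    LoadPlugin ping
    LoadPlugin interface
    LoadPlugin sensors
    LoadPlugin df
    LoadPlugin rrdtool

    <Plugin ping>
      Host "example.org"   # placeholder host to measure latency against
    </Plugin>

    <Plugin rrdtool>
      DataDir "/var/lib/collectd/rrd"   # where the RRD files are written
    </Plugin>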
Compare:
1. MySQL Clustering (NDB cluster storage)
2. MySQL / GFS-GNBD / HA
3. MySQL / DRBD / HA
4. MySQL write master / multiple MySQL read slaves
5. Standalone MySQL servers (functionally separated)
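As a rough illustration of option 4, the application routes writes to the master and spreads reads across the slaves. A hypothetical sketch (the connections passed in would be ordinary DB-API connections, one per server):

    import random

    class ReadWriteSplitter:
        """Route writes to the master and reads to a random slave (option 4)."""

        def __init__(self, master, slaves):
            self.master = master   # connection to the write master
            self.slaves = slaves   # connections to the read slaves

        def execute_write(self, sql, params=()):
            cur = self.master.cursor()
            cur.execute(sql, params)
            self.master.commit()

        def execute_read(self, sql, params=()):
            # Caveat: a just-written row may not have replicated yet,
            # so read-after-write queries should go to the master.
            cur = random.choice(self.slaves).cursor()
            cur.execute(sql, params)
            return cur.fetchall()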
Update 2: Summize Computes Computing Resources for a Startup. Lots of nice graphs showing Amazon is hard to beat for small machines but becomes less cost efficient for well-used larger machines. Long-term storage costs may eat your savings away. And out-of-cloud bandwidth costs are high.

Update: via ProductionScale, a nice Digital Web article on how to set up S3 to store media files, and how Blue Origin was able to handle 3.5 million requests and 758 GB in bandwidth in a single day for very little $$$. Also a RightScale article on network performance within Amazon EC2 and to Amazon S3: 75MB/s between EC2 instances, 10.2MB/s between EC2 and S3 for download, 6.9MB/s for upload.

Now that Amazon's S3 (storage service) is out of beta and EC2 (elastic compute cloud) has added new instance types (the class of machine you can rent) with more CPU and more RAM, I thought it would be interesting to take a look at how their pricing stacks up. The quick conclusion: the more you scale, the more you save. A six node configuration in Amazon is about half the cost of a similar setup using a service provider. But cost may not be everything...

EC2 gets a lot of positive pub, so if you would like a few other perspectives take a look at Jason Hoffman of Joyent's blog post on why EC2 isn't yet a platform for "normal" web applications and Hostingfu's Short Comings of Amazon EC2. Both are well worth reading and tell a much-needed cautionary tale. The upshot is that batch operations clearly work well within EC2 and S3, but the jury is still out on deploying large database-centric websites completely within EC2. The important sticky issues seem to be: static IP addresses, load balancing/failover, lack of data center redundancy, lack of custom OS building, and problematic persistent block storage for databases. The lack of large-RAM and large-CPU machines has been solved with the new instance types.

Assuming you are OK with all these issues, will EC2 cost less? Cost isn't the only issue, of course. If dynamically scaling VMs is a key feature, if SQS (message queue service) looks attractive, or if S3's endless storage is critical, then weigh accordingly. My two use cases are my VPS, for selfish reasons, and a quote from a leading service provider for a 6 node setup for a startup. Six nodes is small, but since the architecture featured horizontal scaling, the cost of expanding was pretty linear and incremental. Here's a quick summary of Amazon's pricing:
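To turn pricing like that into a monthly bill, the arithmetic is roughly instance-hours plus storage plus transfer out. A minimal sketch, with placeholder prices rather than Amazon's actual rates:

    # Rough monthly cost model for an EC2/S3 setup. All prices are
    # illustrative placeholders; plug in Amazon's current rates.
    HOURS_PER_MONTH = 730

    def monthly_cost(instances, price_per_hour,
                     storage_gb, price_per_gb_month,
                     transfer_out_gb, price_per_gb_out):
        compute = instances * HOURS_PER_MONTH * price_per_hour
        storage = storage_gb * price_per_gb_month
        bandwidth = transfer_out_gb * price_per_gb_out
        return compute + storage + bandwidth

    # Example: six small instances, 300 GB of storage, 1 TB transferred out
    print(monthly_cost(6, 0.10, 300, 0.15, 1000, 0.17))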
Single VPS Configuration

I was very curious about the economics of moving this simple site (http://highscalability.com) from a single managed VPS to EC2. Currently my plan provides:
Six Node Configuration for Startup

This configuration is targeted at a real-life Web 2.0ish startup needing about 300GB of fast, highly available database storage. Currently there are no requirements for storing large quantities of BLOBs. There are 6 nodes overall: two database servers in failover configuration, two load balanced web servers, and two application servers. From the service provider:
Effective content caching is one of the key features of scalable web sites. Although there are several out-of-the-box options for caching with modern web technologies, a custom-built cache can still provide the best performance, because it can exploit knowledge of your data and access patterns that a generic cache cannot.
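As one concrete example, here is a minimal in-process cache with time-based expiry (a sketch only, not production-hardened):

    import time

    class TTLCache:
        """A tiny in-memory cache that expires entries after ttl seconds."""

        def __init__(self, ttl=60):
            self.ttl = ttl
            self.store = {}   # key -> (expires_at, value)

        def get(self, key, compute):
            now = time.time()
            entry = self.store.get(key)
            if entry and entry[0] > now:
                return entry[1]          # fresh hit
            value = compute()            # miss or stale: rebuild
            self.store[key] = (now + self.ttl, value)
            return value

    # Usage: cache an expensively rendered page fragment for 60 seconds
    cache = TTLCache(ttl=60)
    html = cache.get("front_page", lambda: "<html>...</html>")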
Hi... I have this idea to start a really great and scalable website, and I am building it! So far I'm doing everything myself: coding, networking, architecture planning, everything. I haven't even gotten into the legal aspects yet... It would be MUCH easier if I had a technical person to handle that end of the operation. I'm a good coder, but like Bill Gates at Harvard for math, I'm not the very best. I'd like to FIND that very best person available to handle the technical aspects. For better or worse, I don't presently know somebody who fits this bill. I've posted a bazillion ads on Craigslist, with no really qualified responses. I've put out feelers among my own network, same result. Not sure what else I can do. Shoestring budget, so it's sweat equity in the beginning. That can actually be a plus, as it forces people to focus. Any ideas about what else I can do to attract the right person? Thanks, Jason
A very entertaining and somewhat educational article: IBM Poopheads say LAMP Users Need to "grow up". The physical three-tier architecture turns out to be the root of all evil, and shared-nothing architectures bring simplicity and light. In the comments, Simon Willison makes an insightful observation on why fine-grained caching works for personalized pages where proxies don't: Great post, but I have to disagree with you on the finely grained caching part. If you look at big LAMP deployments such as Flickr, LiveJournal and Facebook, the common technology component that enables them to scale is memcached - a tool for finely grained caching. That's not to say that they aren't doing shared-nothing, it's just that memcached is critical for helping the database layer scale. LiveJournal serves around 50% of its page views "permission controlled" (friends only), so an HTTP proxy on the front end isn't the right solution - but memcached reduces their database hits by 90%.
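The pattern Willison describes is essentially cache-aside at the object level. A sketch using the pymemcache client (the key scheme and the stubbed database call are assumptions for illustration, not LiveJournal's actual code):

    from pymemcache.client.base import Client

    mc = Client(("localhost", 11211))

    def render_from_database(user_id):
        # Stand-in for the expensive, permission-aware database queries
        return f"<html>friends page for user {user_id}</html>"

    def get_friends_page(user_id):
        """Cache-aside: try memcached first, hit the database only on a miss."""
        key = f"friends_page:{user_id}"
        cached = mc.get(key)
        if cached is not None:
            return cached.decode()
        page = render_from_database(user_id)
        mc.set(key, page, expire=300)   # cache for five minutes
        return page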