Tuesday
Apr292008

High performance file server

We have a bunch of applications running on Debian servers that process a huge amount of data stored on a shared NFS drive. Three applications work as a pipeline: the first processes the data and stores its output in a folder on the NFS drive, the second processes the output of the first, and so on. The data load into the pipeline is about 1 GB per minute. I think the NFS drive is the bottleneck here. Would buying a specialized file server improve the read/write performance?


Wednesday
Apr232008

Behind The Scenes of Google Scalability

The recent Data-Intensive Computing Symposium brought together experts in system design, programming, parallel algorithms, data management, scientific applications, and information-based applications to better understand existing capabilities in the development and application of large-scale computing systems, and to explore future opportunities. Google Fellow Jeff Dean gave a very interesting presentation on Handling Large Datasets at Google: Current Systems and Future Directions. He discussed:

  • Hardware infrastructure
  • Distributed systems infrastructure: the scheduling system, GFS, BigTable, MapReduce
  • Challenges and future directions: infrastructure that spans all datacenters, more automation

It is really a "How does Google work" presentation in ~60 slides. Check out the slides and the video!


Tuesday
Apr222008

Simple NFS failover solution with symbolic link?

I've been trying to find a high availability file storage solution without success. I tried GlusterFS, which looks very promising, but I experienced problems with stability and don't want something I can't easily control and rely on. Other solutions are too complicated or have a SPOF. So I'm thinking of the following setup:

Two NFS servers, a primary and a warm backup. The primary server will be rsynced to the warm backup every minute or two. I can do it so frequently because a PHP script will know from a database which directories have changed recently and will only rsync those. Both servers will be NFS mounted on a cluster of web servers as /mnt/nfs-primary (symlinked as /home/websites) and /mnt/nfs-backup.

I'll then use Ucarp (http://www.ucarp.org/project/ucarp) to monitor both NFS servers' availability every couple of seconds, and when one goes down, the Ucarp up script will change the symbolic link on all web servers for the /home/websites dir from /mnt/nfs-primary to /mnt/nfs-backup. The rsync script will then switch direction: the backup NFS will become primary and back up to the previous primary when it comes back online.

Can it really be this simple, or am I missing something? I'm just setting up a trial system now but would be interested in feedback. :) Also, I can't find out whether it's best to use NFS v3 or v4 these days.
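The symlink flip the Ucarp up script would perform can be sketched in a few lines. This is a minimal sketch assuming the paths described above; `switch_to` is a hypothetical helper, not part of Ucarp:

```python
import os

# Hypothetical paths from the setup described above; adjust to taste.
PRIMARY = "/mnt/nfs-primary"
BACKUP = "/mnt/nfs-backup"
LINK = "/home/websites"

def switch_to(target, link=LINK):
    """Atomically repoint the symlink at `link` to `target`.

    Creating the new link under a temporary name and then rename()ing it
    over the old one avoids a window in which the link does not exist.
    """
    tmp = link + ".tmp"
    if os.path.lexists(tmp):
        os.remove(tmp)
    os.symlink(target, tmp)
    os.rename(tmp, link)  # rename over an existing symlink is atomic on POSIX
```

The atomic rename matters here: web servers resolving /home/websites mid-failover always see either the old target or the new one, never a missing link.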


Monday
Apr212008

Using Google AppEngine for a Little Micro-Scalability

Over the years I've accumulated quite a rag tag collection of personal systems scattered wide across a galaxy of different servers. For the past month I've been on a quest to rationalize this conglomeration by moving everything to a managed service of one kind or another. The goal: lift a load of worry from my mind. I like to do my own stuff myself so I learn something and have control. Control always comes with headaches and it was time for a little aspirin.

As part of the process GAE came in handy as a host for a few Twitter related scripts I couldn't manage to run anywhere else. I recoded my simple little scripts into Python/GAE and learned a lot in the process. In the move I exported HighScalability from a VPS and imported it into a shared hosting service. I could never quite configure Apache and MySQL well enough that they wouldn't spike memory periodically and crash the VPS. And since a memory crash did not automatically restart the VPS, that was unacceptable. I also wrote a script to convert a few thousand pages of JSPWiki to MediaWiki format, moved off my own mail server, moved all my code to a hosted SVN server, and moved a few other blogs and static sites along the way. No, it wasn't very fun.

One service I had a problem moving was http://innertwitter.com because of two scripts it used. In one script (Perl) I log in to Twitter, download the most recent tweets for an account, and display them on a web page. In another script (Java) I periodically post messages to various Twitter accounts. Without my own server I had nowhere to run these programs. I could keep a VPS, but that would cost a lot and I would still have to worry about failure. I could use AWS, but the cost of a fault tolerant system would be too high for my meager needs. I could rewrite the functionality in PHP and use a shared hosting account, but I didn't want to go down the PHP road. What to do?
Then Google AppEngine was announced and I saw an opportunity to kill two stones with one bird: learn something while doing something useful. With no Python skills I just couldn't get started, so I ordered Learning Python by Mark Lutz. It arrived a few days later and I read it over an afternoon. I knew just enough Python to get started, and that was all I needed. Excellent book, BTW.

My first impression of Python is that it is a huge language. It gives you a full plate of functional and object oriented dishes and it will clearly take a while to digest. I'm pretty language agnostic so I'm not much of a fan boy of any language. A lot of people are quite passionate about Python. I don't exactly understand why, but it looks like it does the job and that's all I really care about.

Basic Python skills in hand, I ran through the GAE tutorial. Shockingly, it all just worked. They kept it very basic, which is probably why it worked so well. With little ceremony I was able to create a site, access the database, register the application, upload the application, and then access it over the web. To get to the same point using AWS was *a lot* harder.

Time to take off the training wheels. In the same way understanding a foreign language is a lot easier than speaking it, I found writing Python from scratch a lot harder than simply reading/editing it. I'm sure I'm committing all the classic noob mistakes. The indenting thing is a bit of a pain at first, but I like the resulting clean looking code. Not using semi-colons at the end of a line takes getting used to. I found the error messages none too helpful: everything was a syntax error. Sorry folks, statically typed languages are still far superior in this regard. But the warm fuzzy feeling you get from changing code and immediately running it never gets old.

My first task was to get the recent entries from my Twitter account. My original Perl code looks like:

use strict;
use warnings;
use CGI;
use LWP;
eval 
{ 
   my $query = new CGI;
   print $query->header;
   my $callback= $query->param("callback");
   my $url= "http://twitter.com/statuses/replies.json";
   my $ua= new LWP::UserAgent;
   $ua->agent("InnerTwitter/0.1" . $ua->agent);
   my $header= new HTTP::Headers;
   $header->authorization_basic("user", "password");
   my $req= new HTTP::Request("GET", $url, $header); 
   my $res= $ua->request($req);
   if ($res->is_success) 
   { print "$callback(" . $res->content . ")"; } 
   else 
   {
      my $msg= $res->error_as_HTML();
      print $msg;
   }
};
My strategy was to try to do a pretty straightforward replacement of Perl with Python. From my reading, URL fetch was what I needed to make the JSON call. Well, the documentation for URL fetch is nearly useless. There's not a practical "help get stuff done" line in it. How do I perform authorization, for example? Eventually I hit on:
class InnerTwitter(webapp.RequestHandler):
   def get(self):
      self.response.headers['Content-Type'] = 'text/plain'
      callback =  self.request.get("callback")
      base64string = base64.encodestring('%s:%s' % ("user", "password"))[:-1]
      headers = {'Authorization': "Basic %s" % base64string} 
      url = "http://twitter.com/statuses/replies.json";
      result = urlfetch.fetch(url, method=urlfetch.GET, headers=headers)
      self.response.out.write(callback + "(" + result.content + ")")

def main():
  application = webapp.WSGIApplication(
                                       [('/innertwitter', InnerTwitter)],
                                       debug=True)
  run_wsgi_app(application)  # from google.appengine.ext.webapp.util

if __name__ == "__main__":
  main()
For me the Perl code was easier simply because there is example code everywhere. Perhaps Python programmers already know all this stuff so it's easier for them. I eventually figured out that all the WSGI stuff is standard and there was doc available. Once I figured out what I needed to do, the code is simple and straightforward. The one thing I really dislike is passing self around. It just indicates bolt-on to me, but other than that I like it.

I also like the simple mapping of URL to handler. As an early CGI user I could never quite understand why moderns need a framework to "route" to URL handlers. This approach hits just the right level of abstraction for me.

My next task was to write a string to a twitter account. Here's my original Java code:
private static void sendTwitter(String username)
    {
        username+= "@domain.com";
        String password = "password";
        
        try
        {
            String chime= getChimeForUser(username);
            String msg= "status=" + URLEncoder.encode(chime);
            msg+= "&source=innertwitter";
            URL url = new URL("http://twitter.com/statuses/update.xml");
            URLConnection conn = url.openConnection();
            conn.setDoOutput(true); // set POST
            conn.setUseCaches (false);
            conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
            conn.setRequestProperty("CONTENT_LENGTH", "" + msg.length()); 
            String credentials = new sun.misc.BASE64Encoder().encode((username
                    + ":" + password).getBytes());
            conn.setRequestProperty("Authorization", "Basic " + credentials);
            OutputStreamWriter wr = new OutputStreamWriter(conn.getOutputStream());
            wr.write(msg);
            wr.flush();
            wr.close();
            BufferedReader rd = new BufferedReader(new InputStreamReader(conn.getInputStream()));
            String line = "";
            while ((line = rd.readLine()) != null)
            {
                System.out.println(line);
            }

        } catch (Exception e)
        {
            e.printStackTrace();
        }
    }

    private static String getChimeForUser(String username)
    {
        Date date = new Date();
        Format formatter = new SimpleDateFormat("........hh:mm EEE, MMM d");
        String chime= "........*chime*                       " + formatter.format(date);
        return chime;
    }
Here’s my Python translation:
class SendChime(webapp.RequestHandler):
   def get(self):
      self.response.headers['Content-Type'] = 'text/plain'
      username =  self.request.get("username")

      login =  username
      password = "password"
      chime = self.get_chime()
      payload= {'status' : chime,  'source' : "innertwitter"}
      payload= urllib.urlencode(payload)

      base64string = base64.encodestring('%s:%s' % (login, password))[:-1]
      headers = {'Authorization': "Basic %s" % base64string} 

      url = "http://twitter.com/statuses/update.xml"
      result = urlfetch.fetch(url, payload=payload, method=urlfetch.POST, headers=headers)

      self.response.out.write(result.content)

   def get_chime(self):
      now = datetime.datetime.now()
      chime = "........*chime*.............." + now.ctime()
      return chime
 
def main():
  application = webapp.WSGIApplication(
                                       [('/innertwitter', InnerTwitter),
                                       ('/sendchime', SendChime)],
                                       debug=True)
  run_wsgi_app(application)  # from google.appengine.ext.webapp.util

if __name__ == "__main__":
  main()
I had to drive the timed execution of this URL from an external cron service, which points out that GAE is still a very limited environment. Start to finish, the coding took me 4 hours and the scripts are now running in production. Certainly this is not a complex application in any sense, but I was happy it never degenerated into the all too familiar debug fest where you continually fight infrastructure problems and don't get anything done. I developed code locally and it worked. I pushed code into the cloud and it worked. Nice.

Most of my time was spent trying to wrap my head around how you code standard HTTP tasks in Python/GAE. The development process went smoothly. The local web server and the deployment environment seemed to be in harmony. And deploying the local site into Google's cloud went without a hitch. The debugging environment is primitive, but I imagine that will improve over time.

This wasn't merely a programming exercise for an overly long and boring post. I got some real value out of this:
  • Hosting for my programs. I didn’t have any great alternatives to solve my hosting problem and GAE fit a nice niche for me.
  • Free. I wouldn’t really mind if it was low cost, but since most of my stuff never makes money I need to be frugal.
  • Scalable. I don’t have to worry about overloading the service.
  • Reliable. I don’t have to worry about the service going down and people not seeing their tweets or getting their chimes.
  • Simple. The process was very simple and developer friendly. AWS will be the way to go for “real” apps, but for simpler apps a lighter weight approach is refreshing. One can see the GUI layer in GAE and the service layer in AWS. GAE offers a kind of micro-scalability. All the little things you didn’t have a place to put before can now find a home. And as they grow up they might just find they like staying around for a little of momma’s home cooking.

    Related Articles

  • How SimpleDB Differs from a RDBMS
  • Google AppEngine – A Second Look
  • Is App Tone Enough? at Appistry.


    Monday
    Apr212008

    The Search for the Source of Data - How SimpleDB Differs from a RDBMS

    Update 2: Yurii responds with Top 10 Reasons to Avoid Document Databases FUD. Update: Top 10 Reasons to Avoid the SimpleDB Hype by Ryan Park provides a well written counter take. Am I really that fawning? If so, doesn't that make me a dear?

    All your life you've used a relational database. At the tender age of five you banged out your first SQL query to track your allowance. Your RDBMS allegiance was just assumed, like your politics or religion would have been assumed 100 years ago. They now say--you know them--that relations won't scale and we have to do things differently. New databases like SimpleDB and BigTable are what's different. As a long time RDBMS user what can you expect of SimpleDB? That's what Alex Tolley of MyMeemz.com set out to discover. Like many brave explorers before him, Alex gave a report of his adventures to the Royal Society of the AWS Meetup. Alex told a wild, almost unbelievable tale of cultures and practices so different from our own you could almost not believe him. But Alex brought back proof.

    Using a relational database is a no-brainer when you have a big organization behind you. Someone else worries about the scaling, the indexing, the backups, and so on. When you are out on your own there's no one to hear you scream when your site goes down. In these circumstances you just want a database that works and that you never have to worry about again. That's what attracted Alex to SimpleDB. It's trivial to set up and use, no schema is required, you can insert data on the fly with no upfront preparation, and it will scale with no work on your part. You become free from DIAS (Database Induced Anxiety Syndrome). You don't have to think about or babysit your database anymore. It will just work. And from a business perspective your database becomes a variable cost rather than a high fixed cost, which is excellent for the angel food funding. Those are very nice features in a database.
    But for those with a relational database background there are some major differences that take getting used to.

  • No schema. You don't have to define a schema before you use the database. SimpleDB is an attribute-value store and you can use any attributes you like, any time you like. It doesn't care. Very different from the Victorian world of the RDBMS.
  • No joins. In relational theory the goal is to minimize update and deletion anomalies by normalizing your data into separate tables related by keys. You then join those tables together when you need the data back. In SimpleDB there are no joins. For many-to-1 relationships this works out great: attributes can have multiple values, so there's no need to do a join to recover all the values. They are stored together. For many-to-many relationships life is not so simple. You must code them by hand in your program. This is a common theme in SimpleDB. What the RDBMS does for you automatically must generally be coded by hand with SimpleDB. The wages of scale are more work for the programmer. What a surprise.
  • Two step query process. In a RDBMS you can select which columns are returned in a query. Not so in SimpleDB. A query returns only record IDs, not the values of the records. You need to make another trip to the database to get the record contents. So to minimize your latency you would need to spawn off multiple threads. See, more work for the programmer.
  • No sorting. Records are not returned in a sorted order, and values for multi-value attribute fields are not returned in sorted order. That means if you want sorted results you must do the sorting yourself, and you must get all the results back before you can do it. More work for the programmer.
  • Broken cursor. SimpleDB only returns 250 results at a time. When there are more results you cursor through the result set using a token mechanism, and the kicker is that you must iterate through the result set sequentially.
    So iterating through a large result set will take a while, and you can't use your secret EC2 weapon of massive cheap CPU to parallelize the process. More work for the programmer, because you have to move logic to the write part of the process instead of the read part: you'll never be able to read fast enough to perform your calculations in a low latency environment.

    The promise of scaling is fulfilled. Alex tested retrieving 10 record ids from 3 different database sizes. Using a 1K record database it took an average of 141 msecs to retrieve the 10 record ids. For a 100K record database it took 266 msecs on average. For a 1000K record database it took an average of 433 msecs. It's not fast, but it is relatively consistent. That seems to be a theme with these databases; BigTable isn't exactly a speed demon either. One could conclude that for certain needs at least, SimpleDB scales sufficiently well that you can feel comfortable your database won't bottleneck your system or cause it to crash under load.

    If you have a complex OLAP style database SimpleDB is not for you. But if you have a simple structure, you want ease of use, and you want it to scale without your ever lifting a finger again, then SimpleDB makes sense. The cost is that everything you currently know about using databases is useless, and all the cool things we take for granted that a database does, SimpleDB does not do. SimpleDB shifts work out of the database and onto programmers, which is why the SimpleDB programming model sucks: it requires a lot more programming to do simple things. I'll argue however that this is the kind of suckiness programmers like. Programmers like problems they can solve with more programming. We don't even care how twisted and inelegant the code is, because we can make it work. And as long as we can make it work we are happy. What programmers can't do is make the database scalable through more programming.
Making a database scalable is not a solvable problem through more programming. So for programmers the right trade off was made. A scalable database you don't have to worry about for more programming work you already know how to do. How does that sound?
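The extra client-side work Alex describes (sequential token paging, a second fetch per record, and in-app sorting) can be sketched as follows. `SimpleDBStub` is a stand-in with illustrative method names, not Amazon's actual API; the 250-item page size is the limit mentioned above.

```python
class SimpleDBStub:
    """Stand-in for a SimpleDB client; method names are illustrative."""
    PAGE_SIZE = 250  # SimpleDB returned at most 250 record ids per query page

    def __init__(self, items):
        self._items = items  # {item_id: {attr: value}}

    def query(self, token=None):
        """Return (ids, next_token): one page of matching item ids only."""
        ids = sorted(self._items)         # deterministic paging order
        start = token or 0
        page = ids[start:start + self.PAGE_SIZE]
        nxt = start + self.PAGE_SIZE
        return page, (nxt if nxt < len(ids) else None)

    def get_attributes(self, item_id):
        return self._items[item_id]

def query_all_sorted(db, sort_attr):
    # Step 1: cursor sequentially through every page of ids (no parallelism).
    ids, token = db.query()
    while token is not None:
        more, token = db.query(token)
        ids.extend(more)
    # Step 2: a second round trip per item to fetch the actual values.
    records = [db.get_attributes(i) for i in ids]
    # Step 3: sorting happens in the application, not the database.
    return sorted(records, key=lambda r: r[sort_attr])
```

In production you would spawn threads for step 2 to hide the per-item latency, which is exactly the "more work for the programmer" the article is pointing at.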

    Related Articles

  • The new attack on the RDBMS by techno.blog("Dion")
  • The End of an Architectural Era (It’s Time for a Complete Rewrite) - a really fascinating paper bolstering many of the anti-RDBMS threads that have popped up on the intertube.


    Monday
    Apr212008

    Google App Engine - what about existing applications?

    Recently, Google announced Google App Engine, another announcement in the rapidly growing world of cloud computing. This brings up some very serious questions:

    1. If we want to take advantage of one of the clouds, are we doomed to be locked in for life?
    2. Must we re-write our existing applications to use the cloud?
    3. Do we need to learn a brand new technology or language for the cloud?

    This post presents a pattern that will enable us to abstract our application code from the underlying cloud provider infrastructure. This will enable us to easily migrate our EXISTING applications to a cloud based environment, thus avoiding the need for a complete re-write.


    Saturday
    Apr192008

    How to build a real-time analytics system?

    Hello everybody! I am a developer of a website with a lot of traffic. Right now we are managing the whole website using perl + postgresql + fastcgi + memcached + mogileFS + lighttpd + round-robin DNS distributed over 5 servers, and I must say it works like a charm: load is stable, everything works very fast, and we are recording about 8 million pageviews per day.

    The only problem is with the postgres database, since we have it installed on only one server, and if this server goes down the whole "cluster" goes down. That's why we have master-to-slave replication, so we still have a backup database; except that when the master goes down all inserts/updates are disabled, so the whole website is read only. But this is not a problem, since this configuration is working for us and we don't have any problems with it.

    Right now we are planning to build our own analytics service, customized for our needs. We tried various software packages but were not satisfied with any of them. We want to build something like Google Analytics that would allow us to create reports in real time, with a "drill-down" possibility to make interactive reports. We don't need real-time data to be included in reports - we just need the possibility to make different reports very fast. Data can be pre-processed. For example, right now we are logging requests into plain text log files in the following format:

    date | hour | user_id | site_id | action_id | some_other_attributes..

    There are about 8 - 9 million requests per day and we want to make real-time reports, for example:

  • number of hits per day (the simplest)
  • number of hits by unique users per day
  • number of hits by unique users on a specific site per day
  • number of distinct actions by users on a specific site during a defined period (e.g. one month, a period of X months...), etc.

    You can display any type of report by combining different columns, as well as counting all or only distinct occurrences of certain attributes.

    I know how to parse these log files and calculate any type of report I want, but it takes time. There are about 9 million rows in each daily log file, and if I want to calculate monthly reports I need to parse all daily log files for one month - meaning I have to parse almost 300 million lines, count what I want, and then display the summary. This can take hours, and sometimes it has to be done in more than one step (e.g. calculating the number of users that have been on site_id=1 but not on site_id=2 - in this case I have to export users on site 1, export users on site 2, then compare the results and count the differences).

    If you take a look at Google Analytics, it calculates any type of similar report in real time. How do they do it? How can someone design a database that could do something like that? If I put 300 million rows (requests per month) into a Postgres/MySQL table, selects are even slower than parsing plain text log files using Perl... I am aware that they have a huge number of servers, but I am also aware that they have an even bigger number of hits per day. I have the possibility to store and process this kind of analytics on multiple servers at the same time, but I don't have enough knowledge of how to construct software and a database that would be able to do a job like this. Does somebody have any suggestions? A simple example would be great!

    We already managed to make some sort of a database for the site_id+action_id drilldown, but the problem is with "unique users", which is THE information that we need all the time. To calculate unique users during a certain period you have to count all the distinct user_ids during that time period. E.g.: select count(distinct user_id) from ... where date>='2008-04-10' and date <='2008-04-18' - with 9 million rows per day this statement takes about two minutes to complete, and we are not satisfied with it. Thank you for any hint!
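One sketch of the pre-processing idea the poster mentions: while parsing each daily log, build one set of user_ids per (day, site); a period query then unions the per-day sets instead of rescanning raw lines. The function names are illustrative, and the field split assumes the pipe-delimited format shown above.

```python
from collections import defaultdict

def build_daily_sets(log_lines):
    """log_lines: iterable of 'date|hour|user_id|site_id|action_id' rows.

    Returns {(date, site_id): set of user_ids} - one small set per day/site,
    computed once while parsing, instead of per report.
    """
    daily = defaultdict(set)
    for line in log_lines:
        date, _hour, user_id, site_id, _action = line.strip().split("|")[:5]
        daily[(date, site_id)].add(user_id)
    return daily

def unique_users(daily, dates, site_id):
    """Distinct users on site_id across the given list of dates."""
    users = set()
    for date in dates:
        users |= daily.get((date, site_id), set())
    return len(users)
```

The sets for each day can live on different servers; unioning pre-built sets for a month touches far less data than the 300-million-row scan described above, though exact distinct counts still cost memory proportional to the number of unique users.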


    Friday
    Apr182008

    Scaling Mania at MySQL Conference 2008

    The 2008 MySQL Conference & Expo has now closed, but what is still open for viewing is all the MySQL scaling knowledge that was shared. Planet MySQL is a great source of the goings on:

  • Scaling out MySQL: Hardware today and tomorrow by Jeremy Cole and Eric Bergen of Proven Scaling. In it are answered all the big questions of life: What about 64-bit? How many cores? How much memory? Shared storage? Finally we learn the secrets of true happiness.
  • Panel Video: Scaling MySQL? Up or Out?. Don't have time? Take a look at the Diamond Note excellent game day summary. Companies like MySQL, Sun, Flickr, Fotolog, Wikipedia, Facebook and YouTube share intel on how many web servers they have, how they handle failure, and how they scale.
  • Kevin Burton in Scaling MySQL and Java in High Write Throughput Environments - How we built Spinn3r shows how they crawl and index 500k posts per hour using MySQL and 40 servers.
  • Venu Anuganti channels Dathan Pattishall's talk on scaling heavy concurrent writes in real time.
  • This time Venu channels Helping InnoDB scale on servers with many cores by Mark Callaghan from Google.
  • Exploring Amazon EC2 for Scale-out Applications by Morgan Tocker, MySQL Canada, and Carl Mercier, Defensio. A RoR based spam filtering service that runs completely on EC2; shows the evolution from a simple configuration to a sharded architecture.
  • Applied Partitioning and Scaling Your (OLTP) Database System by Phil Hilderbrand.
  • Real World Web: Performance & Scalability by Ask Bjorn Hansen. (189 slides!). He promises you haven't seen this talk before. The secret: Think Horizontal.
  • Too many to list here. All the presentations are available on scribd.


    Thursday
    Apr102008

    Mysql scalability and failover...

    Hi, I am the owner of a large community website and currently we are having problems with our database architecture. We are using 2 database servers and spread tables across them to divide reads/writes. We have about 90% reads and 10% writes. We use Memcached on all our webservers to cache as much as we can, and traffic is load balanced between webservers. We have 2 extra servers ready to put to use! We have looked into a couple of solutions so far:

  • Continuent uni/cluster aka Sequoia -> the commercial version is way too expensive, and Java isn't as fast as it's supposed to be.
  • MySQL Proxy -> we couldn't find any good example of how to create a master-master with failover scenario.
  • MySQL Clustering -> seems to be not mature enough; we had a lot of performance issues when we tried to go online with it.
  • MySQL DRBD HA -> only good for failover, cannot be scaled!
  • MySQL Replication -> well, don't get me started ;)

    So now I turn to you guys to help me out. I am sitting with my hands in my hair and see the site still growing and performance slowly getting to its limit. Really need your help!! HELP!


    Tuesday
    Apr082008

    Google AppEngine - A First Look

    I haven't developed an AppEngine application yet; I'm just taking a look around their documentation and seeing what stands out for me. It's not the much speculated super cluster VM. AppEngine is solidly grounded in code and structure. It reminds me a little of the guy who ran a website out of S3, with a splash of Heroku thrown in as a chaser. The idea is clearly to take advantage of our massive multi-core future by creating a shared nothing infrastructure based firmly on a core set of infinitely scalable database, storage and CPU services. Don't forget Google also has a few other services to leverage: email, login, blogs, video, search, ads, metrics, and apps.

    A shared nothing request is a simple beast. By its very nature shared nothing architectures must be composed of services which are themselves already scalable, and Google is signing up to supply that scalable infrastructure. Google has been busy creating a platform of out-of-the-box scalable services to build on. Now they have their scripting engine to bind it all together. Everything that could have tied you to a machine is tossed: no disk access, no threads, no sockets, no root, no system calls, no nothing but service based access. Services are king because they are easily made scalable by load balancing and other tricks of the trade that are easily performed behind the scenes, without any application awareness or involvement.

    Using the CGI interface was not a mistake. CGI is the perfect metaphor for our brave new app container world: get a request, process the request, die, repeat. Using AppEngine you have no choice but to write an app that can be splayed across a pointy well sharpened CPU grid. CGI was devalued because a new process had to be started for every request. It was too slow, too resource intensive. Ironic that in the cloud that's exactly what you want, because that's exactly how you cause yourself fewer problems and buy yourself more flexibility. The model is pure abstraction.
The implementation is pure pragmatism. Your application exists in the cloud and is in no way tied to any single machine or cluster of machines. CPUs run parallel through your application like a swarm of busy bees while wizards safely hidden in a pocket of space-time can bend reality as much as they desire without the muggles taking notice. Yet the abstraction is implemented in a very specific dynamic language that they already have experience with and have confidence they can make work. It's a pretty smart approach. No surprise I guess. One might ask: is LAMP dead? Certainly not in the way Microsoft was hoping. AppEngine is so much easier to use than the AWS environment of EC2, S3, SQS, and SDB. Creating an app in AWS takes real expertise. That's why I made the comparison of AppEngine to Heroku. Heroku is a load and go approach for RoR whereas AppEngine uses Python. You basically make a Python app using services and it scales. Simple. So simple you can't do much beyond making a web app. Nobody is going to make a super scalable transcoding service out of AppEngine. You simply can't load the needed software because you don't have your own servers. This is where Amazon wins big. But AppEngine does hit a sweet spot in the market: website builders who might have previously went with LAMP. What isn't scalable about AppEngine is the scalability of the complexity of the applications you can build. It's a simple request response system. I didn't notice a cron service, for example. Since you can't write your own services a cron service would give you an opportunity to get a little CPU time of your own to do work. To extend this notion a bit what I would like to see as an event driven state machine service that could drive web services. If email needs to be sent every hour, for example, who will invoke your service every hour so you can get the CPU to send the email? 
If you have a long running seven step asynchronous event driven algorithm to follow, how will you get the CPU to implement the steps? This may be Google's intent. Or somewhere in the development cycle we may get more features of this sort. But for now it's a serious weakness.

Here's a quick tour of a few interesting points. Please note I'm copying large chunks of their documentation in this post as that seems the quickest way to the finish line...

  • Very nice project page at Google App Engine. Already has a FAQ, articles, blog, forums, example applications, nice tutorial, and a nice touch is how to work with Django. Some hard chargers are already posting questions to the forum.
  • Python only. More languages will follow. As you are uploading clear text into the engine there's no hiding from mother Google.
  • You aren't getting root. Applications run in a sandbox, which is a secure environment that provides limited access to the underlying operating system. These limitations allow App Engine to distribute web requests for the application across multiple servers, and start and stop servers to meet traffic demands. - An application can only access other computers on the Internet through the provided URL fetch and email services and APIs. Other computers can only connect to the application by making HTTP (or HTTPS) requests on the standard ports. - An application cannot write to the file system. An app can read files, but only files uploaded with the application code. The app must use the App Engine datastore for all data that persists between requests. - Application code only runs in response to a web request, and must return response data within a few seconds. A request handler cannot spawn a sub-process or execute code after the response has been sent.
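    These restrictions enforce exactly the CGI lifecycle described earlier: get a request, process it inside one handler call, return a response, and keep no state. As a purely illustrative sketch (plain Python, invented names, not App Engine code), a handler constrained this way looks like:

```python
import os
import sys


def handle_request(environ, out):
    """Process exactly one request, write a response, and return.

    All work happens inside this single call: no threads, no
    sub-processes, no code after the response is sent. That
    statelessness is what lets any CPU in the grid serve the
    next request.
    """
    path = environ.get("PATH_INFO", "/")
    out.write("Content-Type: text/plain\r\n\r\n")
    out.write("You asked for %s\n" % path)


if __name__ == "__main__":
    # A CGI server sets os.environ and runs this once per request;
    # stdout goes back to the client, then the process exits.
    handle_request(os.environ, sys.stdout)
```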
  • The data access trend continues with the RDBMS being dissed in favor of a properties type interface. - The datastore is not like a traditional relational database. Data objects, or "entities," have a kind and a set of properties. Queries can retrieve entities of a given kind filtered and sorted by the values of the properties. Property values can be of any of the supported property value types. - The datastore uses optimistic locking for concurrency control. An update of an entity occurs in a transaction that is retried a fixed number of times if other processes are trying to update the same entity simultaneously. - They have some notion of transaction: The datastore implements transactions across its distributed network using "entity groups." A transaction manipulates entities within a single group. Entities of the same group are stored together for efficient execution of transactions. Your application can assign entities to groups when the entities are created.
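    The optimistic locking behavior can be sketched in plain Python: read an entity along with a version stamp, mutate a copy, and commit only if the version hasn't moved, retrying a fixed number of times. Everything below (the store, ContentionError, RETRIES) is invented for illustration; none of it is the datastore API.

```python
RETRIES = 3
store = {}  # key -> (version, dict of properties)


class ContentionError(Exception):
    pass


def commit(key, expected_version, new_props):
    """Commit only if nobody bumped the version since we read it."""
    version, _ = store.get(key, (0, {}))
    if version != expected_version:
        raise ContentionError(key)
    store[key] = (version + 1, new_props)


def update(key, mutate):
    """Read-modify-write with a fixed number of optimistic retries."""
    for _attempt in range(RETRIES):
        version, props = store.get(key, (0, {}))
        new_props = mutate(dict(props))
        try:
            commit(key, version, new_props)
            return new_props
        except ContentionError:
            continue  # someone got there first; re-read and retry
    raise ContentionError("gave up after %d attempts" % RETRIES)
```

    The point of the retry loop is that no locks are held while your code runs; contention is detected at commit time instead.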
  • You've got mail: Applications can also send email messages using App Engine's mail service. The mail service also uses Google infrastructure to send email messages. If you've ever been marked a spammer because you send a little email, this is actually a nice feature.
  • It's mostly free for now: 500MB of storage, up to 5 million page views a month, and 10GB bandwidth per day. Additional resources will be available for $$$.
  • Limits exist on various features. If a request takes too long it's killed. You can only get 1,000 results at a time. That sort of thing. Pretty reasonable.
  • Developers must download a Windows, Mac OS X or Linux SDK. Python 2.5 is required.
  • The SDK includes a web server application that simulates the App Engine environment. So this is in the RoR and GWT type mold where you have a nice local development environment that emulates what happens in the deployment environment.
  • Google App Engine supports any framework written in pure Python that speaks CGI (and any WSGI-compliant framework using a CGI adaptor), including Django, CherryPy, Pylons, and web.py. You can bundle a framework of your choosing with your application code by copying its code into your application directory.
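    To make that concrete, the smallest possible case is no framework at all: a bare WSGI callable, which a CGI adaptor like wsgiref.handlers.CGIHandler can serve. This is a generic WSGI sketch (written for a modern Python here; the 2008 runtime was Python 2.5), not App Engine specific code:

```python
def application(environ, start_response):
    """A bare WSGI application: one callable, no framework."""
    path = environ.get("PATH_INFO", "/")
    body = ("Hello from %s" % path).encode("utf-8")
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```

    On App Engine this callable would be handed to wsgiref.handlers.CGIHandler().run(application), exactly as the webapp hello world below does with its own WSGIApplication.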
  • Google has their own framework called webapp. Nice MS style naming.
  • Here's a hello world application using webapp:
    import wsgiref.handlers
    
    from google.appengine.ext import webapp
    
    class MainPage(webapp.RequestHandler):
      def get(self):
        self.response.headers['Content-Type'] = 'text/plain'
        self.response.out.write('Hello, webapp World!')
    
    def main():
      application = webapp.WSGIApplication(
                                           [('/', MainPage)],
                                           debug=True)
      wsgiref.handlers.CGIHandler().run(application)
    
    if __name__ == "__main__":
      main()
    
    This code defines one request handler, MainPage, mapped to the root URL (/). When webapp receives an HTTP GET request to the URL /, it instantiates the MainPage class and calls the instance's get method. Inside the method, information about the request is available using self.request. Typically, the method sets properties on self.response to prepare the response, then exits. webapp sends a response based on the final state of the MainPage instance. The application itself is represented by a webapp.WSGIApplication instance. The parameter debug=True passed to its constructor tells webapp to print stack traces to the browser output if a handler encounters an error or raises an uncaught exception. You may wish to remove this option from the final version of your application.
  • Google is standardizing components on their infrastructure. Take the login interface. When your application is running on App Engine, users will be directed to the Google Accounts sign-in page, then redirected back to your application after successfully signing in or creating an account.
  • Forms look normal. Lots of embedded html. Take a look. Python, like Perl, has a nice bulk string handling syntax so this style isn't as fugly as it would be in C++ or Java.
  • Database access is built around Data Models: A model describes a kind of entity, including the types and configuration for its properties. Here's a taste:
    Example of creation:
    from google.appengine.ext import db
    from google.appengine.api import users
    
    class Pet(db.Model):
      name = db.StringProperty(required=True)
      type = db.StringProperty(required=True, choices=set(["cat", "dog", "bird"]))
      birthdate = db.DateProperty()
      weight_in_pounds = db.IntegerProperty()
      spayed_or_neutered = db.BooleanProperty()
      owner = db.UserProperty()
    
    pet = Pet(name="Fluffy",
              type="cat",
              owner=users.get_current_user())
    pet.weight_in_pounds = 24
    pet.put()
    
    Example of get, modify, save:
    if users.get_current_user():
      user_pets = db.GqlQuery("SELECT * FROM Pet WHERE owner = :1",
                              users.get_current_user())
      pets = list(user_pets)  # materialize results so the edits below aren't lost
      for pet in pets:
        pet.spayed_or_neutered = True

      db.put(pets)
    
    Looks like your normal overly complex data access. Me, I appreciate the simplicity of a string based property interface.
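    That preference for a string based property interface can be sketched as a toy: entities are just kinds plus property dicts, addressed by string names, with no model classes to declare. Entirely hypothetical, not an App Engine API:

```python
class SimpleStore:
    """A toy string-keyed property store: no schemas, no model classes."""

    def __init__(self):
        self._entities = {}  # (kind, id) -> dict of properties
        self._next_id = 0

    def put(self, kind, props):
        """Store a new entity of the given kind; return its key."""
        self._next_id += 1
        key = (kind, self._next_id)
        self._entities[key] = dict(props)
        return key

    def get(self, key, prop):
        return self._entities[key][prop]

    def set(self, key, prop, value):
        self._entities[key][prop] = value

    def query(self, kind, prop, value):
        """All keys of a kind whose named property equals value."""
        return [k for k, props in self._entities.items()
                if k[0] == kind and props.get(prop) == value]
```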
  • You can use Django's HTML Template system.
  • Static files are served using an automated mapping mechanism. You don't get local disk store for your css, flash, and js files.
  • Applications are loaded using a command line tool: appcfg.py update helloworld/.
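    The upload is driven by a small app.yaml manifest in the application directory, roughly shaped like the following (field names are approximate; check the SDK docs for the authoritative format):

```yaml
application: helloworld
version: 1
runtime: python
api_version: 1

handlers:
- url: /.*
  script: helloworld.py
```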
  • Applications are accessed like: http://application-id.appspot.com. That's your domain name.
  • There's a dashboard with seven graphs that give you a quick visual reference of your system usage: Requests per Second, Errors per Second, Bytes Received per Second, Bytes Sent per Second, Megacycles per Second (The amount of CPU megacycles your application uses every second), Milliseconds Used per Second, Number of Quota Denials per Second. I have no idea what a megacycle is either. I think it's bigger than a pint of beer.
  • Also I wonder if this is meant to compete with Facebook more than Amazon?
  • Developers with a lot of little projects will find AppEngine especially useful, which always leaves open an Adoption Led Market play.

    Related Articles

  • HighScalability: Rumors of Signs and Portents Concerning Freeish Google Cloud.
  • Techcrunch: Google Jumps Head First Into Web Services With Google App Engine.
  • HighScalability: Heroku - Simultaneously Develop and Deploy Automatically Scalable Rails Applications in the Cloud.
  • Video of the announcement at Camp David, er CampFireOne.
  • Techdirt: Google Finally Realizes It Needs To Be The Web Platform.
  • TechCrunch Labs: Our Experience Building And Launching An App On Google App Engine
  • Zdnet: Let the PaaS wars begin.
  • ReadWriteWeb: Google: Cloud Control to Major Tom.
  • ComputingAtScale: The Live Web has another heart.
  • ComputingAtScale: Parallelism: The New New Thing!.
  • EETimes: CPU designers debate multi-core future.
  • EETimes: Multicore puts screws to parallel-programming models.
  • Slashdot: Panic in Multicore Land.
  • Mashable: Google App Engine: An Early Look.
  • DeftLabs: Gazing Into The Clouds.
  • Experimenting with Google App Engine. Bret Taylor writes a blog post about the blog he wrote for AppEngine that serves the blog post. Elegant touch.
  • ReadWriteWeb: Google App Engine: History's Next Step or Monopolistic Boondoggle?.
  • GigaOM: App Engine: Competition Is Good for Everyone
  • The Blist: Batteries sold separately - "they’ve got the scalable bit and the hosting bit, but there’s a surprising lack of, well, “web” and “application” going on here."
  • Foobar: Would you use Google App Engine? - " if you think you might like to compete with Google and/or become a really large web site or business yourself, then I don't think that Google App Engine should be your first choice"
  • Silicon Valley Insider: Google's App Engine: Aiming At Facebook, Not Amazon - "Google is not trying to provide pure utility here -- they are trying to provide utility tethered to their infrastructure."
  • Niall Kennedy: Google App Engine for developers - "Overall I am quite impressed with Google App Engine and its potential to remove operations management and systems administration from my task list. I am not confident in Google App Engine as a hosting solution for any real business..."
