Friday
Nov 14, 2008

Paper: Pig Latin: A Not-So-Foreign Language for Data Processing

Yahoo has developed a new language called Pig Latin that fits in a sweet spot between high-level declarative querying in the spirit of SQL and low-level, procedural programming à la map-reduce, combining the best of both worlds. The accompanying system, Pig, is fully implemented, and compiles Pig Latin into physical plans that are executed over Hadoop, an open-source map-reduce implementation. Pig has just graduated from the Apache Incubator and joined Hadoop as a subproject. The paper has a few examples of how engineers at Yahoo! are using Pig to dramatically reduce the time required for the development and execution of their data analysis tasks, compared to using Hadoop directly. References: Apache Pig Wiki


Thursday
Nov 13, 2008

CloudCamp London 2: private clouds and standardisation

CloudCamp returned to London yesterday, organised with the help of Skills Matter at the Crypt on Clerkenwell Green. The main topics of this cloud/grid computing community meeting were service-level agreements, connecting private and public clouds, and standardisation issues.


Thursday
Nov 13, 2008

Plenty of Fish Says Scaling for Free Doesn't Pay

Plenty of Fish CEO Markus Frind, famous nerd hero for making over $10 million a year from Google ads on a free dating site he made and ran all by himself, now sees a problem with the free model:

The problem with free is that every time you double the size of your database the cost of maintaining the site grows 6 fold. I really underestimated how much resources it would take, I have one database table now that exceeds 3 billion records. The bigger you get as a free site the less money you make per visit and the more it costs to service a visit...There is really no money in being free and we have to start experimenting with other models now or we won’t be able to compete in 3 or 4 years.
As one commenter succinctly put it: the “golden time” of AdSense is over. Time to look at costs. The POF architecture is to run scarily huge tables on single machines. They also buy and maintain their own SAN. So it seems scaling up is what is increasing costs and decreasing profits. I wonder if the economics of cloud storage and cloud architectures might have a more linear cost curve?
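Taken at face value, Frind's numbers describe a sharply superlinear cost curve (a back-of-the-envelope reading, not POF's actual accounting): if cost multiplies by 6 every time the data doubles, then cost grows as a power of database size N:

cost(N) ∝ N^(log2 6) ≈ N^2.58

By that curve a database 10x larger costs roughly 385x as much to run, while ad revenue per visit stays flat or falls, which is exactly the squeeze Frind describes.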


Tuesday
Nov 11, 2008

Architecture for content management

Hi, I am looking for the logical architecture of the content management part of a portal. Say an org has a lot of business processes, integrates with a few applications, and the whole thing is a portal-based application. What would an architecture framework for this kind of functionality look like?


Monday
Nov 10, 2008

Scalability Perspectives #1: Nicholas Carr – The Big Switch

Scalability Perspectives is a series of posts that highlights the ideas that will shape the next decade of IT architecture. Each post is dedicated to a thought leader of the information age and his vision of the future. Be warned though – the journey into the minds and perspectives of these people requires an open mind.

Nicholas Carr

A former executive editor of the Harvard Business Review, Nicholas Carr writes and speaks on technology, business, and culture. His provocative 2004 book Does IT Matter? set off a worldwide debate about the role of computers in business.

The Big Switch – Rewiring the World, From Edison to Google

Carr's core insight is that the development of the computer and the Internet remarkably parallels that of the last radically disruptive technology, electricity. He traces the rapid morphing of electrification from an in-house competitive advantage to a ubiquitous utility, and how the business advantage rapidly shifted from the innovators and early adopters to corporate titans who made their fortunes from controlling a commodity essential to everyday life. He envisions a similar future for the IT utility in his new book.

... and likewise all parts of the system must be constructed with reference to all other parts, since, in one sense, all the parts form one machine. - Thomas Edison

Carr's vision is that IT services delivered over the Internet are replacing traditional software applications from our hard drives. We rely on the new utility grid to connect with friends on social networks, track business opportunities, manage photo collections or stock portfolios, watch videos, and write blogs or business documents online. All these services hint at the revolutionary potential of the new computing grid and the information utilities that run on it.

In the years ahead, more and more of the information-processing tasks that we rely on, at home and at work, will be handled by big data centers located out on the Internet. The nature and economics of computing will change as dramatically as the nature and economics of mechanical power changed with the rise of electric utilities in the early years of the last century. The consequences for society - for the way we live, work, learn, communicate, entertain ourselves, and even think - promise to be equally profound. If the electric dynamo was the machine that fashioned twentieth century society - that made us who we are - the information dynamo is the machine that will fashion the new society of the twenty-first century.

The utilitarians, as Carr calls them, can deliver breakthrough IT economics through the use of highly efficient data centers and scalable, distributed computing, networking and storage architectures.

There's a new breed of Internet company on the loose. They grow like weeds, serve millions of customers a day and operate globally. And they have very, very few employees. Look at YouTube, the video network. When it was bought by Google in 2006, for more than $1 billion, it was one of the most popular and fastest growing sites on the Net, broadcasting more than 100 million clips a day. Yet it employed a grand total of 60 people. Compare that to a traditional TV network like CBS, which has more than 23,000 employees.

Goodbye, Mr. Gates

Such is the title of Chapter 4 of the book. “The next sea change is upon us.” Those words appeared in an extraordinary memorandum that Bill Gates sent to Microsoft's top managers and engineers on October 30, 2005. “Services designed to scale to tens or hundreds of millions [of users] will dramatically change the nature and cost of solutions deliverable to enterprise or small businesses.” This new wave, he concluded, “will be very disruptive.”

IT in 2018: From Turing’s Machine to the Computing Cloud

Carr's new internet.com eBook concludes that, thanks to Alan Turing's theory of the Universal Computing Machine and the rise of modern virtualization technologies:
  • With enough memory and enough speed, Turing’s work implies, a single computer could be programmed, with software code, to do all the work that is today done by all the other physical computers in the world.
  • Once you virtualize the computing infrastructure, you can run any application, including a custom-coded one, on an external computing grid.
  • In other words: Software (coding) can always be substituted for hardware (switching).

Into the Cloud

Carr demonstrates the power of the cloud through the example of the answering machine, which has been vaporized into the cloud. This is happening to our e-mails, documents, photo albums, movies, friends and world (Google Earth?), too. If you’re of a certain age, you’ll probably remember that the first telephone answering machine you used was a bulky, cumbersome device. It recorded voices as analog signals on spools of tape that required frequent rewinding and replacing. But it wasn’t long before you replaced that machine with a streamlined digital answering machine that recorded messages as strings of binary code, allowing all sorts of new features to be incorporated into the device through software programming. But the virtualization of telephone messaging didn’t end there. Once the device became digital, it didn’t have to be a device anymore – it could turn into a service running purely as code out in the telephone company’s network. And so you threw out your answering machine and subscribed to a service. The physical device vaporized into the “cloud” of the network.

The Great Enterprise of the 21st Century

Carr considers building scalable web sites and services a great opportunity for this century. Good news for highscalability.com :-) Just as the last century’s electric utilities spurred the development of thousands of new consumer appliances and services, so the new computing utilities will shake up many markets and open myriad opportunities for innovation. Harnessing the power of the computing grid may be the great enterprise of the twenty-first century.

Information Sources


Wednesday
Nov 5, 2008

Managing applications on the cloud using a JMX Fabric

This post describes how you can create a federated management model using the standard JMX API. Applications that already use a standard JMX interface can plug in the new federated implementation without changing the application code and without introducing additional performance overhead.
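As a rough sketch of why the standard API matters here (a hypothetical example, not code from the post): an application that registers a plain standard MBean, like the one below, only touches javax.management, so the MBeanServer underneath could be swapped for a federated fabric implementation without changing this code.

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class JmxRegistration {
    // Standard MBean naming convention: interface FooMBean, class Foo.
    public interface RequestStatsMBean {
        long getRequestCount();
    }

    public static class RequestStats implements RequestStatsMBean {
        private volatile long count;
        public long getRequestCount() { return count; }
        public void increment() { count++; }
    }

    public static void main(String[] args) throws Exception {
        // The only pluggable point: a federated fabric would return its
        // own MBeanServer here instead of the local platform server,
        // and the rest of the code would stay exactly the same.
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        RequestStats stats = new RequestStats();
        server.registerMBean(stats, new ObjectName("myapp:type=RequestStats"));
        stats.increment();
        // The bean is now visible to any JMX client, e.g. jconsole.
        System.out.println("requests = " + stats.getRequestCount());
    }
}
```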


Monday
Nov 3, 2008

How Sites are Scaling Up for the Election Night Crush

Election night is a big traffic boost for news and social sites. Yahoo expects up to 400 million page views on Election Day. Data Center Knowledge has an excellent article on how various sites are preparing to handle spikes in election night traffic. Some interesting bits:

  • Prepare ahead. Don't wait until the spike hits; plan and prepare before the blessed event.
  • Use a CDN. Daily Kos puts images on a CDN, but the dynamic nature of their site means they can't use a CDN for their other content.
  • Scale up. Daily Kos: "to handle the traffic better, we moved to a cluster of six quad core Xeons with 8GB RAM for webheads that all boot off a central NFS (Network File System) root, with the capability of adding more webheads as needed." They also "added two 16GB eight-core Xeons and a 6×73GB RAID-10 array for database files running a MySQL master/slave setup."
  • Add Cache. Daily Kos added a 1GB memcached instance to each webhead.
  • Change Caching Strategy. Daily Kos puts fully rendered pages into memcached.
  • Change Serving Strategy. Daily Kos serves cached pages from memcached directly to anonymous users via lighttpd running as the front end proxy (a sketch of this pattern follows the list). This moves a lot of work off the backend and distributes it across the new hefty webheads. Site performance has improved greatly.
  • Add Capacity. Limelight expanded its network capacity to over 2 Terabytes per second.

Tonight is a big night for a lot of sites. It's interesting to see how some are responding to the challenge. A lot of what they are doing will work for you too.
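As a rough illustration of the Daily Kos caching and serving strategy above (in their setup lighttpd itself pulls pages from memcached; this sketch shows the same idea one level up, at the application tier, using the spymemcached Java client, with hypothetical class and renderer names):

```java
import java.net.InetSocketAddress;
import net.spy.memcached.MemcachedClient;

public class RenderedPageCache {
    private static final int TTL_SECONDS = 60; // assumed freshness window

    private final MemcachedClient cache;

    public RenderedPageCache(String host, int port) throws Exception {
        cache = new MemcachedClient(new InetSocketAddress(host, port));
    }

    public String servePage(String url, boolean anonymous) {
        if (anonymous) {
            // Anonymous users all see the same HTML, so a fully rendered
            // page can be returned straight from memcached without
            // touching the application or database tier.
            String cached = (String) cache.get("page:" + url);
            if (cached != null) return cached;
        }
        // Logged-in users and cache misses pay the full render cost.
        String html = renderPage(url);
        if (anonymous) {
            // Repopulate the cache only with the anonymous view,
            // never a personalized page.
            cache.set("page:" + url, TTL_SECONDS, html);
        }
        return html;
    }

    // Hypothetical stand-in for the real template/render pipeline.
    private String renderPage(String url) {
        return "<html><!-- rendered " + url + " --></html>";
    }
}
```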


Sunday
Nov 2, 2008

Strategy: How to Manage Sessions Using Memcached

Dormando shows an enlightened middle way for storing sessions that uses both a cache and the database. Sessions are a perfect cache candidate because they are transient and smallish, and since they are usually accessed on every page access, removing all that load from the database is a good thing. But as Dormando points out, session caches have problems. If you remove expiration times from the cache and you run out of memory, then no more logins. If a cache server fails or needs to be upgraded, then you've just logged out a bunch of potentially angry users. The middle ground Dormando proposes is using both the cache and the database:

  • Reads: read from the cache first, then the database. Typical cache logic.
  • Writes: write to memcached every time; write to the database every N seconds (assuming the data has changed). There's a small chance of data loss, but you've still greatly reduced the database load while providing reliability. Nice solution. (A sketch of the pattern follows below.)
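A minimal sketch of this read-through cache with periodic database write-back, using the spymemcached Java client (the Session shape, TTL values, and the DB helpers are hypothetical stand-ins for your persistence layer):

```java
import java.io.Serializable;
import java.net.InetSocketAddress;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import net.spy.memcached.MemcachedClient;

public class SessionStore {
    public static class Session implements Serializable {
        public String userId;
        public long lastSeen;
    }

    private static final int CACHE_TTL = 30 * 60;             // 30-minute session TTL (assumed)
    private static final long DB_WRITE_INTERVAL_MS = 60_000;  // persist each session at most every N seconds

    private final MemcachedClient cache;
    private final Map<String, Long> lastDbWrite = new ConcurrentHashMap<>();

    public SessionStore(MemcachedClient cache) {
        this.cache = cache;
    }

    // Reads: cache first, then the database; repopulate the cache on a miss.
    public Session read(String id) {
        Session s = (Session) cache.get("sess:" + id);
        if (s == null) {
            s = loadFromDb(id);                       // hypothetical DB read
            if (s != null) cache.set("sess:" + id, CACHE_TTL, s);
        }
        return s;
    }

    // Writes: memcached every time, the database only every N seconds.
    public void write(String id, Session s) {
        cache.set("sess:" + id, CACHE_TTL, s);
        long now = System.currentTimeMillis();
        Long last = lastDbWrite.get(id);
        if (last == null || now - last >= DB_WRITE_INTERVAL_MS) {
            saveToDb(id, s);                          // hypothetical DB write
            lastDbWrite.put(id, now);
        }
    }

    // Hypothetical stand-ins for the real persistence layer.
    private Session loadFromDb(String id) { return null; }
    private void saveToDb(String id, Session s) { }
}
```

Usage is just session = store.read(sessionId) on each request and store.write(sessionId, session) on change; the worst case after a crash is losing the last N seconds of session updates, not the login itself.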


Thursday
Oct 30, 2008

Olio Web2.0 Toolkit - Evaluate Web Technologies and Tools

How do you evaluate and decide which web technologies (and there are myriads out there) to use for your new web application? Which one potentially gives you the best performance? Which one will likely give you the shortest time-to-market? The Apache Incubator project Olio might help. Olio is an open source web 2.0 toolkit to help evaluate the suitability, functionality and performance of web technologies. Olio defines an example web 2.0 application (an events site somewhat like yahoo.com/upcoming) and provides three initial implementations: PHP, Java EE and Ruby on Rails (RoR). The toolkit also defines ways to drive load against the application in order to measure performance. Apache Olio could be used to:

  • Understand how to use various web 2.0 technologies such as AJAX, memcached, MogileFS etc. Use the code in the application to understand the subtle complexities involved and how to get around issues with these technologies.
  • Evaluate the differences in the three implementations (PHP, Ruby and Java) to understand which might work best for your situation.
  • Within each implementation, evaluate different infrastructure technologies by changing the servers used (e.g. Apache vs lighttpd, MySQL vs PostgreSQL, Ruby vs JRuby etc.).
  • Drive load against the application to evaluate the performance and scalability of the chosen platform.
  • Experiment with different algorithms (e.g. memcache locking, a different DB access API) by replacing portions of code in the application.
Olio started its life as the web2.0kit developed by Sun Microsystems in collaboration with the U.C. Berkeley RAD Lab, and was presented at Velocity 2008.


Thursday
Oct 30, 2008

The case for functional decomposition

Hi all, I'm a big fan of http://highscalability.com/, and in my current development I have been looking to decompose my application along functional boundaries as a route to being able to scale out the server side, specifically the database layer.

The problem comes when there are links between the data in different components, i.e. one component holds all the user data, but another component needs to reference a user as the owner of some piece of data. I'm currently doing this by holding the primary key information for each side of the link (as you would if they all lived in a single database), but this link table needs to exist in both components to allow lookups to be done in either direction, i.e. 'get the things a specific user owns' and 'get the owners of this specific thing' would each use a different component. The alternative would be to store the link data in only one of the components, but then the reverse lookups would require two calls instead of just one.

My question is this: is the duplication of these link tables some kind of code smell I should be avoiding, or is this just the way things go when you split your app along functional lines like this? Is this sort of approach really applicable to anyone other than the eBays of this world? Should the rest of us just keep putting more functionality into the same back end?

Cheers, Robin
