Tuesday, Feb 12, 2008

We want to cache a lot :) How do we go about it?

We have a lot of dependencies on our SQL databases, and we have heard that caching helps a lot as we move into scaling and providing better performance. So the question is: what are some reliable software products out there that we could consider in this space? We want to put the results of frequently issued database calls that do not change often into this caching layer. Also, what would be an easy way to move only the database changes into the cache, as opposed to reloading or pulling everything into the cache every few minutes or hours? We need something smart that would just push changes to the caching layer as they happen. I guess we could build our own, but are there any good, reliable products out there? Please also mention how they compare on pricing, 'cos that would be a determining factor as well. Thanks.
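One common way to get the "push changes as they happen" behavior is a cache-aside read path combined with explicit invalidation on writes, typically on top of something like memcached. A minimal sketch in Python, assuming a memcached node on localhost and hypothetical query_db/update_db helpers standing in for the real data layer:

    # Cache-aside reads plus explicit invalidation on writes, so the cache is
    # refreshed as changes happen instead of on a fixed reload schedule.
    # Assumes the python-memcached client; query_db/update_db are hypothetical.
    import memcache

    mc = memcache.Client(['127.0.0.1:11211'])

    def get_product(product_id):
        key = 'product:%s' % product_id
        row = mc.get(key)
        if row is None:                               # cache miss: go to SQL
            row = query_db('SELECT * FROM products WHERE id = %s', product_id)
            mc.set(key, row, time=3600)               # keep for up to an hour
        return row

    def update_product(product_id, fields):
        update_db(product_id, fields)                 # write to the database
        mc.delete('product:%s' % product_id)          # next read repopulates the cache

memcached itself is free and open source, which helps on the pricing front; the main cost is the memory you give it.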


Tuesday, Feb 12, 2008

Product: rPath - Creating and Managing Virtual Appliances

Update: GIGAOM on rPath Burns EC2 Appliances in a Web Portal. rBuilder adds a portal that lets users turn software into virtual appliances.

rPath demoed their virtual appliance management system at Monday's AWS Meetup. What they do is help you build a generic virtual machine image deployable on Amazon, VMWare, Xen and other targets. The idea is to build your software application independent of the underlying operating system and deploy it in your own or someone else's datacenter without worrying about all the details. To put their service in context, think of rPath as handling how you build, deploy, and upgrade images, while someone like RightScale handles how you run and manage a cluster of deployed images.

To build a Virtual Appliance you pull together all your packages through their web interface or through a Python-based "recipe" system, select a VM target, and "cook" it all into a VM image you can immediately deploy and run. To make this magic happen they use the Conary package manager and their own Red Hat-compatible OS. One of their major features is a very fine-grained package management system which allows them to perform minimal in-place upgrades of deployed images. The downside is that you must use their packaging system and their OS for this to work; any code you want to install must be installable using their packaging system. There's a free community version available on their website for open sourcers. They make their money from people buying a Virtual Appliance of their build and packaging system and deploying it internally.

So you can integrate their Virtual Appliance system as part of your build and deployment infrastructure. As part of your nightly build, create appliances and have them automatically deployed to your test jigs. Once testing is complete you can deploy into your datacenter. Their smart upgrade features are very nice for a datacenter; usually package management during upgrades is a complete nightmare. For cloud deployment I think this feature is less useful, as I would simply create a new image, fire up new instances using the new image, and bring down my old instances without the cost of an in-place software upgrade. Of course you still have to worry about protocol and data compatibilities.

rPath's Virtual Appliance is kind of a hard idea to really understand because it's still ahead of the curve of what most people are doing. But I think as we move into a world of multiple clouds that we must seed with our images, a layer above the clouds is necessary to manage the whole process. rPath is saying they've already built that layer so you don't have to.
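For a flavor of the Python-based "recipe" system mentioned above, here is a rough sketch of what a Conary-style package recipe looks like, as I understand the conventions; the package name, version, and archive URL are made up, and the exact method names should be checked against rPath's documentation:

    # Hypothetical Conary recipe: rBuilder "cooks" a recipe like this, plus a
    # chosen VM target, into a deployable appliance image. Names are illustrative.
    class MyApp(PackageRecipe):
        name = 'myapp'
        version = '1.0'

        def setup(r):
            r.addArchive('http://example.com/myapp-1.0.tar.gz')  # fetch the source
            r.Make()                                             # build it
            r.MakeInstall()                                      # install into the package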


Monday, Feb 11, 2008

Yahoo Live's Scaling Problems Prove: Release Early and Often - Just Don't Screw Up

TechCrunch chomped down on some initial scaling problems with Yahoo's new live video streaming service, Yahoo Live. After a bit of chewing on Yahoo's old bones, TC spat out: "If Yahoo can't scale something like this (no matter how much they claim it's an experiment, it's still a live service), it shows how far the once brightest star of the online world has fallen."

This kind of thinking kills innovation. When there's no room for a few hiccups or a little failure, you have to cover your ass so completely that nothing new will ever see the light of day. I thought we were supposed to be agile. We are supposed to release early and often. Not every 'i' has to be dotted and not every last router has to be installed before we take the first step of a grand new journey. Get it out there. Let users help you make it better. Listen to customers, make changes, push the new code out, listen some more, and fix problems as they come up. Following this process we'll make something the customer wants and needs, without a year spent in a dark room with a cabal of experts trying to perfectly predict an unknowable future. Isn't this what we are supposed to do? Then give people some space to work things out before you declare their world ended and that they are an embarrassment to their kind.


Thursday, Feb 7, 2008

clusteradmin.blogspot.com - blog about building and administering clusters

A blog about cluster administration. Written by a system administrator working at an HPC (High Performance Computing) data center, mostly dealing with PC clusters (hundreds of servers), SMP machines and distributed installations. The blog concentrates on software/configuration/installation management systems, load balancers, monitoring and other cluster-related solutions.


Thursday, Feb 7, 2008

Looking for good business examples of companies using Hadoop

I have read the blog post about Mailtrust/Rackspace, as well as the interesting things Google and Yahoo are doing. Who else is using Hadoop/MapReduce to solve business problems? TIA johnmwillis.com


Tuesday, Feb 5, 2008

SLA monitoring

Hi, we're running an enterprise SaaS solution that currently serves about 700 customers with up to 50,000 users per customer (and growing quickly). Our customers have SLA agreements with us that contain guaranteed uptimes, response times and other performance counters. With an increasing number of customers and increasing traffic, we find it difficult to provide our customers with actual SLA data.

We could set up external probes that monitor certain parts of the application, but this is time consuming with 700 customers (we do it today for our biggest clients). We can also extract data from the web logs, but they are now approaching 30-40 GB a day. What we really need is monitoring software that not only tracks internal performance counters but also lets us see the application from the customer's viewpoint and allows us to aggregate data in different ways.

Would the best approach be to develop a custom solution (for instance, a distributed app that aggregates data from the different logs every night and stores it in a data warehouse; a rough sketch of that idea follows below), or are there products out there that are suitable for a high scalability environment? Any input is greatly appreciated!
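For the custom-solution route, the nightly aggregation job itself can be fairly small; the harder part is shipping and retaining the logs. A minimal sketch in Python, assuming CSV-style access logs with customer_id and response_ms columns (both assumptions), which emits per-customer daily request counts and response-time percentiles ready to load into a warehouse:

    # Nightly SLA roll-up: stream one day's access log, bucket response times
    # per customer, and print count/p95/p99 rows for the data warehouse.
    # The log path and column names are assumptions for illustration.
    import csv
    import math
    from collections import defaultdict

    def percentile(sorted_vals, p):
        idx = int(math.ceil(p / 100.0 * len(sorted_vals))) - 1
        return sorted_vals[max(idx, 0)]

    def aggregate(log_path):
        times = defaultdict(list)                    # customer_id -> [response_ms, ...]
        with open(log_path) as f:
            for row in csv.DictReader(f):
                times[row['customer_id']].append(float(row['response_ms']))
        for customer, vals in sorted(times.items()):
            vals.sort()
            yield customer, len(vals), percentile(vals, 95), percentile(vals, 99)

    for customer, requests, p95, p99 in aggregate('access-2008-02-05.log'):
        print(customer, requests, p95, p99)          # load these rows into the warehouse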


Tuesday, Feb 5, 2008

Session handling for a site running from more than one data center

If you use a database to store sessions (as some app servers, e.g. WebSphere, do), how would an enterprise-class site housed in two live/live data centers maintain sessions across both? The problem as I see it: since each data center has its own session database, if I flip users so they only access Data Center 1 (by changing the DNS records for the site or some other load balancing technique), all the users previously on Data Center 2 lose their sessions.

What are some pure hardware-based solutions to this that are being used now? That way the applications supporting the web site could be abstracted from the problem. As I see it now, one solution would be to have the session databases in both centers somehow replicate their data to each other. I just don't see the best way to accomplish this: you aren't even guaranteed that the session IDs will be unique, since there are two different application server tiers (again, WebSphere). Not to mention that if the two data centers are some distance apart, replication could be difficult as well.
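On the session ID uniqueness worry specifically, one simple mitigation is to namespace the ID with a data-center identifier so the two tiers can never collide, and then replicate the session rows asynchronously between the two session databases. A minimal sketch, assuming you control session ID generation at the application layer; DC_ID and the SQL below are illustrative assumptions, not how WebSphere does it out of the box:

    # Data-center-prefixed session IDs: IDs minted in dc1 and dc2 can never
    # collide, so rows can be replicated both ways without key conflicts.
    # DC_ID and the table layout are assumptions for illustration.
    import uuid

    DC_ID = 'dc1'   # set to 'dc2' in the other data center

    def new_session_id():
        return '%s-%s' % (DC_ID, uuid.uuid4())

    # Each write goes to the local session database and is replicated
    # asynchronously to the peer, e.g.:
    #   INSERT INTO sessions (session_id, user_id, payload, updated_at) VALUES (...)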


Monday, Feb 4, 2008

IPS/IDS for heavy content site

All, my site will have heavy content (video/pictures). I'm looking for an efficient IPS/IDS solution that would not introduce much latency. I'm most familiar with Cisco ASA, and also familiar with Juniper, Foundry and others. I also came across Snort but haven't used it before. I'm really looking for an appliance (for ease of configuration, support, etc.). Could anyone share their thoughts on the performance of IPS/IDS offerings from these vendors? Thanks! Janakan Rajendran


Monday, Feb 4, 2008

Streaming Video on Amazon EC2?

An Amazon EC2 Flash video streaming solution has been announced by Wowza Media. What do you think about the future of similar solutions? Are Amazon EC2 and S3 ready for video streaming? I have found threads on their forums related to the performance, scalability and high availability of the hosted streaming solution. How would you make it scalable? Is it really cheaper than traditional hosting? Looking forward to your thoughts!


Sunday, Feb 3, 2008

Product: Collectl - Performance Data Collector

From their website: There are a number of times when you find yourself needing performance data. These can include benchmarking, monitoring a system's general health, or trying to determine what your system was doing at some time in the past. Sometimes you just want to know what the system is doing right now. Depending on what you're doing, you often end up using different tools, each designed for that specific situation. Features include:

  • You are able to run with non-integral sampling intervals.
  • Collectl uses very little CPU. In fact it has been measured to use <0.1% when run as a daemon using the default sampling interval of 60 seconds for process and slab data and 10 seconds for everything else.
  • Brief, verbose, and plot formats are supported.
  • You can report aggregated performance numbers on many devices such as CPUs, Disks, interconnects such as Infiniband or Quadrics, Networks or even Lustre file systems.
  • Collectl will align its sampling on integral second boundaries.
  • Supports process and slab monitoring.
  • New to the 2.4.0 release is the monitoring of process I/O statistics.

Unlike most monitoring tools that focus on a small set of statistics, format their output in only one way, or run either interactively or as a daemon but not both, collectl tries to do it all. You can choose to monitor any of a broad set of subsystems, which currently include cpu, disk, inodes, infiniband, lustre, memory, network, nfs, processes, quadrics, slabs, sockets and tcp.

The following is an example of simply running the collectl command with no arguments and its default settings. Below we see what the cpu, network and disk were doing while writing a large file:

#<--------CPU--------><-----------Disks-----------><-----------Network---------->
#cpu sys inter  ctxsw KBRead  Reads KBWrit Writes netKBi pkt-in  netKBo pkt-out
  37  37   382    188      0      0  27144    254     45     68       3      21
  25  25   366    180     20      4  31280    296      0      1       0       0
  25  25   368    183      0      0  31720    275      2     20       0       1

Output can also be saved in a rolling set of logs for later playback or displayed interactively in a variety of formats. If all that isn't enough, there are additional mechanisms for supplying data to external tools via a socket interface or by generating output as s-expressions, a format of choice for some tools such as supermon. You can even create files in space-separated formats for plotting with external packages such as gnuplot, e.g. using 1-second samples (a rough parsing sketch follows below).
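The space-separated plot files are also easy to consume from scripts if gnuplot isn't handy. A minimal sketch, assuming a hypothetical collectl capture file and column positions (check the header line of your own file before trusting them):

    # Plot one CPU column from a space-separated collectl capture.
    # The file name and column indexes below are assumptions for illustration.
    import matplotlib.pyplot as plt

    cpu = []
    with open('server-20080203.tab') as f:       # hypothetical collectl plot file
        for line in f:
            if line.startswith('#'):             # skip header/comment lines
                continue
            fields = line.split()
            cpu.append(float(fields[1]))         # assumed total-CPU column

    plt.plot(range(len(cpu)), cpu)               # one point per sample
    plt.ylabel('CPU %')
    plt.savefig('cpu.png')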
