Now that the internet has become a mashup over a collection of service APIs, we have a little problem: for clients, using APIs is a lot like drinking beer through a straw. You never get as much beer as you want and you get a headache afterward. But what if I've been a good boy and deserve a bigger straw? Maybe we can use game theory to model trust relationships over a lifetime of interactions across many different services, and then give more capabilities/beer to those who have earned them.

Let's say Twitter limits me to downloading only 20 tweets at a time through their API. But I want more. I may even want to do something as radical as download all my tweets. Of course Twitter can't let everyone do that. They would be swamped serving all this traffic and service would be denied. So Twitter does the rational thing and limits API access as a means of self-protection. As do Google, Yahoo, Skynet, and everyone else.

But when I hit the API limit I think: hey, it's Todd here, we've been friends a long time now and I've never hurt you. Not once. Can't you just trust me a little? I promise not to abuse you. I never have and I won't in the future. At least not on purpose; accidents do happen. Sometimes there's a signal problem and we'll misunderstand each other, but we can work that out. After all, if soldiers during WWI could learn how to stop the killing through forgiveness, so can we.

The problem is Twitter doesn't know me, so we haven't built up trust. We could replace trust with money, as in a paid service where I pay for each batch of downloads, but we're better friends than that. Money shouldn't come between us. If Twitter knew what a good guy I am, I feel sure they would let me download more data. But Twitter doesn't know me, and that's the problem. How could they know me?
We could set up authority-based systems like the ones that let certain people march ahead through airport security lines, but that won't scale, and I have a feeling we all know how that strategy will work out in the end. Another approach to trust is a game-theoretic perspective for assessing a user's trust level. Take the iterated prisoner's dilemma, where variations on the tit-for-tat strategy are surprisingly simple ways cooperation could evolve in API world. We start out cooperating, and if you screw me I'll screw you right back. In a situation where communication is spotty (like through an API), bad signals can be sent, so if the two sides have trusted each other before, they'll wait for another iteration to see if the other side defects again, and only then retaliate. Perhaps if services modeled API limits like a game and assessed my capabilities by how we've played the game together, then capabilities could be set based on earned and demonstrated trust rather than simplistic rules.

A service like Mashery could take us even further by moving us out of the direct reciprocity model, where we judge each other on our one-on-one interactions, and into a more sophisticated indirect reciprocity model, where agents can make decisions to help those who have helped others. Mashery can look at how API users act on the wider playing field of multiple services and multiple agents. If you are well behaved using many different services, shouldn't you earn more trust and thus more capabilities? In the real world, if someone vouches for you to a friend, you will likely get more slack because some of your friend's trust is backing you. This doesn't work in a one-on-one situation because there's no way to establish your reputation. Mashery, on the other hand, knows you, knows which APIs you are using, and knows how you are using them.
Mashery could vouch for you if they detected you were playing fair, so you'd get more capabilities initially and move up the capability scale faster if you continued to behave. You can obviously go on and on imagining how such a system might work. Of course, there's a dark side. Situations are possible like on eBay, where people spend eons building up a great reputation only to later cash it in on some fabulous scam. That's what happens in a society, though. We all get more capabilities at the price of some extra risk.
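As a toy illustration of the forgiving tit-for-tat idea above, here is a minimal Python sketch. The `TrustfulRateLimiter` class and all the numbers are made up for the example; no real API works this way. The limiter grows a client's allowance with each cooperative round and cuts it back (but not all the way to zero) when the client defects:

```python
# Hypothetical sketch of tit-for-tat API rate limiting.
# All class names and numbers are illustrative, not a real service's rules.

class TrustfulRateLimiter:
    """Start cooperative; punish abuse, but forgive after good behavior."""

    def __init__(self, base_limit=20, max_limit=3200):
        self.base_limit = base_limit
        self.max_limit = max_limit
        self.trust = 0  # rounds of consecutive-ish good behavior

    @property
    def limit(self):
        # Each earned round of trust doubles the allowance, capped.
        return min(self.base_limit * (2 ** self.trust), self.max_limit)

    def record_round(self, client_behaved):
        if client_behaved:
            self.trust += 1                      # cooperate -> more capability
        else:
            self.trust = max(0, self.trust - 2)  # defect -> retaliate, but not forever

limiter = TrustfulRateLimiter()
for behaved in [True, True, True, False, True]:
    limiter.record_round(behaved)
# After three good rounds, one defection, and one good round,
# limiter.limit is 80: punished, but trust is being rebuilt.
```

The forgiveness step (subtracting trust instead of zeroing it) is what lets an accidental "bad signal" recover instead of locking a client out forever.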
Hello all, does anyone have experience scaling a European website to China? The main problem in China is internet connectivity to websites outside China: latency and packet loss (and perhaps filtering) make things difficult. The options I see are:

1. Host your application in China, but where? I haven't gotten an answer from any Chinese ISP I contacted. On the other hand, I don't really want to host in China.
2. Build your own CDN. Wikipedia shows how it goes: get a bunch of machines (but where? see point 1), put Squid on them, implement intelligent cache invalidation, and you're set. But where can I get machines in China? Where in China do I need them? There are some big ISPs with limited peering capability, so I'd need servers in every network.
3. Get professional CDN services: Akamai, ChinaCache, CDNetworks, etc. They all provide services in China. The problem is: they are all very expensive.
4. Amazon EC2/S3? Is it worth thinking about this? I am not sure, because they only have US and Ireland based datacenters, so we are stuck with the connectivity problem.

My favourite way: rent a bunch of Linux servers in 4-5 big cities in China in different networks and build my own CDN. What do you think?

Regards,
Bjoern
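For option 2, a single edge node might look something like the Squid reverse-proxy sketch below. All hostnames, IPs, and cache sizes are placeholders, and the PURGE-based invalidation is just one possible way to do the "intelligent cache invalidation" part:

```
# squid.conf sketch for one edge node of a DIY CDN (option 2).
# Hostnames, addresses, and sizes are placeholders.
http_port 80 accel defaultsite=www.example.com
cache_peer origin.example.com parent 80 0 no-query originserver name=origin

acl our_site dstdomain www.example.com
http_access allow our_site

# Crude invalidation: let only the origin send PURGE requests
# to evict changed objects from this edge node.
acl purge method PURGE
acl origin_ip src 203.0.113.10
http_access allow purge origin_ip

http_access deny all

cache_mem 512 MB
cache_dir ufs /var/spool/squid 20000 16 256
```

The origin would fire a PURGE at every edge node whenever content changes; with servers in 4-5 cities that is still a manageable fan-out.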
I have some experience with a very large OLTP system that is 7+ TB in size and performs very well for 30K+ concurrent users. It is built using InterSystems Caché, which is based on the very old but very scalable MUMPS platform. Why don't I see more discussions about architectures such as these in this forum? I am curious why this platform scales so much better than the typical RDBMS.
I see everyone talking about how the LAMP stack costs less than the J2EE stack. I'm a newbie; can anyone please explain what the J2EE stack is?
From Wikipedia: SmartFrog is an open-source software framework, written in Java, that manages the configuration, deployment and coordination of a software system broken into components. These components may be distributed across several network hosts. The configuration of components is described using a domain-specific language, whose syntax resembles that of Java. It is a prototype-based object-oriented language, and may thus be compared to Self. The framework is used internally in a variety of HP products. Also, it is being used by HP Labs partners like CERN.
Ever feel like the blogosphere is 500 million channels with nothing on? Tailrank finds the internet's hottest channels by indexing over 24M weblogs and feeds per hour. That's 52TB of raw blog content (no, not sewage) a month and requires continuously processing 160Mbits of IO. How do they do that? This is an email interview with Kevin Burton, founder and CEO of Tailrank.com. Kevin was kind enough to take the time to explain how they scale to index the entire blogosphere.
Hi, a year ago I saw that NetApp sold NetCache to Blue Coat. My site is a heavy NetCache user and we cache 83% of our site. We tested with Blue Coat and F5 WA and we are not getting the same performance as NetCache. Have any of you had the same issue? Or does anybody know of another product that can handle as much traffic? Thanks, Rodrigo
Bees have a similar problem to website servers: how to do a lot of work with limited resources in an ever-changing environment. Usually lessons from biology are hard to apply to computer problems. Nature throws hardware at problems. Billions and billions of cells cooperate at different levels of organization to find food, fight lions, and make sure your DNA is passed on. Nature's software is "simple," but her hardware rocks. We do the opposite. For us hardware is in short supply, so we use limited hardware and leverage "smart" software to work around our inability to throw hardware at problems. But we might be able to borrow some load balancing techniques from bees.

What do bees do that we can learn from? Bees do a dance to indicate the quality and location of a nectar source. When a bee finds a better source it does a better dance, and resources shift to the new location. This approach may seem inefficient, but it turns out to be "optimal for the unpredictable nectar world." Craig Tovey and Sunil Nakrani are trying to apply these lessons to more efficiently allocate work to servers:

Tovey and Nakrani set to work translating the bee strategy for these idle Internet servers. They developed a virtual “dance floor” for a network of servers. When one server receives a user request for a certain Web site, an internal advertisement (standing in a little less colorfully for the dance) is placed on the dance floor to attract any available servers. The ad’s duration depends on the demand on the site and how much revenue its users may generate. The longer an ad remains on the dance floor, the more power available servers devote to serving the Web site requests advertised.

Sounds like an open source project that could get a lot of good buzz. You can imagine lots of cool logos and sweet project names. Maybe it could be sponsored by the Honey Council?
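The dance-floor scheme can be sketched in a few lines: each site posts an advert whose strength stands in for how long it has been on the floor, and each idle server picks a site with probability proportional to that strength. Everything here (the `allocate` function, the site names, the numbers) is an illustration of the idea, not the researchers' actual algorithm:

```python
# Toy sketch of bee-style load allocation: idle servers choose a site
# with probability proportional to its advert "strength" (the stand-in
# for how long the ad has danced on the floor). Names/numbers are made up.
import random

def allocate(servers, adverts, rng=None):
    """adverts: {site: strength}. Returns {server: chosen site}."""
    rng = rng or random.Random(42)  # seeded for a repeatable demo
    total = sum(adverts.values())
    assignment = {}
    for server in servers:
        r = rng.uniform(0, total)   # roulette-wheel selection
        cum = 0.0
        for site, strength in adverts.items():
            cum += strength
            if r <= cum:
                assignment[server] = site
                break
    return assignment

# A hot site dancing 4x harder than a quiet one pulls in most servers,
# yet the quiet one still gets an occasional worker -- the "inefficiency"
# that pays off when demand shifts unpredictably.
adverts = {"news.example": 8.0, "shop.example": 2.0}
assignment = allocate([f"srv{i}" for i in range(10)], adverts)
```

When demand shifts, only the strengths change; the servers re-sort themselves on the next round with no central scheduler.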
From the website: The lbpool project provides a load balancing JDBC driver for use with DB connection pools. It wraps a normal JDBC driver, providing reconnect semantics in the event of additional hardware availability, partial system failure, or uneven load distribution. It also evenly distributes all new connections among slave DB servers in a given pool. Each time connect() is called it will attempt to use the best server with the least system load.

The biggest scalability issue with large applications that are mostly READ bound is the number of transactions per second that the disks in your cluster can handle. You can generally solve this in two ways:

1. Buy bigger and faster disks with expensive RAID controllers.
2. Buy CHEAP hardware with CHEAP disks, but lots of machines.

We prefer the cheap hardware approach, and lbpool allows you to do this. Even if you *did* manage to use cheap hardware, most load balancing hardware is expensive, requires a redundant balancer (in case it fails), and seldom has native support for MySQL. The lbpool driver addresses all these needs.

The original solution was designed for use within MySQL replication clusters. This generally involves a master server handling all writes, with a series of slaves which handle all reads. In this situation we could have hundreds of slaves, and lbpool would load balance queries among the boxes. If you need more read performance, just buy more boxes. If any of them fail it won't hurt your application, because lbpool will simply block for a few seconds and move your queries over to a new production server.

In this post Kevin Burton of Spinn3r mentions they've been using this product to good effect for handling MySQL replication faults, balancing, and crashed servers.
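This is not lbpool's actual API (lbpool is a Java JDBC driver), but the core routing idea described above (writes go to the master, each new read connection goes to the least-loaded live slave, and dead slaves are skipped) can be sketched in a few lines of Python:

```python
# Toy sketch of least-loaded read routing over a replication cluster.
# Not lbpool's real API; class and server names are made up.

class ReplicationPool:
    def __init__(self, master, slaves):
        self.master = master  # all writes would go here
        self.slaves = {s: {"load": 0, "alive": True} for s in slaves}

    def connect_for_read(self):
        """Route a new read connection to the least-loaded live slave."""
        live = [(info["load"], name)
                for name, info in self.slaves.items() if info["alive"]]
        if not live:
            raise RuntimeError("no live slaves")
        _, best = min(live)                 # least system load wins
        self.slaves[best]["load"] += 1      # account for the new connection
        return best

    def mark_down(self, name):
        # On failure, new queries simply shift to the surviving slaves.
        self.slaves[name]["alive"] = False

pool = ReplicationPool("master", ["slave1", "slave2", "slave3"])
first = pool.connect_for_read()   # all loads equal; ties broken by name
pool.mark_down(first)
second = pool.connect_for_read()  # transparently comes from a survivor
```

The real driver adds the hard parts (detecting the failure, blocking briefly, and reconnecting), but the routing decision itself is this simple.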