
Solving the Client Side API Scalability Problem with a Little Game Theory

Now that the internet has become defined as a mashup over a collection of service APIs, we have a little problem: for clients, using APIs is a lot like drinking beer through a straw. You never get as much beer as you want and you get a headache afterwards. But what if I've been a good boy and deserve a bigger straw? Maybe we can use game theory to model trust relationships over a lifetime of interactions across many different services and then give more capabilities/beer to those who have earned them?

Let's say Twitter limits me to downloading only 20 tweets at a time through their API. But I want more. I may even want to do something so radical as download all my tweets. Of course Twitter can't let everyone do that. They would be swamped serving all this traffic and service would be denied. So Twitter does the rational thing and limits API access as a means of self-protection. As does Google, Yahoo, Skynet, and everyone else.

But when I hit the API limit I think, but hey, it's Todd here, we've been friends a long time now and I've never hurt you. Not once. Can't you just trust me a little? I promise not to abuse you. I never have and won't in the future. At least not on purpose; accidents do happen. Sometimes there's a signal problem and we'll misunderstand each other, but we can work that out. After all, if soldiers during WWI could learn how to stop the killing through forgiveness, so can we.

The problem is Twitter doesn't know me so we haven't built up trust. We could replace trust with money, as in a paid service where I pay for each batch of downloads, but we're better friends than that. Money shouldn't come between us.

And if Twitter knew what a good guy I am I feel sure they would let me download more data. But Twitter doesn't know me and that's the problem. How could they know me?

We could set up authority-based systems like the ones that let certain people march ahead through airport security lines, but that won't scale, and I have a feeling we all know how that strategy will work out in the end.

Another approach to trust is a game theoretic perspective for assessing a user's trust level. Take the iterated prisoner's dilemma, where variations on the tit-for-tat strategy are surprisingly simple ways cooperation could evolve in the API world. We start out cooperating, and if you screw me I'll screw you right back. In a situation where communication is spotty (like through an API) bad signals can be sent, so if the parties have trusted each other before, they'll wait another iteration to see if the other side defects again, and only then retaliate.
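A provider-side version of this strategy can be sketched in a few lines. This is a hypothetical illustration, not any real service's algorithm; the function name, the strike threshold, and the history format are all invented.

```python
# Hypothetical sketch: "tit for tat with forgiveness" for an API provider
# deciding each round whether to grant a client full service (cooperate)
# or throttle it (defect). All names and thresholds are invented.

def tit_for_tat_with_forgiveness(history, strikes_before_retaliation=2):
    """history is a list of booleans: True if the client behaved that
    round, False if it abused the API. Returns True to grant full
    service next round, False to throttle."""
    if not history:
        return True  # start out cooperating
    # Forgiveness: a single bad round (possibly just a bad signal) is
    # tolerated; only repeated recent defections trigger retaliation.
    recent = history[-strikes_before_retaliation:]
    return any(recent)  # throttle only if every recent round was abusive
```

A client with one recent bad round still gets full service; only a run of bad rounds draws retaliation, which is the "wait for another iteration" behavior described above.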

Perhaps if services modeled API limits like a game and assessed my capabilities by how we've played the game together, then capabilities could be set based on earned and demonstrated trust rather than simplistic rules.

A service like Mashery could take us even further by moving us out of the direct reciprocity model, where we judge each other on our one-on-one interactions, and into a more sophisticated indirect reciprocity model, where agents can make decisions to help those who have helped others.

Mashery can take a look at how API users act in the wider playing of multiple services and multiple agents. If you are well behaved using many different services, shouldn't you earn more trust and thus more capabilities?

In the real world if someone vouches for you to a friend then you will likely get more slack because you have some of the trust from your friend backing you. This doesn't work in a one-on-one situation because there's no way to establish your reputation. Mashery, on the other hand, knows you and knows which APIs you are using and how you are using them. Mashery could vouch for you if they detected you were playing fair, so you'd get more capabilities initially and move up the capability scale faster if you continued to behave.

You can obviously go on and on imagining how such a system might work. Of course, there's a dark side. Situations are possible like on Ebay where people spend eons setting up a great reputation only to later cash it in with some fabulous scam. That's what happens in a society though. We all get more capabilities at the price of some extra risk.

Reader Comments (10)

I'm not sure I agree here.

When you say "The problem is Twitter doesn't know me so we haven't built up trust." - I would suspect that the problem isn't trust, it's just that the service has finite resource constraints. Even in the case where all API key holders are Good(tm) and non-abusive, there's still a resource issue, no matter how much the client/user and service provider are in love.

Or, are you assuming that there *is* capacity available, and services are only constraining your requests (rate and/or types) arbitrarily because they don't know you?

November 29, 1990 | Unregistered Commenterallspaw

> the service has finite resource constraints.

But that's not how constraints are applied. Everyone is treated equally regardless of capacity. Assuming a max capacity, how do you allocate resources? If you are 20% under max and we've built up trust, can't I have that 20%? If we've built up trust, can't I get a portion of less trusted users' capacity? Clearly a system should always protect itself, but within those boundaries are there rules other than simple limits that could evolve a more useful community for users?
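The allocation rule being argued for could look something like this sketch, where spare capacity (whatever is left after the service protects itself) is divided in proportion to earned trust rather than equally. Everything here, names and numbers included, is assumed for illustration.

```python
# Illustrative sketch: divide spare capacity among clients in proportion
# to their earned trust weights, rather than applying one flat limit.

def allocate(spare_capacity, trust_scores):
    """trust_scores maps client -> non-negative trust weight.
    Returns each client's share of the spare capacity."""
    total = sum(trust_scores.values())
    if total == 0:
        # no one has earned trust yet: fall back to equal shares
        n = len(trust_scores)
        return {c: spare_capacity / n for c in trust_scores}
    return {c: spare_capacity * w / total for c, w in trust_scores.items()}
```

Under this rule a long-trusted client automatically absorbs capacity that less trusted clients haven't earned, which is exactly the "can't I have that 20%?" question above.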

November 29, 1990 | Unregistered CommenterTodd Hoff

Ah, I understand what you're saying now.

I think that the difficulty might be in what defines "trust". The tricky part that immediately comes to my mind is the situation where this trust/game model would come into play. It might be when a user can demonstrate that he clearly needs more capacity (let's say that his request rate is growing over time), and that his use *also* falls within non-abuse territory.

This might be where different API Terms of Service for providers start to show their differences, I think. Because in my experience, an application whose API usage is accelerating at a high rate is usually one the application provider wants to charge for, which therefore requires it (under most ToS) to have a commercial-use API key, and that negates the trust part, since money is involved. Hm.

Amongst the free and non-commercial API users (with their lower rates of request) I guess there could be a leveling methodology which raises throttles for users who:

- are longtime users
- have no history of harsh queries (think of search terms with 100 ANDs and ORs, with a lat/long geo bounding box)
- provide feedback to the API provider on statistics regarding his application

Hm. :)

November 29, 1990 | Unregistered Commenterallspaw

Good thought, but not a way.

November 29, 1990 | Unregistered CommenterAnonymous

> Take the iterated prisoner's dilemma problem where variations on the
> tit for tat strategy are surprisingly simple ways cooperation could evolve in API world.

Interesting idea. If we apply this idea to the case of you using Twitter, one thing jumps out at us: you need Twitter a lot more than Twitter needs you. Imagine Twitter decided to completely cut you out (for whatever reason). You have no way of retaliating (short of a denial of service attack or spamming, but that's a different discussion). In the prisoner's dilemma situation, the prisoners are more equally matched in that sense.

However, something like this has been used in P2P file-sharing networks. For example, your download limits may be increased if you upload files to other people (therefore promoting fair usage and decreasing the amount of leeching).
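The P2P reciprocity described here can be sketched as a simple rule; the base allowance and the multiplier are made up for illustration:

```python
# Illustrative sketch of P2P-style reciprocity: a peer's download
# allowance grows with what it has uploaded to others. Numbers are
# invented, not any real network's policy.

def download_allowance(uploaded_bytes, base_allowance=1_000_000, multiplier=2):
    """More upload earns a proportionally larger download allowance."""
    return base_allowance + multiplier * uploaded_bytes
```

A leecher gets only the base allowance, while a peer who contributes earns headroom in proportion to its contribution.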

So, maybe, this approach plays out better when parties involved are more peers than clients and servers?

November 29, 1990 | Unregistered CommenterPeter

> you need Twitter a lot more than Twitter needs you

Do you think that's really true? There are quite a few alternatives to Twitter, some even better, so I don't think I really need Twitter at all. I need something Twitterish, but not Twitter itself.

But what Twitter desperately needs is my social graph to keep catalyzing their viral expansion. So much so that they make little social graph traps in the form of widgets so users can set traps in unsuspecting places in the hopes of capturing ever wider nets of new people.

We're really only in the first phases of the great social graph wars, which is why OpenSocial had to rush its troops to the front lines without proper training and with poor equipment. So I think I/we are quite important to Twitter and all social graph services.

And by extension, since the service API is how a great portion of web site services are now delivered, getting that part right will help them keep and gain customers.

We are still very early in the whole mashup-API approach and simple limits have worked, but I don't think they will work once the web of services becomes even denser and even more interdependent. Right now, for example, I get a little miffed at Amazon when their servers hiccup on the book ads and my site takes forever to load.

Now let's extend that scenario to mashups of dozens of services, much like Amazon uses internally to create pages. How do you mesh all those Terms of Service? This is a problem with OpenSocial too. Every social graph host has different TOS and they don't all get along. If Flickr lets me view X pictures and Facebook only lets me get X-Y profiles, then my service fails.

Having a more general and adaptive API capabilities architecture behind the scenes would seem to help shim all these different services together so you aren't always running out of thread just as you need to stitch on another button.

November 29, 1990 | Unregistered CommenterTodd Hoff

I always thought of API restrictions as a loss leader for a service. By allowing you to develop apps which are useful enough for demo and casual use but not scalable to a commercial service, it stimulates a free market of innovation around an API; then, if your app's bandwidth becomes more than a hobby, the API provider can command a nice royalty from your successful service.

It is an interesting idea about applying game theory, though it's not really the prisoner's dilemma. The interesting thing about a tit-for-tat strategy is that it has a very short-term memory, which means that as long as your last usage was acceptable, the API provider's service in return would be acceptable.
This clearly wouldn't work at a small granularity (let's say every API call), since even a malicious app will make some benign calls which will even things out.
You could make it round-quantum based, which sounds more plausible: "the level of service for the next n minutes will be dictated by your behavior in the past n minutes".
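That round-quantum rule could be sketched like this; the base limit, the survival floor, and the behavior signal are all invented for illustration:

```python
# Illustrative sketch of a round-quantum policy: the request cap for the
# next quantum scales with the fraction of well-behaved calls in the
# previous quantum. Numbers and names are hypothetical.

def next_quantum_limit(calls_last_quantum, base_limit=100):
    """calls_last_quantum: list of ok-flags (True = benign call) for
    each call in the previous quantum. Returns the next quantum's cap."""
    if not calls_last_quantum:
        return base_limit  # no history yet: start at the base limit
    good_ratio = sum(calls_last_quantum) / len(calls_last_quantum)
    # scale the cap with behavior, but never below a survival trickle
    return max(10, int(base_limit * good_ratio))
```

A quantum of mostly benign calls keeps the cap near the base limit; a quantum of abuse drops it to the floor until behavior improves.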

The problem with the prisoner's dilemma analogy is that in PD the prisoners are equal entities and must make their decisions independent of each other. In the iterated PD, the only way they can communicate is to punish defection in the last round.
In a client-server environment you can't punish the server by flooding it for defecting last time while you cooperated.
You should be able to play tit-for-tat against itself on both sides.

Rating systems like eBay's, Amazon's, or bank credit ratings in the UK work on a large sliding window.
eBay's strategy has a much longer memory than tit-for-tat (once you get a bad rating, you can't get back up past 99%).
eBay is also based on human feedback from many different interactions with humans, which cannot be represented as a game theory strategy.
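The long-memory rating being contrasted with tit-for-tat might look like this sketch (purely illustrative; real feedback systems are far more involved). The point it shows is that every interaction stays in the score, so one defection drags the percentage down permanently and recovery is slow.

```python
# Illustrative sketch of a long-memory rating: all interactions count
# forever, so a single bad round lowers the score and many good rounds
# are needed to climb back. Hypothetical, not eBay's actual formula.

class LongMemoryRating:
    def __init__(self):
        self.good = 0
        self.total = 0

    def record(self, behaved_well):
        self.total += 1
        if behaved_well:
            self.good += 1

    def percent(self):
        # untested clients start at 100%
        return 100.0 * self.good / self.total if self.total else 100.0
```

Contrast this with tit-for-tat, where only the last round or two matters and a defector is back in good standing almost immediately.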

A tit-for-tat approach would mean that you cannot simply wait for a timeout; you have to demonstrate reform by active co-operation.

If you applied this to APIs, you could write a script which sends benign requests for at least a quantum, building up a good profile. So you could construct an opponent for tit-for-tat which abused the API for one round while doing its real work, and was then benign for another round, restoring its credibility.

>>If Flickr lets me view X pictures and Facebook only lets me get X-Y profiles, then my service fails.

I guess this might happen, but I think the limits should be understandable from the API specification, and if your app really needs more, then you need to sign SLAs with the relevant suppliers declaring your needs. I think if you are a heavy user, the providers have a right to know who you are and what you are trying to do so that:
1) they can advise that your use of the API is correct
2) their engineers can model your use cases to improve the performance of their system, or crucially, ensure that they don't cripple your use cases when they introduce the next upgrade

Why should you create a human/commercial relationship with someone you depend on? Look at the Flickr small print for instance:

"Flickr services are experimental and are currently offered to outside developers on an ad hoc basis with no guarantee of uptime or availability of continued service. We reserve the right to disable access to external applications at any time."

November 29, 1990 | Unregistered CommenterTwm Davies

> I always thought of API restrictions as loss leader for a service.

Perhaps that's how they started, but they have quickly become the service. Twitter and Ebay, for example, do huge numbers through their APIs because millions of programmers can create faster than an individual company. Let a million flowers bloom sort of thing.

> In the iterated PD, the only way they can communicate is to punish for defecting in the last round.

Tit-for-tat is just the gateway drug for the idea. Because of the issues like you bring up one would certainly have to do better.

> I guess this might happen, but I think the limits should be understandable from the API specification and if your app really

They are, but it's the intersection of all those limits that determines what you can do. We see a parallel with open source licensing. If licenses conflict then you can't use the conflicting parts together, which means the parts are less than the whole, and there's no room for making something better together.

> Why should you create a human/commercial relationship with someone you depend on?

Because they have cool stuff that you can leverage to do more cool stuff. Isn't that why we do anything?

November 29, 1990 | Unregistered CommenterTodd Hoff

>>Why should you create a human/commercial relationship with someone you depend on?

No, I meant that more as a question of when this stops being a technical issue and becomes a conventional partnership where contracts are signed.
I'm saying that anonymous mashup is all fine and well so long as the API provider can see the economic benefits, but for a serious business model based on the API of another company, it would be better to declare your interests rather than rely on a level-of-service algorithm.

I think you raise an interesting point though, and for services which for the most part exist as an API (Twitter, for example), there is a need to scale and defend.

One very interesting aspect of API limits is that of data migration. Consider the effort involved in closing a Flickr account and moving it to, say, Picasa. If there are no limits, then it's very easy for Picasa to write a client specifically for migrating all your photos from a competitor's service.
When fadphotoservice3.0 arrives with lots of advertising, the incumbent will want to retain their customers, not make it easy for the new boys to steal them.

November 29, 1990 | Unregistered CommenterTwm Davies

Wouldn't a more flexible quota system be enough? What if you had a certain amount of weekly "twit credits" that you could spend as you see fit? Twitter could set a certain cap to reduce spikes, but that would still be a lot more flexible than the current system.
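The credit idea could be sketched like so; the weekly budget and the per-minute spike cap are invented numbers, not anything Twitter offers:

```python
# Hypothetical sketch of a "twit credits" quota: a weekly budget of
# calls spendable however the client likes, plus a per-minute cap to
# smooth out spikes. All numbers are illustrative.

class CreditQuota:
    def __init__(self, weekly_credits=10_000, per_minute_cap=60):
        self.credits = weekly_credits
        self.cap = per_minute_cap
        self.spent_this_minute = 0

    def tick_minute(self):
        self.spent_this_minute = 0  # call once per minute

    def try_call(self):
        if self.credits <= 0 or self.spent_this_minute >= self.cap:
            return False  # out of weekly budget, or spiking too hard
        self.credits -= 1
        self.spent_this_minute += 1
        return True
```

The client decides when to spend its budget (steady trickle or occasional bulk download), while the per-minute cap still protects the service from spikes.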

[BTW, your link that says "through the weblinks registry" has tags in the wrong place.]

November 29, 1990 | Unregistered Commenterrubinelli
