Game theory, contracts, altruism

In my story Capitalism Sat, “Mathematics” says a few things about game theory that come from my own work, and I'll discuss them here. Knowing some game theory helps, but you should be able to follow along without any prior background.

The classic paradox from game theory is the Prisoner's Dilemma, or its more general form, the tragedy of the commons – a situation where players can either cooperate or betray each other, where each player gains by betraying no matter what the others do, but where everyone is better off if everyone cooperates than if everyone betrays. There have been a lot of attempts to “solve” the Prisoner's Dilemma – that is, to find a reason why purely self-interested players should cooperate. Superrationality is one of them, but it only works in a limited set of situations. A more broadly effective solution is to allow contracts.
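
To make this concrete, here's a minimal sketch in Python, using the same payoff numbers I'll reuse in the Two Exploiters scenario below (any standard Prisoner's Dilemma payoffs behave the same way):

    # Prisoner's Dilemma payoffs: (my score, their score), indexed by
    # (my choice, their choice). "C" = cooperate, "D" = betray.
    PAYOFFS = {
        ("C", "C"): (3, 3), ("C", "D"): (1, 4),
        ("D", "C"): (4, 1), ("D", "D"): (2, 2),
    }

    def is_nash(a, b):
        """True if neither player can gain by changing only their own choice."""
        return all(PAYOFFS[(alt, b)][0] <= PAYOFFS[(a, b)][0] and
                   PAYOFFS[(a, alt)][1] <= PAYOFFS[(a, b)][1]
                   for alt in "CD")

    for a, b in PAYOFFS:
        if is_nash(a, b):
            print((a, b))  # prints only ('D', 'D'), even though ('C', 'C')
                           # would give both players a higher score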

Contracts

If there is an independent contract enforcer, then the players will cooperate. Purely self-interested players can't make binding agreements on their own, because (by the definition of their self-interest) they will break any agreement as soon as breaking it pays. But a self-interested player can enter a contract if doing so benefits them, and since the enforcer will penalize them if they break it, it's in their self-interest to keep the contract. Formally, just as a Nash equilibrium is a set of choices where no individual player can gain by changing their choice, I define a contract equilibrium as a set of choices where no group of players can all gain by collectively changing their choices.[1] With a finite collection of choices, there is always at least one contract equilibrium.
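
This definition can be checked mechanically, too. One caveat in translating it to code: since the enforcer penalizes any lone player who breaks a standing contract, the deviations that matter are joint renegotiations by groups of two or more players. A sketch of the resulting check, on the same Prisoner's Dilemma as above:

    from itertools import combinations, product

    CHOICES = ("C", "D")
    PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (1, 4),
               ("D", "C"): (4, 1), ("D", "D"): (2, 2)}

    def is_contract_equilibrium(profile):
        """True if no group of players can all gain by collectively
        changing their choices. Lone deviators are assumed to be held
        to the contract by the enforcer, so groups start at size 2."""
        base = PAYOFFS[profile]
        n = len(profile)
        for size in range(2, n + 1):
            for group in combinations(range(n), size):
                for alternative in product(CHOICES, repeat=size):
                    new = list(profile)
                    for player, choice in zip(group, alternative):
                        new[player] = choice
                    if all(PAYOFFS[tuple(new)][p] > base[p] for p in group):
                        return False
        return True

    for profile in PAYOFFS:
        print(profile, is_contract_equilibrium(profile))
    # ('C', 'C') passes and ('D', 'D') fails -- the reverse of the Nash
    # result. (The lopsided profiles pass too: no *group* of players
    # gains by renegotiating them, although nobody would sign them.)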

Contract equilibria produce desirable results for the Prisoner's Dilemma, the centipede game, the stag hunt, and a lot of other simple games. They also have one guaranteed virtue: if I define “strictly worse” to mean “worse for every player”, then a contract equilibrium can never be strictly worse than another possible result (because then all the players would form a contract to get the other result instead). Nash equilibria, on the other hand, can produce results that are strictly worse than contract equilibria.

“Strictly worse” isn't a very good way to judge, of course. What about a situation where one player benefits a little, but all other players are much worse off? To judge those situations, we need a social welfare function – in short, if you represent each person's happiness as a number, a social welfare function is a way to get a single number for the whole society.[2] There are lots of different proposed social welfare functions, and I don't really want to say that one of them is better than the others. Here are a few examples (with a code sketch after the list):

  • minimum: “A society is only worth as much as it gives to its most disadvantaged member”
  • sum or average[3]: “Any person being 1 point happier is as good as any other”
  • a weighted average with the worse-off having more weight (a generalized compromise between the above positions)
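
For concreteness, here's what those three examples might look like in Python; the particular weighting scheme in the third one is my own arbitrary illustration:

    # Three candidate social welfare functions. Input: one happiness
    # number per person; output: one number for the whole society.

    def welfare_minimum(happiness):
        return min(happiness)

    def welfare_average(happiness):
        return sum(happiness) / len(happiness)

    def welfare_weighted(happiness, weight=2.0):
        # The worse-off count more. 'weight' is an arbitrary knob:
        # weight=1 gives the plain average, and as it grows, the
        # function approaches the minimum.
        ranked = sorted(happiness, reverse=True)  # worst-off person last
        weights = [weight ** i for i in range(len(ranked))]
        return sum(w * h for w, h in zip(weights, ranked)) / sum(weights)

    # All three depend only on the sorted values, and raising every
    # input raises the output -- the two conditions introduced below.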

Fortunately, I don't need to pick a specific function. I can prove counterintuitive results with any social welfare function that obeys a few simple conditions:

  • Impartiality: An arbitrary reordering of the people must produce the same result.
  • Weak Pareto optimality: If every individual person is better off, society is better off.

There are a lot of other reasonable conditions you could add, but as it turns out, this is all I need. Now I can demonstrate a game where the contract equilibrium is worse than the Nash equilibrium.

The Two Exploiters scenario: There are five players. The first two are in a Prisoner's Dilemma situation: if they both cooperate, they both score 3; if they both betray, they both score 2; and if only one betrays, that player scores 4 and the other scores 1. The other three players have no choices, and each scores a number equal to their player number. But what the first two players are really “cooperating” on is exploiting the rest: if they both cooperate, players 3 through 5 score their player number minus three instead. In the Nash equilibrium, the first two betray each other, and the scores are as follows:

  Player 1: 2
  Player 2: 2
  Player 3: 3
  Player 4: 4
  Player 5: 5

In the contract equilibrium, they cooperate, and the scores are:

  Player 1: 3
  Player 2: 3
  Player 3: 0
  Player 4: 1
  Player 5: 2

By Impartiality, I can swap the conspirators to the end and get the same social welfare:

  Player 1: 0
  Player 2: 1
  Player 3: 2
  Player 4: 3
  Player 5: 3

Each entry is lower than the corresponding entry in the Nash equilibrium, so by weak Pareto optimality, the social welfare of the contract equilibrium is lower.
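
The whole argument can be checked mechanically, for any welfare function satisfying the two conditions:

    # Two Exploiters: the score lists from above, players 1 through 5.
    nash     = [2, 2, 3, 4, 5]  # the exploiters betray each other
    contract = [3, 3, 0, 1, 2]  # the exploiters cooperate

    # Impartiality: only the multiset of scores matters, so sort both.
    # Weak Pareto optimality: if one sorted list is lower at every
    # position, its social welfare is lower.
    print(all(c < n for c, n in zip(sorted(contract), sorted(nash))))  # True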

The non-aggression principle

Cooperating to exploit everyone doesn't sound nice. We can solve that with the other concept I mentioned in Capitalism Sat: the “non-aggression principle”, which says that, regardless of your self-interest or contracts, you must not take certain actions – specifically, you must not initiate violence against others. If we ban the first two players from conspiring, everything is fine again. Of course, it's not clear how these numbers correspond to real life: maybe players 1 and 2 are working together to violently extort from everybody, which is clearly aggression, but maybe they're rival business owners conspiring to raise prices (which is illegal in at least the United States, but is hard to describe as initiating violence). Once again, I don't have to decide, because I can prove that no possible non-aggression principle leads to desirable results in all cases. The key weakness is that the principle must ban the same actions for every player: if two players have the same set of choices, it can't regulate their behavior differently.

I'll start with a scenario that shows it without contracts (which is simpler). This scenario is a cross between the tragedy of the commons and the Volunteer's dilemma, so I'll call it the Tragedy of the Volunteers scenario. There are 102 players. Each player chooses whether to “participate” or “ignore”, and scores as follows: +1 if ze participated, +100 if any player participated, and -1 for each other player who participated. (Imagine it's a task that benefits everyone and is fun to do, but gets messed up if too many people try to help at the same time.) Since each player benefits from participating, the Nash equilibrium has them all participating, and every player scores zero. Adding a non-aggression principle cannot do anything except ban one of the options for all players... and forcing them all to choose “ignore” also results in a score of zero for every player. The best result is for exactly one player to participate and score 101 while everyone else scores 99. Contracts almost achieve this; they might end up with two people participating, for two 100s and a hundred 98s, which I'd generally say is worse, but which isn't necessarily worse by Impartiality and weak Pareto optimality alone.
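
A quick sketch to verify those numbers:

    # Tragedy of the Volunteers, 102 players. k = number of participants.
    # A participant scores +1, plus +100 (someone participated), minus 1
    # per *other* participant; an ignorer scores +100 (if anyone
    # participated) minus 1 per participant.
    N = 102
    for k in (0, 1, 2, N):
        bonus = 100 if k > 0 else 0
        participant = 1 + bonus - (k - 1)
        ignorer = bonus - k
        print(k, participant if k > 0 else None, ignorer if k < N else None)
    # k=0:   None, 0  -- participation banned: everyone scores 0
    # k=1:   101, 99  -- the best outcome
    # k=2:   100, 98  -- a plausible contract outcome
    # k=102: 0, None  -- the Nash equilibrium: everyone scores 0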

Now I'll break it with contracts, too. Just as in the Two Exploiters scenario, I have to make two different categories of players.[4] This scenario is based on a task that requires a skill and benefits everyone, but gets messed up if more than one person tries to do it at the same time.[5] There are 2 skilled players, who choose “participate” or “ignore” as above, and 2 unskilled players, who make no choices.[6] Everyone benefits from the work: if exactly one skilled player participates, everyone gets +3. The skilled players also enjoy participating, and get +4 each if they participate; the unskilled players also have a base score of 2.

With or without contracts, the skilled players act in their own (individual or collective) self-interest by both participating, resulting in a payoff of 4 for themselves and nothing extra for everyone else, which gives the distribution [4,4,2,2]. This compares unfavorably to the distribution when exactly one of them participates: [7,3,5,5] (reordered, [7,5,5,3]). And a non-aggression principle can't change the Nash equilibrium except by banning participation, leading to the even worse [0,0,2,2] (reordered, [2,2,0,0]).
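
Checking the arithmetic:

    # Two skilled players (indices 0 and 1) choose whether to
    # participate; two unskilled players (indices 2 and 3) start at 2.
    def outcome(first_participates, second_participates):
        scores = [0, 0, 2, 2]
        if first_participates != second_participates:
            scores = [s + 3 for s in scores]  # the work helps everyone
        if first_participates:
            scores[0] += 4                    # participating is fun
        if second_participates:
            scores[1] += 4
        return scores

    print(outcome(True, True))    # [4, 4, 2, 2]: the Nash and contract result
    print(outcome(True, False))   # [7, 3, 5, 5]: the better outcome
    print(outcome(False, False))  # [0, 0, 2, 2]: participation banned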

None of these clever extra rules can avoid the basic fact that, if we are willing to harm others for our own self-interest, we will probably end up doing so. Instead, we should act for the benefit of both ourselves and others. We should learn to think in cooperative ways, rather than competitive ways, and teach others to do so as well. By giving up just one point, one of the skilled players earns three points for everyone else. There is a world where almost anyone would do that, and I believe we can reach that world.

Rational altruism

But wait!

I would be doing my own craft a disservice if I didn't turn its glare upon the idea of altruism as well. I define a rational altruist as a person who acts to maximize a specific social welfare function, just as the players in a Nash equilibrium act to maximize their own score. I will show that every counterintuitive behavior of game theory also occurs between rational altruists who use different social welfare functions!

A simple example, with two players acting on the same society. Player A uses the minimum function; player B uses the average function. Each can either do nothing or take one action. Action A (available to player A) improves the worst-off member of society by 1 point, but harms everyone else by 2 points; action B (available to player B) harms the worst-off member by 2 points, but improves everyone else by 1 point. Player A takes action A to improve the minimum function by 1, and player B takes action B to improve the average function by (almost) 1. As a result, literally everyone in the society ends up worse off by one point, despite the fact that both actions were taken by purely rational altruists!
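
Here's the arithmetic, assuming a toy society of 100 members with one clear worst-off member (the particular numbers are only an illustration):

    # A toy society of 100 members; member 0 is the worst off.
    society = [1] + [10] * 99

    def act_A(s):  # +1 to the worst-off member, -2 to everyone else
        worst = s.index(min(s))
        return [h + 1 if i == worst else h - 2 for i, h in enumerate(s)]

    def act_B(s):  # -2 to the worst-off member, +1 to everyone else
        worst = s.index(min(s))
        return [h - 2 if i == worst else h + 1 for i, h in enumerate(s)]

    print(min(act_A(society)) - min(society))           # +1: A's function improves
    print(sum(act_B(society))/100 - sum(society)/100)   # +0.97: B's improves
    after_both = act_B(act_A(society))
    print(all(a == b - 1 for a, b in zip(after_both, society)))  # True:
    # with both actions taken, every single member is worse off by 1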

There are a few ways to convert an arbitrary game-theoretic game into a game between rational altruists. (The above example is a conversion of the Prisoner's Dilemma.) Unfortunately, the best ways to do it require a lot more math, so I won't show them here, and the simple ways use social welfare functions that aren't very realistic. The simplest way that I know is this: the altruist players use the welfare functions “the score of the worst-off person”, then “the score of the second-worst-off person”, then “the score of the third-worst-off person”, and so forth. We make the players of the original game correspond to those positions in the ordering: their score after the altruists' decisions is their score from the original game, plus a large constant offset that guarantees their scores never change the ordering. (For example, the first one is score+0, the second is score+1000, the third score+2000, and so forth, if no two scores possible in the original game differ by as much as 1000.)
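
Here's a sketch of that construction applied to the Prisoner's Dilemma payoffs from earlier; the 1000-point offsets work because no two possible scores in that game differ by anywhere near 1000:

    # Convert the Prisoner's Dilemma into a game between two altruists.
    # Society member i's happiness is original player i's score plus a
    # 1000*i offset, so who is "worst off" never changes.
    PD = {("C", "C"): (3, 3), ("C", "D"): (1, 4),
          ("D", "C"): (4, 1), ("D", "D"): (2, 2)}

    def happiness(profile):
        return [score + 1000 * i for i, score in enumerate(PD[profile])]

    def kth_worst(values, k):
        """Altruist k's social welfare function: the k-th-worst-off
        member's happiness (k = 0 is the minimum function)."""
        return sorted(values)[k]

    for profile in PD:
        h = happiness(profile)
        print(profile, kth_worst(h, 0), kth_worst(h, 1))
    # Altruist 0's welfare function always equals player 0's original
    # score (and likewise for altruist 1), so each altruist faces
    # exactly the original player's incentives -- including the
    # incentive to betray.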

And as if all the problems from game theory weren't bad enough, enforcing contracts between altruists is harder too. How many people would agree to intentionally harm society as a whole in order to penalize someone for breaking a contract?

Oh crap, what do we do?

I really have no idea!

But don't despair yet. There's some good news. First, although the ethics of “benefit yourself by harming others” are very interesting to talk about, I like to think of those situations as the rare ones. Most of our interactions rely on shared physical and social infrastructure, so an action that harms another person also harms you, and an action that helps another person also helps you. Obviously, stealing physical objects from total strangers is an exception, but if you steal from a friend, the theft will stress them out (or worse), and that stress will harm everyone in their social circle – and if you steal from someone in the same geographical area, it's likely that they were a friend of a friend. Abusers divide and damage their own communities, even if they don't understand how. Selfish motivations are aligned with each other more often than not.

Second, altruistic motivations are aligned even more often. Helping out the most disadvantaged members of society doesn't hurt the rest of society; it helps the rest of society. People who can't get medical treatment are more likely to spread diseases that can affect anyone. People are more productive if they have a social safety net than if they constantly have to worry about their physical safety, or where to get their next meal. And people who find that there's a community ready to help them are more likely to want to give back to that community once they have the ability to do so.

It is possible to make this world into a good world. It's just a matter of whether we will succeed.

– Eli

Footnotes:
  1. The idea of a contract equilibrium seems like such an obvious extension of game theory that I assume plenty of people have thought of it already, so there's probably already a phrase to describe it and I don't have to invent my own. However, I couldn't find any writing about game theory that describes it. (It resembles the concept the literature calls a “strong Nash equilibrium”, in which no coalition of players has a joint deviation that benefits all of its members – though that concept doesn't involve a contract enforcer.)
  2. Technically, these “numbers” don't have to be numbers; they can be any totally ordered set. And perhaps it's better if they are only an ordering; I find it much easier to say “I prefer X to Y” than to assign a number to how much I prefer it.
  3. Sum and average are equivalent if the number of people is fixed. I'm not considering variable numbers of people in this post; that leads to annoying questions like “If we use the average function, does killing unhappy people improve society?” and “If we use the sum function, does spawning 1000 unhappy babies who will soon die improve society?”
  4. Proof sketch: if all the players are identical, then Impartiality is irrelevant, and weak Pareto optimality reduces to the “strictly worse” condition.
  5. There are lots of examples of such tasks in real life. For instance, organizing and leading a group of workers: lots of people like leading, and one person taking charge can make the work more organized, but it's less efficient if two people try to do it at the same time.
  6. Alternatively, all the players have the same skills, but the first two enjoy doing the task and the second two don't enjoy it.