During our last session on decision theory, we were discussing the St. Petersburg paradox.
We agreed, at least in part, that there is a paradox even if there are no infinite utilities. I will briefly argue that this position does not survive a simple mathematical analysis.
On the assumption that there are no infinite utilities, the St. Petersburg game is perfectly acceptable:
I would bet 2 utilities to win 2 utilities if the coin lands heads, and 4 utilities if it lands heads again on the second flip. The game seems completely fair, and so are the following games, where:
The first column represents the maximum prize of the game: 2 utilities if the coin is flipped only once, 4 if it is flipped at most twice, and so on; in general 2^n, where n is the maximum number of times the coin can be flipped.
The second column represents the probability of winning that maximum prize.
The third column represents the expected utility of the game (how many utilities I should pay to play it).
Prize | Probability (%) | EU | Result (€) |
0 | 50.0000000000 | 0 | -20 |
2 | 50.0000000000 | 1 | -18 |
4 | 25.0000000000 | 2 | -16 |
8 | 12.5000000000 | 3 | -12 |
16 | 6.2500000000 | 4 | -4 |
32 | 3.1250000000 | 5 | 12 |
64 | 1.5625000000 | 6 | 44 |
128 | 0.7812500000 | 7 | 108 |
256 | 0.3906250000 | 8 | 236 |
512 | 0.1953125000 | 9 | 492 |
1024 | 0.0976562500 | 10 | 1004 |
2048 | 0.0488281250 | 11 | 2028 |
4096 | 0.0244140625 | 12 | 4076 |
8192 | 0.0122070313 | 13 | 8172 |
16384 | 0.0061035156 | 14 | 16364 |
32768 | 0.0030517578 | 15 | 32748 |
65536 | 0.0015258789 | 16 | 65516 |
131072 | 0.0007629395 | 17 | 131052 |
262144 | 0.0003814697 | 18 | 262124 |
524288 | 0.0001907349 | 19 | 524268 |
1048576 | 0.0000953674 | 20 | 1048556 |
For the example, assume that I have a linear utility function for money between 0 and 1 million euros (hard to believe, but assume that it is the case) and that the utility of 100M€ equals the utility of 1M€ for me: the function saturates at 1M€. (If you are not convinced: for 30€ you can play the 30-flip version and win up to a billion euros, since 2^30 ≈ 10^9, and I think that is enough to definitively saturate the utility function of all of us.)
The fourth column shows the money I would win or lose depending on the result of the game, assuming I paid 20€ to play.
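A few lines of code reproduce the table, up to rounding in the last decimal (a sketch; it assumes the standard truncated game, in which the first Heads on flip k pays 2^k and a game of at most n flips has expected utility n):

```python
# Reproduce the table above. Each row n is a separate game, truncated at
# n flips: maximum prize 2**n, won with probability 1 / 2**n; expected
# utility n (a sum of n terms, each (1/2**k) * 2**k = 1); net result
# prize - 20 for a 20-euro entry fee.
ENTRY_FEE = 20

print("Prize | Probability (%) | EU | Result (EUR)")
print(f"0 | {50.0:.10f} | 0 | {-ENTRY_FEE}")  # tails on the first flip
for n in range(1, 21):
    prize = 2 ** n
    prob = 100 / 2 ** n  # probability of winning the maximum prize
    print(f"{prize} | {prob:.10f} | {n} | {prize - ENTRY_FEE}")
```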
If you are having doubts about whether to play the game, it is because the utility of money is not linear for you, and therefore U(1M€) is not equal to 50,000 × U(20€).
In this case you would pay less money to play the game, but this is completely compatible with decision theory. Think of something whose utility is linear in this range and you will accept the game (psychological reasons to avoid betting are beside the point), as you clearly see when the game is proposed to win just 4€.
The paradox is expressed in terms of utilities, so we have to find something whose utility is linear between 0 and 1M.
The real problem arises only if we consider infinite utilities (no matter whether they are linear or not). Imagine that more money always has a higher utility, so the utility function of money is strictly monotonically increasing on any interval. Then there is a problem, because in the limit the price is infinite...
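In symbols, with a linear (and therefore unbounded) utility function, the expected utility of the unrestricted game diverges:

```latex
% Expected utility of the unbounded St. Petersburg game with U(x) = x:
\mathrm{EU} = \sum_{n=1}^{\infty} \frac{1}{2^{n}}\,U(2^{n})
            = \sum_{n=1}^{\infty} \frac{2^{n}}{2^{n}}
            = \sum_{n=1}^{\infty} 1 = \infty
```

And linearity is not essential: for any unbounded utility function one can choose prizes x_n with U(x_n) ≥ 2^n, and the sum diverges just the same.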
The expected utility of any lottery involving an infinite prize is infinite, no matter what the probability is. These two lotteries have the same cost (infinite):
Lottery 1: P(0) = 1.00% and P(infinite) = 99.00%
Lottery 2: P(0) = 99.99% and P(infinite) = 0.01%
You should prefer the second lottery to everything you have, and that is obviously unacceptable. The solution: there are no infinite utilities.
The problem with lower probabilities is just that we are not able to find any utility that satisfies such a lottery, and therefore it is difficult to find an interpretation of paying 2,500 utilities (the expected utility) to play this lottery:
P(0) = 99.99% and P(25,000,000 utilities) = 0.01%
But that says absolutely nothing against decision theory.
The St. Petersburg game is only problematic if we consider infinite utilities.
8 comments:
Sebas,
thanks for laying out this argument. After our last session, I also thought some more about the finite SP paradox, and looked at a table similar to the one you produced (from the SEP article, in my case). I am not as sure as I was that there is a serious problem here in the finite case. But still, if you pay 20€ to play the game, you will lose money if the coin does not land Heads at least 5 times in a row. And if a large - but not very large - number of people play, e.g. 50, then most of the 50 will lose money, and of those that do not, most will win only 12€ or 44€ . . . in a group of 50 players, one would not expect to see *anyone* earning more than 4000 or 8000€. Only if millions of people played would the net utility of the players be likely to be zero or greater, I think (for the game costing 20€). Even then, most of those millions would be losing money . . .
The question is, is it rational or irrational to count the super-low probability events (that have the high payoffs) when deciding whether to play or not?
I think I'd play for 15 or 20€, but that's because low numbers of euros are not linear for me (rationally or not, I can lose 20€ and shrug it off as a nothing-loss) and because the game would be fun to play (even if I lost on the first toss). But I wouldn't play a game costing 1,000€ to enter, even though I could win hundreds of billions.
The thing is, if you think it is ideally rational to play St. Petersburg for 20$, then you need to find it ideally rational also after playing once, or ten times. As long as the amount of money you lose doesn't really mess with your risk aversion, that is, your utility function for money.
I have made a little program that plays St. Petersburg games any number of times and calculates the amount of money that you win or lose if you pay 20$ for each game. Here are the results. Bear in mind that these are actual results of random tosses (well, sort of: rather, they are results of a pseudo-random generator), so other runs would give other results than these:
If I get you to play ten times with me (remember, 20$ each game), you lose 188$.
Another 10 games gave me 190$. Another ten, 160$. I've played many other sets of 10 games and the least amount I've won is 56$.
If I get you to play 100 games, I'll win amounts such as: 1732$, 1598$, 1088$, 1664$.
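A minimal sketch of this kind of simulation (not the original program; it assumes the standard payoff of 2^n for the first Heads on flip n):

```python
import random

def st_petersburg_payoff():
    """Flip a fair coin until Heads; the first Heads on flip n pays 2**n."""
    n = 1
    while random.random() < 0.5:  # Tails: the prize doubles, flip again
        n += 1
    return 2 ** n

def net_result(games, entry_fee=20):
    """Player's net winnings after `games` rounds at `entry_fee` per round."""
    return sum(st_petersburg_payoff() - entry_fee for _ in range(games))

# Small batches are usually net losses for the player, as in the results above.
for _ in range(4):
    print(net_result(10))
```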
So, Carl, Sebas, anytime :)
Or, Manolo, you might lose a million euros to me in one of those games!
However, as I mentioned, euros are not linearly related to utility for me, and I am certainly imperfectly rational in the sense that I would find fun & diversion in playing the game *once*, but probably not more than that if I lost that one . . .
Or maybe I would not be so irrational to decline subsequent games . . . that's what we're trying to figure out! :)
This is interesting. You said:
"euros are not linearly related to utility for me, and I am certainly imperfectly rational in the sense that I would find fun & diversion in playing the game *once*, but probably not more than that if I lost that one."
I don't think that has anything to do with your utility function for money, but, rather, with the fact that the very act of playing the game has some utility for you. This is the kind of utility-messing-with-lotteries that brought us the Allais paradox in the first place.
After discussing this issue with Sebas and Óscar Cabaco, I've been trying to think how we could get rid of these problems of the non-ideality of utilities in discussing St. Petersburg. What do you guys think about this:
* You are not playing for money but for plastic chips, and you know that another guy besides you is also playing St. P against the same "bank". You have the right to play any number of games against the bank between 1 and, say, 10, and you start with, say, 300 chips.
You very much want to beat that other guy (maybe something really crucial depends on that), and you have no way to know, until the very end, how he is doing with his chips and his bids.
This set-up has three desirable properties:
1- The plastic-chip bit gets rid of non-ideality in utility functions: given that the one with more chips wins, n+1 chips is better than n chips for any n.
2- In the absence of any information about the risk aversion of the other guy, I think you are compelled to make whatever choices you think are more rational, period.
3- No Allais-like interference from our desire to play the game because it's fun, or from the bird-in-the-hand ("pájaro en mano") effect. There are only two possible outcomes: you win or you lose, and it is very important that you win.
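To get a feel for the set-up, here is a toy Monte Carlo sketch (entirely my own made-up model: each player simply fixes in advance how many 20-chip St. P rounds to play, between 1 and 10):

```python
import random

def payoff():
    """One St. Petersburg round: the first Heads on flip n pays 2**n chips."""
    n = 1
    while random.random() < 0.5:
        n += 1
    return 2 ** n

def final_chips(rounds, start=300, fee=20):
    """Chips left after playing `rounds` rounds at `fee` chips each."""
    return start + sum(payoff() - fee for _ in range(rounds))

def win_rate(my_rounds, his_rounds, trials=100_000):
    """Estimate how often a `my_rounds` player ends with more chips."""
    wins = sum(final_chips(my_rounds) > final_chips(his_rounds)
               for _ in range(trials))
    return wins / trials

print(win_rate(1, 10))  # cautious player vs. a 10-round opponent
print(win_rate(10, 1))
```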
The game is interesting, but I cannot see how it is related to the paradox.
If I play the game, my goal is not to maximize any utility but to get more than my opponent. To win, I have to consider how much money he can make, so it can be considered a problem of decision under ignorance.
But, taking into account that the expected value of the game is 300, it is rational to assume that this is what my hypothetical opponent will get, and to try to get more than this. (Given that he has to play at least once, I would assume that he wins a single round, and I would try to get 302€; maybe this is not rational, but on a quick analysis this would be my strategy.)
If I consider that my opponent finishes the game with 300€, any euro over this quantity has no utility at all for me; that's why I do not see any relation to the St. Petersburg paradox.
Regarding Carl's point about not being sure of accepting lotteries with a very low probability of winning, I think that the reason for that is a misunderstanding of utilities (you cannot imagine, in terms of utilities, any real thing that would justify accepting the lottery).
Precisely, utilities explain why any of us will think differently about the following lotteries:
Pay 1 cent: P(0) = 99.995% and P(200€) = 0.005%
Pay 1 cent: P(0) = 90% and P(0.10€) = 10%
Pay 1 cent: P(0) = 50% and P(2 cents) = 50%
Pay 10€: P(0) = 99.995% and P(200,000€) = 0.005%
Pay 10€: P(0) = 90% and P(100€) = 10%
Pay 10€: P(0) = 50% and P(20€) = 50%
Pay 1,000€: P(0) = 99.995% and P(20,000,000€) = 0.005%
Pay 1,000€: P(0) = 90% and P(10,000€) = 10%
Pay 1,000€: P(0) = 50% and P(2,000€) = 50%
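Note that every lottery in this list is actuarially fair: its expected prize equals its price, and only the stakes and probabilities change. A quick check (the tuples just hard-code the list above):

```python
# (price in euros, probability of the prize, prize in euros)
lotteries = [
    (0.01, 0.00005, 200),
    (0.01, 0.10, 0.10),
    (0.01, 0.50, 0.02),
    (10, 0.00005, 200_000),
    (10, 0.10, 100),
    (10, 0.50, 20),
    (1_000, 0.00005, 20_000_000),
    (1_000, 0.10, 10_000),
    (1_000, 0.50, 2_000),
]
for price, p, prize in lotteries:
    print(price, p * prize)  # expected prize equals price, row by row
```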
Another example:
Imagine that you are playing poker and you have bad cards in the last betting round of a hand. There are 64€ on the table and your opponent bets 8€; you believe that your chances of winning are just 10%. Decision theory tells you that you should see (call) the bet: you bet 8€ to win 80€ (the 64€ pot plus the two 8€ bets), and 10% of 80€ is exactly the 8€ you risk. But following this strategy you will be out of the poker championship very quickly, and I think that this is what is behind Manolo's intuition to refuse to play.
The reason for assuming that not seeing the bet is rational, and nevertheless against decision theory, is that we are taking 1 coin = 1 utility. But this is obviously false. Other factors are in play, and they are part of the utility: how many coins you have (maybe you do not mind losing 8 coins), how many coins your opponent has (maybe you can risk eliminating this adversary), and so on. The objective of the game is to get the coins of the other players (a bounded utility), and the number of strategies you can use depends on the number of coins you have, the number of coins your opponent has, etc. Furthermore, once you lose all your coins you are out.
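In money terms the call is exactly break-even, as a one-line check of the pot odds shows (the numbers are the ones from the example):

```python
# Pot-odds check for the poker example above (amounts in euros).
pot = 64      # money already on the table
bet = 8       # opponent's bet, which we must match to call
p_win = 0.10  # our estimated chance of winning the hand

# If we call and win, we collect the pot, the opponent's bet, and our own bet.
ev_call = p_win * (pot + bet + bet) - bet  # 0.10 * 80 - 8 = 0.0
print(ev_call)
```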
A last remark:
Utilities are ideal entities; ex hypothesi, they lack any variance.
This is getting a bit off-topic, but anyway:
The expected utility of the game is not 300 euros (there are no euros involved, just plastic chips) but whatever the utility of winning the game is: going to Heaven, maybe, or whatever.
Both you and your opponent start off with 300 chips, and you have to stretch those into as many chips as possible to make sure that you end up having more chips than him. You can play St. Petersburg, and each round costs those 20 chips we were talking about in the thread. So, are you ready to pay those 20 chips? For how many rounds? Given that you want to end up with as many chips as possible, I would say you are obliged to do whatever is rational to maximize your chips.
But then again, maybe not. I'm starting to lose my grip :)
If in Manolo's chips game we are talking about playing finite St. P games, then, while the expected payoff is 20 chips, it is hard to see why one should play at all: one does not increase one's expected chips, but one does take on a risk of having fewer. It surely is rational to just keep the 300 you start with and hope the other guy plays the game. :-;
Now let the St. P game be *unending*. Your expected payoff is now infinite chips, but you still need to pay only 20 chips to enter. Should you play, expecting to beat your opponent? With 300 chips you could play 15 times! Somehow, I feel it is still not obviously right to play rather than stay "pat" (i.e., not bet, in American gambling parlance).
Carl
Yes, I was not clear about that: imagine the price to pay is below the expected utility of the finite St. P. in question: so, 20 chips for a 30-round St. P. Can't beat that deal :)
I still think you should not accept the deal, if you want to win. But decision theory compels us to accept.
Anyway, after this conversation I realise it is notoriously difficult to motivate a good finite St. P. paradox. Thanks Carl and Sebas!