Betting on Beliefs

Proof Of Logic · Published in Solar Panel · 4 min read · Oct 24, 2016

[Epistemic status — speculative.]

It’s long been a trope of Bayesian rationalism that if you disagree with a friend, you should bet. This is a good community norm: if you bet, you’re more likely to remember that you were wrong; you’re forced to quantify the degree of your certainty; you’ll be more humble or more firm in the future based on how past bets have gone; betting is a tax on bullshit; and betting odds create a visible aggregation of group knowledge. However, I would like to set all of those things aside and ask: is it really rational to bet for the sake of the money?

The naive expected utility calculation says yes: if you assign probability p to X and your friend assigns probability q, then both of you will see positive expected value in a bet made at odds corresponding to a probability strictly between p and q. (Negotiating the exact odds is another matter.) Realistically, we don’t value money linearly, and furthermore there are practical reasons to avoid risk (since reducing variance in future funds makes planning easier). Still, taking these things into account yields something like Kelly betting, which always approves of putting down some money if the expected monetary value of the bet is positive. (The recommended stake might be less than a cent, however.)
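
To make the arithmetic concrete, here is a minimal sketch in Python. The probabilities, the implied-odds convention, and the function names are all invented for illustration; this is not taken from any betting library.

```python
# Naive expected value of a bet from each bettor's point of view, plus
# the Kelly-optimal stake. All numbers here are illustrative.

def expected_profit(my_p, implied_p, stake=1.0):
    """My subjective expected profit betting `stake` on X at odds implied
    by probability `implied_p`: a win pays stake * (1 - implied_p) / implied_p."""
    payout = stake * (1 - implied_p) / implied_p
    return my_p * payout - (1 - my_p) * stake

def kelly_fraction(my_p, implied_p):
    """Kelly criterion: stake the fraction f* = p - (1 - p) / b of your
    bankroll, where b is the net odds received on a win."""
    b = (1 - implied_p) / implied_p
    return my_p - (1 - my_p) / b

# I think P(X) = 0.7, you think P(X) = 0.5; we settle on odds implied
# by 0.6, strictly between our two estimates.
print(expected_profit(0.7, 0.6))  # ~0.167: positive EV for me, betting on X
print(expected_profit(0.5, 0.4))  # 0.25: you bet against X at the complementary
                                  # odds, and also see positive EV
print(kelly_fraction(0.7, 0.6))   # 0.25: Kelly stakes a quarter of my bankroll
```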

What makes me shy away from this way of reasoning is: betting is a zero-sum game. The number of won bets equals the number of lost bets at all times. The amount of money won equals the amount of money lost. In any bet between friends, both parties would honestly advise the other not to bet. Presumably, the argument for betting is that if you’re betting based on your beliefs, then you expect to win more than you lose on average. But this appears absurd: within a group of people betting, as much money is won as is lost. The average has to be zero. So, how can it be “rational” to expect to win more often than you lose?
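
If the bookkeeping claim sounds too quick, a toy simulation spells it out. The population size, stakes, and random pairing below are invented; the point is only that winnings and losses cancel exactly, no matter what anyone believes.

```python
import random

# Pair up bettors at random, settle bets at random, and confirm the
# group's total profit is identically zero. Beliefs, odds, and skill
# never enter into it.
random.seed(0)
profit = [0.0] * 10  # running profit for each of ten bettors

for _ in range(1000):
    a, b = random.sample(range(10), 2)
    stake = random.uniform(1.0, 10.0)
    winner, loser = (a, b) if random.random() < 0.5 else (b, a)
    profit[winner] += stake
    profit[loser] -= stake

print(sum(profit))  # 0.0 (up to floating-point rounding)
```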

Maybe you can beat the odds by only betting when you have good reason to think you have better information than the person you’re betting against. Again, though, even that strategy can’t pay out on average, at least not against people who are smart enough to think of it too (and it’s not that hard to think of). You have to think you have better-than-average reason to expect you’re on the right side of the bet. It seems to me that a community of reasonably rational agents just won’t bet with each other, if they’re only after money. We all know that to profit from bets on average, we each need higher standards for when to bet than the people we bet against. So the only stable outcome is for everyone’s standards to be so high that no one ever bets!

My intuition is based on Aumann’s agreement theorem, which states that Bayesian agents with a common prior (but differing evidence) cannot agree to disagree: if they try to agree on a bet, they will update on each other’s willingness to take it until they converge to identical beliefs. You might initially give 3:1 odds on a project being late, but a co-worker’s eagerness to take that bet lowers your confidence to 2:1. The co-worker was initially interested at 2:1 as well, but when you start to say “sure” at the adjusted odds, your willingness changes the co-worker’s mind in turn; you’ve converged on a mutual 2:1 estimate. In short, Bayesians who try to bet will move their beliefs toward each other until they no longer have a disagreement to bet on.
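
That back-and-forth can be simulated directly. The sketch below is a toy version of the agreement dynamic (strictly speaking, the iterated-announcement convergence is the Geanakoplos and Polemarchakis result built on Aumann’s theorem); the worlds, partitions, and disputed event are all invented for illustration.

```python
from fractions import Fraction

# Six equally likely worlds under a common prior; E is the disputed event.
prior = {w: Fraction(1, 6) for w in range(6)}
E = {0, 1, 2}

# Each agent privately learns which cell of its partition contains the
# true world. The partitions are chosen so initial opinions differ.
partition_a = [{0, 1}, {2, 3}, {4, 5}]
partition_b = [{0, 4}, {1, 2, 3}, {5}]

def posterior(knowledge):
    """P(E | knowledge) under the common prior."""
    return (sum(prior[w] for w in knowledge & E)
            / sum(prior[w] for w in knowledge))

def announcements(partition, public):
    """The posterior an agent would announce in each still-possible world,
    given its private cell intersected with public knowledge."""
    return {w: posterior(cell & public)
            for cell in partition for w in cell if w in public}

true_world = 2
public = set(prior)  # common knowledge so far: any world is possible

for step in range(10):
    say_a = announcements(partition_a, public)
    say_b = announcements(partition_b, public)
    pa, pb = say_a[true_world], say_b[true_world]
    print(f"step {step}: A announces {pa}, B announces {pb}")
    if pa == pb:
        break  # no disagreement left to bet on
    # Each announcement reveals the set of worlds consistent with it,
    # and that revelation becomes common knowledge.
    public &= {w for w, p in say_a.items() if p == pa}
    public &= {w for w, p in say_b.items() if p == pb}
```

Run as written, A opens at 1/2 and B at 2/3; after one exchange both announce 1/2, and the bet evaporates.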

How well this applies to humans was much discussed in the disagreement debate on Overcoming Bias. Personally, I would more often update toward the other person’s beliefs than take the bet.

“Wait”, the betting advocate says — “that argument assumes everyone is rational. Really, though, we know there are many people in the mix who will take bets when they don’t have good reason to think they’ve got better information than you.” Sure, that’s true. But if you’re making bets, how confident are you that you’re not one of them? We know we’re all biased. Doesn’t it seem safer to have an anti-betting policy? And anyway, that scenario still doesn’t appear to allow bets between reasonable people; bets can only happen when someone participates unreasonably. So, it would seem odd to advise people that betting is rational.

“Ah, no, that’s not quite right. The existence of unreasonable bettors casts reasonable doubt on which type of bettor I am. This means reasonable people will occasionally bet with me, because they happen to believe I’m a fool, even though that’s not the case.” Really?

It seems to me that for this argument to go through, you’d need to have privileged information that’s so unlikely, people are more likely to think you’re crazy than suspect the truth. Suppose you have such information. People are willing to bet with you now, because betting with crazy people pays off. But should you make that bet? It’s not just about knowing you’re not clinically insane. Other people see that you’re making offers for extreme bets. We’re assuming they’ve weighed the possibility that you have privileged information against the possibility that you are somehow mistaken, and come to a reasonable conclusion that you’re mistaken. This suggests that mistakes of that order of magnitude are somewhere around as common as the kind of one-in-a-million information you think you have — or perhaps much more common. How sure can you be that you’re not making one of those mistakes? Wouldn’t you do better, on average, if you had a general policy of not betting in this kind of circumstance?
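
The arithmetic behind that worry is one line of Bayes. The base rates below are invented placeholders; only the structure matters.

```python
# If genuinely privileged one-in-a-million information and mistakes
# extreme enough to mimic it are about equally common, then conditional
# on finding yourself with the extreme belief, you're at a coin flip.
p_real_info = 1e-6    # assumed base rate of truly privileged information
p_big_mistake = 1e-6  # assumed base rate of equally extreme mistakes

p_actually_right = p_real_info / (p_real_info + p_big_mistake)
print(p_actually_right)  # 0.5, and lower if big mistakes are more common
```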

I still think a habit of making bets with friends is a good one for all the reasons I mentioned before. However, I find it really hard to envision a scenario where you’re justified in taking a bet purely for the money.
