Chaos and Consequentialism

Proof Of Logic · Published in Solar Panel · Apr 19, 2017


There is an interaction between a culture’s common-sense understanding of a subject and science. For example, although folk psychology is partly a biologically-given human ability to reason about the mental state of others, the way people reason about the mental states of others has been greatly influenced in recent times by Freudian ideas and behaviorism. Unfortunately, the popular version of scientific ideas is often quite skewed or out-of-date.

I think there are two modern ideas where the public’s intuitive understanding is especially out-of-date, and I think there is a lot of benefit to be gained by improving that understanding. These ideas are not new at all, but an understanding has still not propagated to the public very well. The two ideas are randomness and responsibility. Chaos and consequentialism. Probability and expected utility.

Randomness. Most people have an intuitive understanding of randomness which looks something like the D&D “chaotic” alignment. Something looks random to the degree it is unexpected and surprising. The gambler’s fallacy can follow from thinking as if events are actively trying to look well-mixed. States of maximal chaos are imagined to hold unexpected ordered objects (think of Douglas Adams’ infinite improbability drive), when in fact maximum-entropy states tend to be rather boring.
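The claim that streaky-looking sequences are no less probable than well-mixed-looking ones can be checked directly. Here is a minimal Python sketch (the sequence lengths and the run-length threshold are arbitrary choices for illustration):

```python
import random

random.seed(0)

# Any specific sequence of n fair-coin flips has probability (1/2)**n,
# so the "streaky" HHHHH is exactly as likely as the well-mixed HTHTH.
n = 5
print((1 / 2) ** n)  # 0.03125, the same for both sequences

def longest_run(flips):
    """Length of the longest run of identical outcomes in a sequence."""
    best = cur = 1
    for a, b in zip(flips, flips[1:]):
        cur = cur + 1 if a == b else 1
        best = max(best, cur)
    return best

# Empirically, long runs show up in random sequences far more often
# than intuition suggests.
trials = 10_000
hits = sum(
    longest_run([random.random() < 0.5 for _ in range(20)]) >= 5
    for _ in range(trials)
)
print(hits / trials)  # typically around 0.4: a run of 5+ in 20 flips is common
```

The second experiment is the one that surprises people: a genuinely fair coin produces long "unfair-looking" streaks routinely, which is exactly why surprisingness is a bad test for randomness.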

Responsibility. People have some strange intuitions about how responsibility and blame should work. I find that people act as if blame is conserved: if you can attribute fault to one thing, there's a feeling of release which makes you much less likely to look for other sources of fault. If blame does get spread out among many things or people, it seems to "stretch thin", so that less rests on the shoulders of each point of blame. This does not make very much sense: if a fault has many causes, each needs to be addressed. This view implies, in particular, that you can't get out of your share of responsibility just by pointing out someone else's. In the aspiring rationalist community, this is called heroic responsibility. It is about sane reasoning about the consequences of your actions. There could be a moral-duty aspect, if you want to speak of such things, but it's also just a brute fact of reality: if you act in ways which tend to improve the chances of getting what you want, you'll tend to get what you want more often; the same cannot be said for putting blame elsewhere. I've also heard this idea referred to as an "internal locus of control".

You can’t really impose this kind of responsibility on someone else. It’s compatible with constructive criticism, but not with blame. The kind of responsibility I’m talking about is a favor to yourself, not to other people. (I mean, it may also be a favor to other people, if you care about those people and decide to help them. But then it’s because you decided you care.)

Now, there isn't a perfect consensus on these issues. For probability, there's the debate between Bayesians and frequentists. I think the Bayesian perspective is superior: it points to a specific understanding of randomness as a subjective phenomenon (so that randomness and uncertainty are really the same thing). I will say things slanted from that perspective, but I think there's something to be gained just from the uncontroversial laws of probability theory, applied to the kinds of events everyone would agree we can apply them to.

Similarly, there are many versions of, and alternatives to, consequentialism. There’s the debate between causal decision theory and evidential decision theory, and there’s the question of deontology and virtue ethics. Again, although my remarks will be a little biased toward consequentialist thinking, I think what I’m pointing at is mostly common ground — though it isn’t codified by an uncontroversial set of mathematical laws the way probability theory is. The perspective I’m putting forward here can be understood through the lens of expected utility theory, but I suspect it makes about as much sense in alternative frameworks as well.

Now, I can’t just say “do probability correctly” or “decide what you want and go about trying to get it in a sane manner” and call it good. Both of these are complicated skills which take a significant amount of development. However, I think something useful I can do is try to make a list of the important things you can try to get right.

Consequentialism

  1. Notice when you’re trying to solve a problem by putting some duty/obligation on someone else. Is that solution going to work? It might, if their goals are sufficiently in line with your goals and they take the suggestion well. But often, I think some part of our brains fools us into thinking that blaming other people for problems is an actual solution to those problems.
  2. Always consider what you could be doing differently to make for better outcomes. It is sometimes the case that a car crash is “really the other person’s fault”: there is nothing you would realistically want to change about your driving habits to make that sort of accident less likely. However, you never want to settle this simply by checking whether the other person made some big glaring mistake which they should avoid in the future. Don’t obsess over what you could have done differently if it turns out there was nothing, but don’t reason as if the degree to which you could have done something differently is inversely related to the degree to which they could have.
  3. Don’t assuage your regrets by setting them aside or using unrealistic thinking to reassure yourself. There is a good and healthy kind of obsessing over regrets, where you figure out what you realistically could do differently in similar situations in the future to make things go better. If you can do this while avoiding the unhealthy kind of obsessing over regrets, you turn them into a source of strength. Advice on how to do that.
  4. Think in terms of what could have happened, not just what did happen. There’s a fallacy in gaming called results-oriented thinking, in which you put too much weight on your experience (positive or negative) when you know things could have gone differently. You might end up abandoning a good strategy because of a chance bad event, or putting too much faith in a bad strategy which you could easily see you just got lucky with. Getting past this requires an attitude where you regret succeeding for the wrong reasons and pat yourself on the back for doing the right thing even when it ends up backfiring by chance. This is dangerous, because you can blind yourself to the feedback which you’re getting; it has to be combined with honest reassessment of your models.
  5. Have a model of what you want, have a model of the situation, and try to take actions which lead to what you want. (This doesn’t imply selfishness, as you may want to help other people. It also doesn’t imply rejection of authority or advice, as you may take those as strong evidence. However, it does imply that those considerations are ultimately subservient to what you think is right.) Having a model (even a mediocre model!) of what you want de-biases in several respects. The sunk-cost fallacy becomes harder to commit. The halo effect is reduced, as you are forced to evaluate the overall effect, including all pros and cons (or rather, all pros and cons which fit in the scope of your model). There are several other benefits which I’ll have to try to describe in future posts. However, in order to have a very good model, you’ll also have to master the art of uncertainty.
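Points 4 and 5 above can be sketched in code: judge a strategy by its expected value over what could have happened, not by a single result. The strategies and payoffs below are hypothetical toy numbers, chosen only to illustrate the calculation:

```python
import random

random.seed(1)

def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs; probabilities sum to 1."""
    return sum(p * payoff for p, payoff in outcomes)

def play(outcomes):
    """Sample a single payoff from the outcome distribution."""
    r = random.random()
    acc = 0.0
    for p, payoff in outcomes:
        acc += p
        if r < acc:
            return payoff
    return outcomes[-1][1]

# Hypothetical strategies: A wins 1 unit 70% of the time;
# B wins 3 units 20% of the time and loses 1 unit otherwise.
strategy_a = [(0.7, 1), (0.3, -1)]   # EV = +0.4
strategy_b = [(0.2, 3), (0.8, -1)]   # EV = -0.2

print(round(expected_value(strategy_a), 2))  # 0.4
print(round(expected_value(strategy_b), 2))  # -0.2

# A single lucky trial of B can still beat a single trial of A,
# which is exactly what makes results-oriented thinking misleading.
```

A gambler who plays strategy B once and wins 3 units has succeeded for the wrong reasons; the expected-value view is what catches this.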

Uncertainty

  1. Randomness is not a property of an individual event. An event can be judged as low-probability, but a random (high-entropy) process is one in which lots of events have equal (and therefore low) probability. This is why the gambler’s fallacy is a fallacy, and why we see lots of clustering in random sequences: a long run of one side of a coin is as probable as an alternating sequence of heads and tails of the same length, even though the second looks better-mixed.
  2. Estimating probabilities by counting arguments. Combinatorics (aka “the art of counting”) gives a critical tool for thinking about the odds of different events. Even if you never use the explicit calculations again, learning how possibilities combine will help you think about probabilities clearly.
  3. Thinking in information theory. Again, even if you don’t ever use the math, understanding the concepts can give a better perspective on communication and reasoning.
  4. Accounting for base rates when forming estimates.
  5. Adjusting for selection bias/availability bias.
  6. Requiring good hypotheses to stick their necks out with predictions. Bayesians may codify this in terms of Bayes’ Law while frequentists do it with null-hypothesis testing and other statistical measures, but both agree that this is important. A hypothesis which can never be wrong is about the same as one which can never be right. (A frequentist would think of this principle as “how to distinguish patterns from randomness”, while a Bayesian would think of both “pattern” and “randomness” as simply different distributions of uncertainty; this leads the frequentist to privilege the “null” hypothesis as a special default, where the Bayesian treats it as just another hypothesis, handled like any other.)
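The base-rate point above (item 4) can be made concrete with Bayes' law. The numbers below are hypothetical, chosen to make the effect vivid: a rare condition (1% base rate) and a screening test that sounds accurate (99% true-positive rate, 5% false-positive rate):

```python
def posterior(prior, p_true_pos, p_false_pos):
    """Bayes' law: P(condition | positive test result)."""
    p_pos = p_true_pos * prior + p_false_pos * (1 - prior)
    return p_true_pos * prior / p_pos

# Despite the accurate-sounding test, most positives are false alarms,
# because the 5% false-positive rate applies to the much larger
# condition-free population.
print(round(posterior(0.01, 0.99, 0.05), 3))  # 0.167
```

Neglecting the 1% prior and reading the positive test as "99% likely" is precisely the base-rate error; the posterior is about one in six.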

Awareness of the general shape of each of these is (I think) quite helpful. Of course, turning explicit awareness into a deeper intuition which shapes your reflexes regarding randomness and responsibility is more difficult. It requires noticing what intuitions are currently shaping your thinking, and stepping in to re-shape those intuitions by thinking in new ways until the new ways become habit.

I don’t think any of this is too surprising to readers here, but I think it is worth something to arrange it in this way. The two categories correspond to instrumental rationality and epistemic rationality, respectively. By no means have I listed all the important points (or even the most important points) which go under those two headings, but I encourage you to try.

(Thanks to Philip Parker for some conversation about this post and ideas for points.)
