Communication Protocol

Proof Of Logic · Solar Panel · Sep 28, 2016

Information cascades and availability cascades are mechanisms by which mass belief shifts (or apparent belief shifts) can occur in a winner-takes-all manner. The subject is complex, and I will not attempt to summarize it here (although I’d like to discuss it further in later posts). The very basic idea is that a group of people can start repeating each other’s beliefs, take each other’s beliefs as further evidence, repeat the belief more strongly, and quickly converge to a strong self-reinforcing group belief. How can we avoid this problem?

Belief Propagation

Fortunately for us, communication protocols which minimize the risk of harmful information cascades have been heavily studied in Bayesian networks. A Bayesian network is a set of variables connected by conditional probability distributions. These are used for statistical inference: you add data to the network, and ask the network to tell you new probability distributions for all the variables. In order to make inference efficient, computer scientists wanted the variables to “talk to each other”: rather than using all the evidence that’s been added to the network, each variable can only individually “hear” the “neighbor” variables in the network. A variable listens to all its neighbors, forms a new belief about its own probability distribution, and then tells that new belief to its neighbors. We let the whole network talk to itself until all the information has propagated around the network.
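
As a concrete (if toy) picture of what such a network computes, here is a minimal sketch in Python, using no particular library and entirely made-up numbers: two variables linked by a conditional distribution, where observing one shifts our distribution over the other.

    # A toy two-variable Bayesian network: Rain -> WetGrass.
    # Inference here is brute-force enumeration, just to show what the network
    # is supposed to compute before any message passing enters the picture.

    p_rain = 0.2                                  # prior P(Rain)
    p_wet_given_rain = {True: 0.9, False: 0.1}    # conditional P(WetGrass | Rain)

    def posterior_rain_given_wet():
        """P(Rain | WetGrass observed) by enumerating both joint outcomes."""
        joint_rain = p_rain * p_wet_given_rain[True]             # rain, wet grass
        joint_no_rain = (1 - p_rain) * p_wet_given_rain[False]   # no rain, wet grass
        return joint_rain / (joint_rain + joint_no_rain)

    print(posterior_rain_given_wet())  # ~0.69: seeing wet grass raises belief in rain

Message passing is just a way of arriving at this kind of answer without every variable having to see all of the evidence directly.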

This algorithm, if poorly designed, would lead to the same problem as we saw in information cascades. If X is correlated with Y, and X starts out slightly leaning in one direction, Y could hear this and slightly update itself in the same direction. X hears Y’s new belief, and updates more in the same direction. Y hears X’s belief has gone further, and nudges itself a bit further also. In the end, X and Y could become very confident based on only a little evidence. This problem is called double-counting of evidence.
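
To see the feedback loop numerically, here is a small Python sketch with made-up starting beliefs: each side keeps adding the other’s latest belief to its own (in log-odds, as if it were fresh independent evidence), and a slight initial lean snowballs into near-certainty.

    import math

    def prob(log_odds):
        """Convert log-odds back into a probability."""
        return 1 / (1 + math.exp(-log_odds))

    # X starts with a slight lean (55%); Y starts undecided (50%).
    x = math.log(0.55 / 0.45)
    y = 0.0

    for _ in range(10):
        # Naive protocol: each side treats the other's *entire* current belief
        # as new independent evidence and adds it to its own log-odds.
        y += x
        x += y

    print(prob(x), prob(y))  # both end up essentially certain, on almost no real evidence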

It turns out there are many solutions to this problem. The oldest and simplest to understand is called belief propagation. In belief propagation, we still make the simplifying assumption that our friends are independent sources of information. However, we make sure not to create direct feedback loops with our friends, by removing the influence they’ve had on our belief when talking to them. X tells Y its current belief, but divides out any influence from Y. Similarly, Y tells X its current belief, dividing out the influence from X. (When I say “dividing out”, I am literally referring to dividing belief functions by each other; but since we are not going into mathematical details here, it’s best to think of it more loosely.)
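
A rough sketch of the fix, again in Python with invented numbers (and glossing over the conditional distribution that real belief propagation would also fold into each message): each node reports its current belief minus whatever the recipient previously told it, which is subtraction in log-odds, i.e. division of the belief functions.

    import math

    def prob(log_odds):
        return 1 / (1 + math.exp(-log_odds))

    # Each node's own (local) evidence, in log-odds; the numbers are made up.
    local = {"X": math.log(0.55 / 0.45), "Y": 0.0}
    neighbors = {"X": ["Y"], "Y": ["X"]}

    # messages[(sender, receiver)] = what sender last told receiver, in log-odds.
    messages = {("X", "Y"): 0.0, ("Y", "X"): 0.0}

    def belief(node):
        """A node's belief: its own evidence plus everything its neighbors told it."""
        return local[node] + sum(messages[(n, node)] for n in neighbors[node])

    for _ in range(10):
        for sender in ("X", "Y"):
            for receiver in neighbors[sender]:
                # The key step: report your belief *minus* the influence the
                # receiver has had on you ("dividing out", in probability terms).
                messages[(sender, receiver)] = belief(sender) - messages[(receiver, sender)]

    print(prob(belief("X")), prob(belief("Y")))  # both settle near 0.55; no spiral to certainty

On this two-node network the beliefs stabilize after a single exchange instead of reinforcing each other forever.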

Imagine Sal and Alex are talking about a controversial large cardinal axiom that’s been in the news lately. They’ve been friends for a while, and they always talk about the latest results in set theory. Sal asks Alex: “So, old buddy old pal, do you think it’s true?”

According to the belief propagation algorithm, when Alex responds to Sal, Alex should attempt to factor out the influence that Sal has had on Alex’s current beliefs. Suppose Sal has been talking for the past 15 minutes about the raw beauty of the new axiom. Alex should not take this into account when answering Sal. Otherwise, Alex runs the risk of causing Sal to double-count evidence: if Alex nods vigorously due to the raw beauty of the new axiom, Sal may become overconfident when no new evidence has been put on the table. Instead, Alex would seek to communicate other information, not already mentioned by Sal.

Of course, this only works if everyone knows that belief propagation is the communication protocol currently in play. If Sal was expecting Alex to be totally won over by the argument and instead receives that relatively cold answer, Sal may update in the opposite direction. In the real world, Alex should make the intention clear by saying “Well, before you talked to me I was thinking…”, “If you hadn’t won me over, I would have said…” or similar phrases.

There is an analogous problem dealing with preference falsification. Preference falsification is a big topic, but the core idea is that people’s stated preferences are edited based on the social context. People may not differentiate between personal preference and what they think the group should do based on everyone’s preferences; they may even purposefully obscure the difference due to social incentives. This warps the group consensus in much the same way as feeding back beliefs does. The effect is reduced if everyone can be clear, tell-culture style, about what they would want if they hadn’t heard the other’s preferences yet.

Belief propagation just happened to be the first solution researchers tried. In a large interconnected network, belief propagation is not guaranteed to do the right thing, but it does well surprisingly often. Many other algorithms have been proposed since then, such as tree-reweighted belief propagation. Unfortunately, these largely seem too difficult to use as communication protocols for humans. Are there any other practical communication protocols we can apply to do even better?

Attaching Arguments

Belief propagation does not fully avoid double-counting of evidence; rather, it tends to double-count all the evidence roughly equally. This leads to a certain kind of overconfident belief structure.

If all of your friends say something, a belief-prop node will treat this as independent evidence and become overly confident in it. If everyone follows this policy, a group can lock in to overconfident beliefs despite avoiding double-counting in pairwise relationships.
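
Numerically (again with invented numbers), the lock-in looks like this: ten friends who all believe something because of one shared source get counted as ten pieces of evidence instead of one.

    import math

    def prob(log_odds):
        return 1 / (1 + math.exp(-log_odds))

    prior = 0.0                    # start undecided
    one_source = math.log(4.0)     # a single shared source worth 4:1 odds (made up)
    n_friends = 10                 # all ten friends are repeating that same source

    naive = prior + n_friends * one_source   # treating each friend as independent evidence
    honest = prior + one_source              # only one source's worth of evidence exists

    print(prob(naive))   # ~0.999999: locked-in overconfidence
    print(prob(honest))  # 0.8: what the shared evidence actually supports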

We probably can’t avoid this problem fully, but how do we mitigate it?

We could require beliefs to be backed by arguments, so that if our twelve friends give the same argument for their belief, we know that it’s not independent evidence; there’s only one argument’s worth of evidence. We might trust our friends’ beliefs, but in some sense we can’t fully accept them if we’re unable to go through the argument ourselves. This style of thinking will be familiar to mathematicians, who tend to feel they don’t really know something until they know the proof.

This is helpful, but we can’t just ignore everything with insufficient citation (or track down citations indefinitely). Realistically, we don’t always find out the reasons our friends believe what they believe. Instead, we attach a little [citation needed] sign to it in our heads if the information didn’t come with a source. (When we forget to attach the sign, sometimes bad things happen.)
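
A sketch of how one might track this (the names and structure are purely illustrative): each reported belief carries the argument it rests on, duplicate arguments collapse into one, and unsourced claims get the mental [citation needed] tag instead of counting as evidence.

    # Each reported belief carries the argument (or citation) it rests on, if known.
    reports = [
        {"friend": "Alex", "claim": "axiom_is_true", "argument": "elegance_of_the_axiom"},
        {"friend": "Sal",  "claim": "axiom_is_true", "argument": "elegance_of_the_axiom"},
        {"friend": "Kim",  "claim": "axiom_is_true", "argument": "consistency_result"},
        {"friend": "Ravi", "claim": "axiom_is_true", "argument": None},  # no source given
    ]

    independent_arguments = set()
    citation_needed = []

    for report in reports:
        if report["argument"] is None:
            # Keep the claim around, but flag it rather than counting it as evidence.
            citation_needed.append(report["friend"])
        else:
            # The same argument heard from several friends counts only once.
            independent_arguments.add(report["argument"])

    print(len(independent_arguments))  # 2 distinct arguments, not 4 pieces of evidence
    print(citation_needed)             # ['Ravi']: held with a [citation needed] tag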

This might be counter-intuitive to those who have internalized the idea that beliefs should be contagious:

Therefore rational beliefs are contagious, among honest folk who believe each other to be honest. And it’s why a claim that your beliefs are not contagious — that you believe for private reasons which are not transmissible — is so suspicious. If your beliefs are entangled with reality, they should be contagious among honest folk.

If your model of reality suggests that the outputs of your thought processes should not be contagious to others, then your model says that your beliefs are not themselves evidence, meaning they are not entangled with reality. You should apply a reflective correction, and stop believing.

This is also related to Aumann-style can’t-agree-to-disagree arguments.

However, all of this could be modelled formally. Attaching an argument (or a citation) to a belief reduces our uncertainty about which beliefs offer independent evidence, allowing us to integrate different information sources together with higher confidence.

The Structure of Your Uncertainty

We have to be really careful that the arguments we attach are not rationalizations. An argument written after the conclusion has been decided does not provide any additional evidence. You’ve got to attach your true reasons for forming the belief! Otherwise it’s just noise.

(Actually, this is not quite true — let’s take a small tangent to examine the claim. If I know that you’re rationalising a belief, cherry-picking arguments in its favor, then I will still be convinced if you find very strong arguments such as a mathematical proof. The thing is, I can also be justified in updating against what you are arguing for; if you can only find relatively weak arguments, that is evidence that stronger arguments don’t exist. Note, however, that this can also be a failure mode.)
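
A back-of-the-envelope version of that update, with invented probabilities: if a motivated search for arguments would probably have turned up something strong were the claim true, then hearing only weak arguments should move us against it.

    # Bayes update on filtered (cherry-picked) evidence; every number is an assumption.
    prior_true = 0.5

    # If the claim were true, assume a motivated arguer would usually find a
    # strong argument; if it were false, they rarely could.
    p_strong_if_true = 0.7
    p_strong_if_false = 0.1

    # Observation: despite searching, they offered only weak arguments.
    p_obs_if_true = 1 - p_strong_if_true    # 0.3
    p_obs_if_false = 1 - p_strong_if_false  # 0.9

    posterior_true = (prior_true * p_obs_if_true) / (
        prior_true * p_obs_if_true + (1 - prior_true) * p_obs_if_false
    )
    print(posterior_true)  # 0.25: weak arguments from a motivated search count against the claim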

The main point I want to drive home is that good communication strives to convey the exact structure of its uncertainty, in as much detail as is convenient given other constraints. There’s a little leverage to be had on the receiving end, by being more aware of these things: trying to infer how much evidence a communication actually carries, and figuring out how to integrate beliefs coming from different sources. There’s a lot more to be gained on the communicator’s end: being proactive in telling the audience how to update on the information, and being careful about stating the amount and type of evidence being conveyed.
