Sunday, August 3, 2008

Bayesian faith

I have often maintained that people who refuse to believe the mountains of accumulated scientific knowledge in domains where they hold counter-scientific but unyielding faith aren't necessarily behaving irrationally. At the risk of being quoted out of context as supporting religious dogma over science (I don't, and never have - if you quote this post, please quote it in its entirety), let me show you what I mean by that.

One of the intellectually rich contributions of computational thinking is the notion of Bayesian belief -- the idea that a rational agent must update its beliefs about possible events in the world based on some prior information and the evidence it sees. Sounds pretty straightforward, doesn't it? Well, maybe it only seems that way until things are defined more rigorously. 'Belief' in X, in this case, is a probability the rational agent maintains in its mind that X is true. Classic example: do I believe I should take an umbrella with me today, as I go out to work (where else!?)? Well, that depends on what I a priori believe about the weather today (the forecast was for a 40% chance of rain - maybe I shouldn't) and on what I see when I look out the window (evidence: menacing clouds ride low, gusts of wind seem to indicate a thunderstorm is on the way - uh-oh, maybe I should).
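To make the umbrella example concrete, here is a minimal sketch of that update in Python. The 40% prior is the forecast from above; the two likelihoods (how probable those menacing clouds are on a rainy day versus a dry one) are numbers I'm assuming purely for illustration.

```python
# A minimal sketch of the umbrella decision as a Bayesian update.
# The 0.4 prior comes from the forecast; the likelihoods are assumed for illustration.

prior_rain = 0.4                  # P(rain today), from the forecast
p_clouds_given_rain = 0.9         # P(menacing clouds | rain)    -- assumed
p_clouds_given_dry = 0.2          # P(menacing clouds | no rain) -- assumed

# Bayes' theorem: P(rain | clouds) = P(clouds | rain) * P(rain) / P(clouds)
p_clouds = (p_clouds_given_rain * prior_rain
            + p_clouds_given_dry * (1 - prior_rain))
posterior_rain = p_clouds_given_rain * prior_rain / p_clouds

print(f"Belief in rain after looking out the window: {posterior_rain:.2f}")
# -> 0.75: the clouds push a 40% prior up to about 75%, so take the umbrella.
```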

How exactly does one arrive at that decision, at that degree of belief? The beautiful thing is: there's a theorem about that. The best anyone can do was written down by a British Presbyterian minister by the name of Thomas Bayes in the 18th century. Bayes' theorem says that the posterior probability of a hypothesis equals its prior probability times the likelihood of the evidence under that hypothesis, normalized so that the probabilities of all competing hypotheses sum to one. The trick is: if you a priori believe something to be 100% true (or false), then no amount of evidence to the contrary will sway you. To see that for yourself, check out the awesome intuitive introduction to Bayesian reasoning by Eliezer Yudkowsky (or the intuitive and short version by Kalid at BetterExplained), then try updating a prior probability of 1 in any of the examples.
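You can check that trick with the same kind of calculation. A small sketch, again with made-up likelihoods chosen to favor the evidence strongly against the hypothesis: an ordinary prior gets demolished, while a prior of exactly 1 (or 0) never budges.

```python
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """One Bayesian update: returns P(hypothesis | evidence)."""
    p_evidence = p_evidence_given_h * prior + p_evidence_given_not_h * (1 - prior)
    return p_evidence_given_h * prior / p_evidence

# Evidence that strongly favors "not H" (numbers assumed for illustration):
against = dict(p_evidence_given_h=0.01, p_evidence_given_not_h=0.99)

print(bayes_update(0.4, **against))   # ~0.0067 -- an ordinary prior moves a lot
print(bayes_update(1.0, **against))   # 1.0     -- a dogmatic prior doesn't move at all
print(bayes_update(0.0, **against))   # 0.0     -- and neither does absolute disbelief
```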

It turns out that there is much evidence that our brain operates in a fashion consistent with Bayesian updating, on many levels: from immediate visual perception (I can see the letters I'm typing, but the higher-level prior context provided by my brain will prevent me from spotting some typos) to common-sense decision-making, as in the umbrella example. The scientific method, too, can be described in Bayesian terms, as we inspect new evidence and work out how much it discriminates between conflicting theories, and how much information is contained in it.
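One standard way to make "how much the evidence discriminates between conflicting theories" concrete is the likelihood ratio, often expressed in bits. This is not a claim about how the brain literally does it, just a rough sketch using the same illustrative numbers as before:

```python
import math

def evidence_in_bits(p_e_given_h, p_e_given_not_h):
    """Log-likelihood ratio in bits: how strongly the evidence favors H over not-H."""
    return math.log2(p_e_given_h / p_e_given_not_h)

# Menacing clouds, with the made-up likelihoods from the umbrella sketch:
print(evidence_in_bits(0.9, 0.2))    # ~2.2 bits in favor of rain

# An observation the theory nearly forbids but the rival theory expects:
print(evidence_in_bits(0.01, 0.99))  # ~-6.6 bits against the hypothesis
```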

So, if Bayesian-like processes in our brains are responsible for rationality, and if you have 100% faith in something, then, mathematically, no evidence will be of any consequence. You will still believe your pet theory with 100% probability. Maybe that's how unquestioned beliefs operate: they aren't necessarily functionally unquestioned (the brain's updating machinery is constantly at work), but the resulting belief is predictably stable.
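To see just how stable, here is one last sketch (same assumed likelihoods) that feeds a run of contrary evidence to two believers: one with near-certain faith and one with absolute faith. Even a prior of 0.999999 collapses after a handful of contrary observations; a prior of exactly 1 survives a thousand of them untouched.

```python
def run_updates(prior, n, p_e_given_h=0.01, p_e_given_not_h=0.99):
    """Apply n independent pieces of contrary evidence (illustrative likelihoods)."""
    for _ in range(n):
        p_evidence = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
        prior = p_e_given_h * prior / p_evidence
    return prior

print(run_updates(0.999999, 4))   # ~0.01 -- near-certainty erodes under evidence
print(run_updates(1.0, 1000))     # 1.0   -- absolute certainty never does
```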

But where could those crazily strong prior beliefs come from? That's a very important question, but one for another post.