[<< | Prev | Index | Next | >>]

Tuesday, October 05, 2004

Why We Rationalize



It occurred to me this morning, rather out of the blue, why people rationalize as they do. And I don't mean "because it makes them feel better" or anything like that. This is something more fundamental which explains even extreme cases such as those discussed in Phantoms in the Brain, and which is tied intimately to the hazards of reading.

My tenet, simple on its surface, is that rationalization is what we do, and further that under historical circumstances this was optimal. Of course this has been said many times before--that we are fundamentally rationalizing, rather than rational, beings--but now I have an idea just how absolute and deep this principle runs, why, and what it means.

I have long argued, and I think this is becoming more broadly entertained with time, that intelligence, or particularly perception and understanding (as opposed to, say, planning and decision making which come later), is akin to having a model of the perceptual space. That is, if we have a black box hooked up to a camera, that box can build up an understanding of the world by learning to model the input signals it receives through the video cable, just as we learn about the world through our senses. Here "model" means mimic (or predict--mimic over time). That is, internally the black box tries to construct a heterarchy of concepts which are interrelated in such a way as to imply or predict similar perceptual patterns to the ones which are actually being observed. Occam's Razor, in effect, further implies that if an effort is made to keep such a heterarchy as simple as possible, it is likely to roughly reflect an analogous heterarchy of actual relationships in "the real world", or wherever the percepts are coming from. I.e., I claim, and I think the brain is built upon this assumption, that in learning to efficiently model the perceptual world, we also acquire a decent model of the structure behind the perceptual world (i.e., the true nature of the universe, as it were).
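
To make the "black box" picture concrete, here is a minimal sketch in Python, under the cartoonish assumption that the perceptual stream has already been reduced to a sequence of discrete symbols; the box simply learns to predict each symbol from the one before it (a first-order model, far shy of a heterarchy of concepts, but the same principle of mimicry over time):

    from collections import Counter, defaultdict

    class StreamModel:
        """Toy 'black box': learns to predict the next symbol of its
        input stream from the symbol just seen."""

        def __init__(self):
            # context symbol -> counts of which symbol followed it
            self.counts = defaultdict(Counter)

        def observe(self, context, nxt):
            """Update the model with one observed transition."""
            self.counts[context][nxt] += 1

        def predict(self, context):
            """Expected distribution over the next symbol, given the last one."""
            seen = self.counts[context]
            total = sum(seen.values())
            if total == 0:
                return {}          # no expectation yet: anything goes
            return {s: n / total for s, n in seen.items()}

    # Feed it a repetitive "perceptual" stream and watch expectation form.
    model = StreamModel()
    stream = "abcabcabcabx" * 5
    for prev, cur in zip(stream, stream[1:]):
        model.observe(prev, cur)

    print(model.predict("b"))      # roughly {'c': 0.75, 'x': 0.25}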

A consequence of this process is that, at any given time, our understanding of what we see is based entirely on the model we have built to date. More specifically, the process works like this: Raw perceptual input comes in (through our senses, both external and introspective), and we compare it against our current model with the question "by what internal chain of concepts would we have imagined such a thing?" I.e., our understanding of the world is entirely based upon the way in which our current model of the world can explain what we see. And this is not just some additional clue we use or something--this is the very foundation of our understanding.

The selection of this "best fit" chain of concepts is based upon the junction between what are called top-down and bottom-up influences. Bottom-up begins with perception, and top-down begins with expectation (our current model), each in effect implying a sort of cone of possibilities, like two wide-angle searchlights pointed toward each other, where the solution lies in finding the shortest path through the model's labyrinth in between without straying too far into the dark. The resulting path, then, goes from the exact point of abstract expectation to the exact point of concrete perception, and this path constitutes our best "explanation" of the perception.
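
As a toy way to take the "shortest path" metaphor literally: suppose the model is just a small graph of concepts whose edges carry conditional probabilities, so the best explanation is the chain from expectation to percept with the greatest product of probabilities--equivalently, the shortest path when each edge costs the negative log of its probability. The concept names and numbers below are invented purely for illustration:

    import heapq
    from math import log, exp

    # Hypothetical fragment of a concept graph: edge weight ~ P(child | parent).
    # Names and probabilities are made up for the sake of the example.
    model = {
        "expectation:kitchen": {"person": 0.6, "appliance": 0.4},
        "person":              {"grandfather": 0.7, "stranger": 0.3},
        "appliance":           {"refrigerator": 0.9, "toaster": 0.1},
        "grandfather":         {"percept:grey-haired figure": 0.8},
        "stranger":            {"percept:grey-haired figure": 0.2},
        "refrigerator":        {"percept:grey-haired figure": 0.01},
    }

    def best_explanation(graph, expectation, percept):
        """Most probable chain from abstract expectation to concrete percept:
        Dijkstra over edge costs of -log(probability)."""
        frontier = [(0.0, expectation, [expectation])]
        visited = set()
        while frontier:
            cost, node, path = heapq.heappop(frontier)
            if node == percept:
                return path, exp(-cost)   # convert cost back to a probability
            if node in visited:
                continue
            visited.add(node)
            for child, p in graph.get(node, {}).items():
                heapq.heappush(frontier, (cost - log(p), child, path + [child]))
        return None, 0.0

    path, prob = best_explanation(model, "expectation:kitchen",
                                  "percept:grey-haired figure")
    print(" -> ".join(path), f"(p = {prob:.3f})")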

I don't want to get too much into how we update our model (learning) as it's not too germane here, but suffice it to say that with time we move our expectation (top-down) spotlight higher and higher, and we keep its beam as narrow as we can while still encompassing all that we have witnessed to date (everywhere on the ground, if you will, that the perceptual spotlight has ever been). More accurately, with time our expectation becomes ever more specific and refined (which is done efficiently via becoming more abstract), thus narrowing in ever more tightly around the actual limits of reality as witnessed thus far.

Now normally everything we witness is true--that is, it came from the real world--and as such falls rigorously within the relatively tight statistical distribution of real things. Furthermore, if all is working well, our model should be fairly accurate--which is to say it is reasonably narrow without precluding even remotely likely events. And given these two constraints, the process works and matures well as described above.

The question is, what happens when one day we meet something which our model precludes? This could happen for a number of reasons, all either exceptional or relatively modern: Phantoms in the Brain discusses people who have essentially had a piece of their model carved out by stroke or injury. More subtly, simply being raised in a controlled or limited context for a sufficiency of time could lead to over-specialization of one's model. And more recently, with the evolution of language it is now possible to grow our model from second-hand observations, or second-hand imaginations, both of which I claim our brains still treat essentially the same as first-hand observations, which may or may not, after errors of both source and communication, correlate with reality at all. Then there is emotional pruning, where an entire piece of one's model may be inhibited out of fear, ego preservation, or the like. Finally, of course, there's always the fact that one's brain simply isn't perfect and can make model-building errors.

Note I asked about things our model precludes rather than things our model does not predict well. The foundation of the whole approach, in general terms, begins with a model which predicts everything with equal likelihood and precludes nothing. Essentially the top-down spotlight starts out on the ground with its beam set 180 degrees wide to cover everything, and with time it builds refined expectations from experience. But at a given level of naivety, it always errs on the side of accepting too much--think of the breadth of imagination and conceptual acceptance of a child vs. an adult. It has to work this way because to preclude the actual, as I'm about to explain, breaks the whole system.
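
A sketch of that starting condition, assuming for simplicity that the possible outcomes can be enumerated: the expectation begins uniform and narrows as evidence accumulates, while a small pseudo-count keeps anything that is merely unseen from being literally precluded:

    from collections import Counter

    def expectation(observations, outcomes, pseudo=1.0):
        """Top-down expectation over possible outcomes: starts uniform (the
        180-degree beam) and narrows with evidence, but the pseudo-count means
        unseen outcomes are never assigned exactly zero."""
        counts = Counter(observations)
        total = len(observations) + pseudo * len(outcomes)
        return {o: (counts[o] + pseudo) / total for o in outcomes}

    outcomes = ["dog", "cat", "cow", "dragon"]
    print(expectation([], outcomes))                            # uniform: 0.25 each
    print(expectation(["dog"] * 60 + ["cat"] * 40, outcomes))   # narrowed; "dragon" tiny but nonzero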

Recall I said the best-fit path is chosen based upon the junction of top-down and bottom-up influences. Ideally this junction is performed literally as a product of probabilities, such that the odds of a given path being the right one (the "truth" as far as our algorithm is concerned) are proportional to the odds that this path would imply the (bottom-up) observations in question multiplied by the (top-down) expected odds of this path. The nature of a mathematical product is such that if either one is zero (or very small), the result is zero (or very small). In effect, the more pessimistic of the two wins when it comes to ruling out hypotheses, making way for other competing hypotheses that both can agree on. So, for instance, if your eyes tell you that you are standing face to face with your grandfather, but your expectation tells you this is impossible as he died years ago, the expectation dominates and the next most likely hypothesis bubbles to the surface--leaving you with the perception that you are standing before someone who looks very much like him but isn't actually him. Conversely, if you walk back into the room you were in just a moment before, where you know your grandfather is sitting alone, and you find instead someone else in his place, this time your eyes are the cynic, so even though you fully expected to see your grandfather there, you assume instead that somehow he left and someone took his place.
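
In rough code the selection looks something like the following sketch, assuming the competing hypotheses can be enumerated; all of the numbers are invented purely to illustrate the two grandfather scenarios above:

    def best_hypothesis(prior, likelihood):
        """Score each hypothesis as prior (top-down) times likelihood (bottom-up);
        either factor near zero vetoes it, and the best surviving hypothesis wins."""
        scores = {h: prior[h] * likelihood[h] for h in prior}
        total = sum(scores.values()) or 1.0
        posterior = {h: s / total for h, s in scores.items()}
        return max(posterior, key=posterior.get), posterior

    # Eyes say "that's grandfather", but the model says he is dead:
    # top-down vetoes, and the lookalike bubbles to the surface.
    prior      = {"grandfather": 0.0,  "lookalike": 0.05, "hallucination": 0.001}
    likelihood = {"grandfather": 0.9,  "lookalike": 0.6,  "hallucination": 0.9}
    print(best_hypothesis(prior, likelihood)[0])   # -> lookalike

    # The reverse: you fully expected grandfather in the room, but the face
    # you see fits him poorly; now bottom-up vetoes and "he left" wins.
    prior      = {"grandfather stayed": 0.9,  "someone took his place": 0.05}
    likelihood = {"grandfather stayed": 0.01, "someone took his place": 0.7}
    print(best_hypothesis(prior, likelihood)[0])   # -> someone took his place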

This process is the foundation of perception, fully automated at a subconscious level, such that the first we are aware of it is when the result hits us as an "observation". I put that in quotes to emphasize that unlike a raw, unfiltered percept (such as, say, a single dot of light on our retina), what we "observe" is in fact a synthesis of what we perceive and what we expect--and not just a mild sort of swaying, but a hard product that gives full veto power to both.

The consequence of this is that when we meet something which our model precludes, the truth is simply passed up and is replaced by whatever fits best from what's left. And since this is essentially normal operation, it is not flagged as an error, even though under certain circumstances some quite fantastic hypotheses can be elevated to best-fit, and consequently "obvious truth", by our perceptual system. In effect, when the bottom-up and top-down systems fail to meet, ruling out each other's likely paths, the unlikely paths become the likely ones. This sort of spontaneous recalibration is done constantly and so cannot be flagged as erroneous--it is what allows us to properly override either perception or expectation when one is in (non-over-restrictive) error, and it is what allows us to, for instance, see something in just a glimpse. But when, for whatever reasons, our model contains a gaping hole around some aspect of truth, the consequences can be quite striking.

Phantoms in the Brain discusses a woman whose mental model had been relieved of the left half of the world. While quite rational outside the domain of this defect (including demonstrating a seemingly normal understanding of the workings of a mirror...), when presented with an object on her left side visible to the right through a mirror, she attempted to grab for it through the mirror, and even consciously explained that in this particular case, the object was in the mirror. I.e., all reasonably correct interpretations having been lost by a mismatch between her top-down model and her bottom-up perceptions (since the object couldn't possibly be located in a half of the world that doesn't exist in her model), whatever was next in line, however unlikely it might have seemed against those missing competitors, triumphed as the perceptual truth.

I should somewhat soften the distinction between preclusion and failure to predict: a hypothesis needn't be diminished in probability from its a priori (ground-zero) state in order to be effectively precluded by the model--rather, it can simply lose enough ground to competing hypotheses by staying stationary while they advance. In effect it is the aggregate of the competing hypotheses that sets the bar--defining how narrow and specific the model is--which means in time all unsupported a priori hypotheses become effectively precluded by the model.
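
A toy numerical illustration of that effect, with an arbitrary doubling rate standing in for accumulating support:

    # The unsupported hypothesis keeps its a priori weight while a supported
    # competitor's weight (arbitrarily) doubles with each confirming observation.
    # Nothing is ever set to zero, yet after normalization the unsupported
    # share quickly becomes negligible--effectively precluded.
    a_priori_weight = 1.0

    for confirmations in (0, 10, 20, 30):
        competitor_weight = 2.0 ** confirmations
        share = a_priori_weight / (a_priori_weight + competitor_weight)
        print(f"after {confirmations:2d} confirmations, unsupported share = {share:.2e}")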

I heard tell of a tribe who lived in the thick jungle where they were unlikely to see a clearing of more than twenty feet or so their entire lives. One day, a curious anthropologist took one of them on a trip out of the jungle to see what they would think of the open plains. The tribesman, upon seeing a herd of cattle in the far-off distance, pointed with some confusion and said "ants?".

True or merely illustrative, this is an example of a model becoming over-restrictive with time due simply to limited context. The same is presumably true on a more abstract level of people raised in one culture or community, or exposed only to a restricted set of ideas. After a sufficiency of time, the problem easily goes beyond their simply not wishing to entertain other ideas or understand new things: they literally cannot perceive those things at all--the foreign ideas become like those cattle on the plains, which can only be seen as something entirely different from what they actually are.

Probably the most common source of blind spots is emotional blocking, where often whole classes of possible truths are simply snipped from the model in order to avert undesirable (perhaps even unsurvivable) perceptions. The most obvious of these is fault/responsibility assignment, where for many people the prospect of being personally at fault is, for one reason or another, simply untenable. With this path blocked, the more wholly at fault they actually are in some particular case, the more thoroughly their expectation model will prune away all reasonable interpretations, and the more far out their fall-back hypotheses will be, despite striking them as obvious truths. This is the pattern in general, that the more dead-center someone's blind spot is hit, the more preposterous on average their resulting perceptual interpretation can be.

Note these are errors of perception, not of judgment; nor is it irrationality at any consciously accessible level. It is not about being stubborn, but rather actually seeing a different world. Consequently, it is not something that can be overcome by being more rational in any direct sense, since it is the inputs to the rational process which are flawed.

Rationalization--the act of making something up based on what you already assume to be true in order to explain observations--is the very manner in which we work. Thus two obvious paths to staying in sync with reality are: first, to maintain as accurate a model as possible, and second, to question at the slightest provocation even the perceptually obvious. The latter especially is quite difficult since the sense of obviousness or truth is the very signal that tells our brain it needn't investigate further. Without it, we would be immediately and forever lost in the recursive furrows of doubt, and with it, we are subject to repeatedly skipping right over the completely fallacious. A heavy dose of metacognition is warranted.

I am reminded of the basics of Zen meditation, whose emphasis is not on rationality, per se, but on unfettered observation--on learning to give your bottom-up perception precedence over your top-down expectation. Obviously perception is impossible without applying your model at some level--since it is the application of the model against raw percepts which results in perception. But the act of focus can shine the spotlight from further down the heterarchy than it might by default, bypassing the more abstract layers of the model, and creating a more unbiased, child-like top-down expectation. On one hand, from here you will see less than if you looked down from further above, where more experience and abstraction can come into play. But on the other hand, if your model happened to be ruling something out, you may see something that was invisible to you before. To look and truly see something sometimes simply requires changing where you look from. (And every time you see something that was invisible before, you update your model and increase the chances of seeing it from higher up in the future. One of many ways to help maintain an accurate model...)

As an old friend of mine used to say, whenever she caught herself repeatedly missing something obvious, "I'll see it when I believe it".

[<< | Prev | Index | Next | >>]


Simon Funk / simonfunk@gmail.com