Friday, March 03, 2006
The Rest are Androids
Those who believe without reason cannot be convinced by reason. ---James Randi

Corollary: Those who believe with circular reasoning cannot be convinced by grounded reasoning. ---Simon Funk
More on the thought and language connection...
Some of the more explicit followers of the language-is-thought camp are Objectivists, owing largely to the alpha draft of Ayn Rand's theory of epistemology. (The irony does not escape me.) And while they are hardly the best example of this kind of thinking, they are the closest to home since most of the people I know are at least Objectivist sympathizers whereas, for instance, I know very few proto-Christians or others likely to think-by-word. So, I am going to use them as my example class even though there are larger and more representative ones out there.
My realization, looking back over years of debates with these folks--including and especially one particular now-prominent Objectivist author whom I engaged in debate regularly for a number of years--is that they are entirely correct. They do think and reason primarily with words. Where they were incorrect (and where I was incorrect in thinking the converse) is in assuming that I (and everybody) must think that way too.
This is a profound realization for me because suddenly I grok every perplexing debate I've ever had with these people. It now makes perfect sense why they think what they do, why they draw the conclusions they do, and most particularly why we regularly hit these brick walls where we can stare at the same thing and see something completely different.
Amusingly, the reason it's suddenly so clear to me is because I'm already intimately familiar with another example of the same model of thought: Traditional AI. It just never occurred to me to look at the two side-by-side until today. If it were a snake, it would have bitten me.
Part of what led me to this realization is a recent but familiar conversation I had with someone I know who is intelligent but of a mystical bent. Such people generally see me as being a "rational" thinker, and are highly skeptical of rational thought; and I am generally dismissive of anyone who is explicitly irrational, because where can you go with that? But of course few people are truly explicitly irrational, if one straightens out all the semantics. Much comes down to huge differences in what one calls "rational thought".
You see, what they are skeptical of is the same thing I am skeptical of. They are skeptical of old-fashioned AI style rule-based thinking, which appears on the surface and to its proponents to be provably correct, and yet in practice very often is clearly not. To wit, one line of orthodox Objectivist reasoning goes like this: Everything that exists has an identity (A is A), and a thing's identity is defined by the boundary between that thing and everything else, and since the universe exists and therefore has an identity, it cannot possibly be infinite in size because then it would have no boundary.
No, I'm not making this up.
Yes, I know a fair number of seemingly very intelligent people who are certain that the universe cannot possibly be infinitely large---not because of any empirical astronomical observations, but because it's philosophically impossible. Some of them may be older and wiser now, but I have copious email records of long and drawn out debates on such topics.
It is a great innovation of evolution that man is able to perform symbolic reasoning. Many people think of this as the basis of all reasoning, as the element that makes us intelligent as opposed to merely reactive. For these reasons, the dawn of AI attempted to emulate this process, with hard concepts represented by words or tokens, and logical rules of inference that were always correct. But the further they got into it, the more mysteries appeared. No matter how many rules they wrote down, there were always exceptions not covered or, worse yet, contradicted. Surely then more rules would fix it... because we humans are able to do it! And so they tried for decades, with no success. The results were always brittle--so hard (in their logic) that the smallest imperfection would become a huge crack. Even the simplest concept requires some hand-waving approximations to fit it into a nice rule, and every hand-wave is ultimately an imperfection. Pile these upon each other, confront them with reality, and *snap*! And yet, many of these systems were provably internally consistent. The AI itself knew exactly what was going on, and could generate a trail of logical productions to prove it. Just like an Objectivist.
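That brittleness is easy to demonstrate with a toy forward-chaining rule system--a minimal sketch of my own, not taken from any particular AI project, with invented rules and facts:

```python
# A toy rule-based inference engine: hard rules, no uncertainty.
# Each rule says: if the premise holds, the conclusion holds. Always.
rules = [
    ("bird", "flies"),    # hand-wave: "all birds fly"
    ("penguin", "bird"),  # penguins are birds
]

def infer(facts):
    """Naive forward chaining: apply rules until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(infer({"penguin"})))  # ['bird', 'flies', 'penguin']
```

The derivation that penguins fly is provably correct within the system, and the system can show you the trail of productions. One real penguin cracks it anyway; patching in an exception rule just moves the crack to the next exception.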
The fallacy is in believing that symbolic reasoning is synonymous with intelligence. In truth, 99% of intelligence involves no symbolic reasoning at all, but rather a simple heterarchical statistical model of empirical reality. I.e., real intelligence is by definition grounded in the domain of percepts--a concept which even Ayn Rand understood, but which she failed to fully appreciate when she simplified and overgeneralized her abstractions, thereby disconnecting them from elements of their own foundations. While she appreciated the need for there to be a path to ground, she did not appreciate the need to preserve all paths to ground. Hand wave, hand wave, we have to omit a few details in order to make a general rule--something she did semi-explicitly under the name "measurement omission".
The failure of logical reasoning is not that it is incorrect, it is simply that it only applies to that which abides by all of its premises. This means that the slightest deviation of an object under consideration from the definitions which absorb that object into the symbolic realm effectively undermines any provable correctness of the results. Symbolic reasoning is at best a guide that helps us search very large problem spaces very quickly--it is a tool of efficiency, not a provider of truth. Even in its most diligent and refined forms, such as in the mathematical treatment of physics, the equations are only applicable to reality in as much as the premises behind them are. E.g., Newtonian physics is only correct within the limits of empirical measurement (human-scale size and time) from which Newton's own premises were derived.
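The Newtonian example can be made concrete by comparing the Newtonian and relativistic formulas for kinetic energy. The numbers below are standard physics, not anything from the original text; the point is only to show a rule holding where its premises hold and failing where they don't:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def ke_newton(m, v):
    """Newtonian kinetic energy: (1/2) m v^2."""
    return 0.5 * m * v**2

def ke_relativistic(m, v):
    """Relativistic kinetic energy: (gamma - 1) m c^2."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return (gamma - 1.0) * m * C**2

# At human-scale speeds the premise (v << c) holds and the formulas agree:
v = 30.0  # m/s, roughly highway speed
print(ke_relativistic(1.0, v) / ke_newton(1.0, v))  # ratio ~ 1.0

# Near the speed of light the premise fails, and so does the rule:
v = 0.9 * C
print(ke_relativistic(1.0, v) / ke_newton(1.0, v))  # ratio ~ 3.2
```

Newton's equation is not "wrong" so much as scoped: it is provably correct about a world that satisfies its premises, and reality only satisfies them approximately, at certain scales.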
But symbolic reasoning is widely practiced in a much less diligent form--that of thinking with words. And for most, the conclusions they draw in this form are gospel. They are the stereotypical "rationals" who give rationality a bad name; and I believe they are, nowadays, the vast majority.
The problem is that fundamentally we suck at symbolic reasoning, quite contrary to many people's assumptions. We simulate the process of logical thought by modeling perceptual patterns that generally coincide with it. The reason all those AI systems failed is because they were too good at logical thought, and not only did that have little bearing on how humans actually solve anything, it was explicitly in contradiction with it. Symbolic reasoning, as applied in the human mind, provides an exact answer which is possibly approximately correct. When one forgets the caveats and puts too much faith in one's logical reasoning, the other 99% of the mind, where the real intelligence lies, begins to atrophy, and with it all the connections to empirical reality.
Which brings me to my last amusing connection. Someone once told me there are only eight hundred real people in the world, and the rest are androids. My appreciation for that perspective is now deeper than ever.