
Tuesday, May 16, 2006

Consciousness Is



A friend of mine recently attended a conference on the philosophy of mind (i.e., what is the mind vs. the brain; can a computer ever be "conscious", that sort of thing). It seems there are still many (perhaps even a vast majority) who believe the mind/consciousness could never be achieved in a "computer program"--that mind/brain "dualism" is not a false dichotomy, and that the mind is clearly more than merely an information process. Following is an excerpt from my quick email to him on the matter. (Some day I ought to write all this up properly...)

What about writing a computer program that operates at some middle symbolic level? (Not explicitly modeling neurons, but using Bayesian principles or some other learning algorithm.) My intuition says that such a program could perhaps evidence degrees of 'intelligence' but would not be conscious. At least not conscious like us.

I believe "conscious like us"--in the sense of introspective experience which I assume is what you are referring to--amounts essentially to the "simple" fact that we perceive whatever we are focusing on at any instantaneous moment. I.e., even some middle-symbolic level implementation could have the same behavior, and would have the same sort of introspective experience. Think about every bit of subjective evidence you have of your own consciousness/awareness, and tell me what bit of it doesn't amount to you observing something (about yourself). I.e., just like your eyes dart about a room to pick up the details of various areas of interest, your internal perception darts about your echoic memory, your visual memory, your emotional percepts, and so on; and the "homunculus" looking at all of this is, imo, a dirt-simple, ancient algorithm for search, planning, and navigation. Consciousness/awareness is the union of those two things -- the "intentional" navigational core combined with a rich perceptual heterarchy.

When you see yourself thinking, you're just observing yourself observing. Suppose you had a black box that could automatically learn a heterarchy of abstractions from visual input (so that high-level symbols such as "piano" would automatically be discovered and then triggered when visually focusing on a piano). Now imagine that your search algorithm uses that heterarchical model to do mental planning and exploration (e.g., "visualize a piano..."). What's to keep parts of that black box from learning a heterarchical abstraction of its own thinking processes--i.e., from learning to observe the temporal conceptual patterns that the navigation engine is generating internally? Those abstractions then become more high-level inputs that the navigation engine gets to work with (ultimately through associations with pain and pleasure and other hardwired measures of good and bad).
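To make the shape of that loop concrete, here is a toy Python sketch. Everything in it is a hypothetical stand-in invented for illustration (the names PerceptualHeterarchy and NavigationEngine, the little buffer of recent activity, an "abstraction" that is nothing more than counting repeated patterns), so take it as a picture of the feedback structure, not a proposal for how to build the real thing:

    from collections import deque

    # Toy sketch only: all names are hypothetical, and "abstraction" here
    # is nothing more than interning repeated patterns.

    class PerceptualHeterarchy:
        def __init__(self):
            self.familiarity = {}  # how often each pattern has been seen

        def abstract(self, features):
            # Return the pattern itself as its "high-level symbol" while
            # tracking how familiar it is (a real system would build a deep
            # hierarchy of learned concepts instead).
            key = tuple(features)
            self.familiarity[key] = self.familiarity.get(key, 0) + 1
            return key

    class NavigationEngine:
        def __init__(self, heterarchy):
            self.heterarchy = heterarchy
            self.recent_activity = deque(maxlen=3)  # its own last few "thoughts"

        def step(self, external_features):
            # The same black box abstracts both the external input and the
            # engine's own recent activity; the latter is the meta-observation
            # that lets the system "see itself thinking".
            percept = self.heterarchy.abstract(external_features)
            self_percept = self.heterarchy.abstract(self.recent_activity)
            thought = ("attend", percept, self_percept)
            self.recent_activity.append(thought)
            return thought

    engine = NavigationEngine(PerceptualHeterarchy())
    engine.step(["black", "white", "keys"])  # looking at a piano
    engine.step(["black", "white", "keys"])  # this step's self-percept already
                                             # contains the previous thought

The only point of the toy is the last two lines: the second call's input includes an abstraction of the first call's internal activity, which is all that "observing yourself observing" requires structurally.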

Note that people would still generally consider themselves conscious when they are "in the flow", even though that is a state essentially defined by a lack of introspection. Introspection is just another level of observation--meta-observation, granted, but there's nothing categorically different about that from simple external observation other than where the input is coming from.

Wherever our navigation engine directs our focus in the moment, our perceptual apparatus abstracts what it finds there, and these abstractions, particularly the highest-level ones, carry emotional associations which our navigation engine uses to decide (via simple, old, hard-coded algorithms--but don't underestimate the utility of cortically-learned associations) what to do with them. That's pretty much the whole process of consciousness, imo.
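Here is that whole loop (focus, abstraction, emotional association, decision) as an equally cartoonish Python sketch. Again, every name and number is a hypothetical illustration of the structure, not a claim about mechanism; the hardwired pain/pleasure table and the crude value update merely stand in for whatever the real machinery is:

    # Toy rendering of the loop described above: the navigation engine picks a
    # focus, the perceptual apparatus abstracts it, and the abstraction's
    # learned emotional association (anchored to hardwired pain/pleasure)
    # drives the next choice of focus. All names are hypothetical.

    HARDWIRED = {"pain": -1.0, "pleasure": +1.0}  # innate good/bad signals

    class Agent:
        def __init__(self):
            self.value = {}  # learned emotional association per abstraction

        def abstract(self, focus, world):
            # Stand-in for the perceptual heterarchy: in this toy world the
            # "abstraction" of a focus target is just its label.
            return world[focus]

        def decide(self, world):
            # Simple, old, hard-coded rule: attend to whichever target's
            # abstraction currently carries the best emotional association.
            return max(world, key=lambda f: self.value.get(world[f], 0.0))

        def step(self, world, feedback):
            focus = self.decide(world)
            concept = self.abstract(focus, world)
            # Nudge the cortically-learned association toward the hardwired
            # feedback actually received at this focus.
            reward = HARDWIRED.get(feedback.get(focus), 0.0)
            old = self.value.get(concept, 0.0)
            self.value[concept] = old + 0.5 * (reward - old)
            return focus, concept

    agent = Agent()
    world = {"stove": "hot-surface", "window": "sunlit-view"}
    feedback = {"stove": "pain", "window": "pleasure"}
    for _ in range(4):
        print(agent.step(world, feedback))
    # After one painful glance at the stove, attention settles on the window.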





Simon Funk / simonfunk@gmail.com