Monday, December 05, 2005
Beyond the Singularity
Following is a recent email to a friend who was concerned about the impending Singularity. (I hadn't actually read that link until just now, and had no idea how literally my portrayal matched the views expressed there.)
Imo, the singularity proponents have made a few errors along the way which lead to an impressive set of erroneous conclusions. The primary fallacy of the whole singularity camp is mistaking an exponential growth curve (which is what we've been on since the stone age, plus or minus a few hiccups like the dark ages) for an asymptotic one. A secondary fallacy which commonly leads to the first is to ignore the cost of one or more dimensions of a problem.

To exemplify: Achilles races the tortoise to the finish line, but Achilles never gets there because between here and there he has to get halfway there first. But once there, he has to get halfway again! And so on to infinity, so he never reaches the end! This is the singularity argument, just applied in a different direction. The error is simple: ignoring the time each step takes. When you add that back into the equation, and add it all up, you discover there is a finite amount of time, so Achilles does make it.

Likewise, there is this notion of a "seed AI" or some such, where it programs itself to be smarter and smarter and this goes super-exponential, very much like saying we (playing Achilles) never reach some point in the future because this other event which obsoletes us takes a linear step in half as much time again--i.e., one day to become X amount smarter, half a day to become X amount smarter again, a quarter of a day to become X amount smarter again, and so by the time we reach the end of the second day--the singularity--it is infinitely smart!

What's being ignored here is the practical, physical, real-world cost of information processing. The amount of computation theoretically possible with any finite amount of resources is finite, and the rate of growth of efficiency in utilizing those resources is, by historical precedent, at best exponential with a fairly small exponent (something like the amount the GDP of the world grows each year relative to the last). I.e., to be concrete: while it may program itself to be smarter, it will necessarily be slower as a result. And while it may program itself to be faster, it takes time to do so, and physical resources remain a limitation--and physical resources also demand time to manipulate: chemical processes will only happen so fast, metals bend only so fast without snapping, things heat and cool only so fast without cracking or melting. All of these limitations impose a time and effort cost on all progress, and they will bend no faster for an AI than for a biological one.

Will AIs eventually be smarter than humans? Certainly. But that's just a point of subjective interest along an exponential curve of technological growth that's been brewing since the primordial soup.
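The arithmetic behind the Achilles example is just a geometric series, and it's easy to check numerically (a quick sketch of my own, not from the original email):

```python
# Step n takes (1/2)**n days: 1 day, then half a day, then a quarter...
# Summing the per-step costs shows the infinitely many steps fit
# inside a finite window (2 days) -- which is why Achilles does
# arrive after all once you count the time each step takes.
def total_time(steps):
    return sum(0.5 ** n for n in range(steps))

for n in (1, 4, 16, 64):
    print(n, total_time(n))  # partial sums approach 2.0
```

The same sum is what the "infinitely smart by day two" story relies on; the email's point is that the physical per-step costs don't actually keep halving like this forever.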
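The distinction between exponential and truly asymptotic growth can also be made concrete. Below is a toy contrast (my illustration; the curves, constants, and function names are assumptions, not from the email): an exponential curve is finite at every time, however large, while a hyperbolic curve has an actual singularity at a fixed date.

```python
import math

# Exponential growth: x(t) = e**(k*t). Finite for every finite t,
# no matter how far out you go -- there is no "singularity" date.
def exponential(t, k=0.03):
    return math.exp(k * t)

# Hyperbolic growth: x(t) = 1/(T - t). This is what a real
# singularity looks like: it blows up as t approaches T.
def hyperbolic(t, T=100.0):
    return 1.0 / (T - t)

# Just shy of the hypothetical date T=100, the hyperbola has
# exploded while the exponential is still perfectly tame.
print(exponential(99.999))  # about 20
print(hyperbolic(99.999))   # about 1000
```

Mistaking the first curve for the second is, on the email's account, the core error of the singularity camp.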
Simon Funk /
simonfunk@gmail.com