Monday, February 13, 2006

meanderings on machine intelligence

As discussed in the previous post, intelligence is based primarily on learned patterns, the ability to generalize, and the ability to predict. Hawks, according to Hawkins, would not qualify as intelligent because they have no neocortex and are unable to predict (anticipate a certain outcome based on past experience). He claims prediction, not behavior, is the basis of intelligence.

However, I assure you that hawks predict quite well. They return to perches from which they've caught quarry in the past. They can remember these perches after not seeing them for months. (Eagles are higher up on the intelligence scale, as they have rather good memories.) They remember that specific actions have specific results, and are disconcerted when the results don't happen. They can recognize the content of pictures, though they may have trouble distinguishing them from the real thing. They know what a 'dog' is because it looks like one.

Is this intelligence? Is a neocortex actually required to be considered intelligent? Or is anthropocentricity, or mammalocentricity, behind this claim? What do mice have that hawks do not? (Answer: calories.) Humans cannot see infrared or ultraviolet unaided. Would we be perceived as unintelligent by creatures who can naturally see these colors? (Kestrels can see into the UV range, which helps them find mice.)

What do humans have that every other animal does not? Language. The ability to convert ideas and objects into symbols, the ability to agree that a specific symbol means the same thing to all interested parties, and the ability to understand that the symbols can completely stand in place of the idea or object itself. We've had this ability since we were cavemen drawing bison. And it hinges on the facts that we have well-developed vocalization and an opposable thumb with which to draw symbols. If I tell you that my bird has one white talon, you could probably pick him out of an entire breeding project of Harris hawks. I need not describe to you the color of his eyes or plumage, or give you a photograph (which wouldn't do much good anyway, since all Harris hawks look alike). I may not even need to tell you which talon is white, or even his gender.

Computers are entirely about symbols: letters and numbers. However, they do not easily understand ideas and objects. They cannot behave in an intelligent manner without humans telling them what bits of reality are important and what aren't, because only humans have bridged the gap between symbols and the ideas and objects they represent.

The second aspect of this example is that I automatically know which quality is most unusual and distinguishes him from other Harrisi. This is another inherent human quality, a survival skill: notice things that deviate from the norm. Perhaps this is the way to make machines intelligent. Put a bunch of them in a hazardous environment, destroy the ones that don't observe the right details, and let the others see the result of insufficient vigilance. Of course, they wouldn't care about the deaths of their companions unless they were programmed to do so.

Seriously now, can this bridge between symbol and reality be programmed into a computer? Even with the fastest processors and a memory capacity as large as can be imagined, no. Granted, it's theoretically possible to build a machine with video and tactile inputs, and give it enough basics to interpret the input. You can tell it that reality is what comes to it through these inputs, and that it is separate, but not entirely separate, from the factual knowledge that we program into it. (The jumpy thing on the floor is a dog, and the sacked-out thing in the corner is also a dog. A Jack Russell is a type of dog, that's the jumpy one; the sacked-out one is a Lab. The Lab's name is Woofer and the Jack Russell's name is Tweeter, you are a robot, etc.)
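To make the point concrete, here is what all that spoon-feeding amounts to: a hand-coded fact table. This is a toy sketch, not any real robotics system; the names come from the example above, and the structure is my own illustrative assumption.

```python
# A toy fact base. Every "bridge" between a symbol and an object is
# hand-coded by a human; the machine has no grounding of its own.
facts = {
    "Woofer": {"kind": "dog", "breed": "Lab", "state": "sacked out"},
    "Tweeter": {"kind": "dog", "breed": "Jack Russell", "state": "jumpy"},
    "self": {"kind": "robot"},
}

def what_is(name):
    """Answer only from pre-programmed facts -- nothing else exists for it."""
    entry = facts.get(name)
    if entry is None:
        # Anything a human didn't label is simply invisible to the machine.
        return "unknown"
    return entry["kind"]

print(what_is("Tweeter"))                        # dog
print(what_is("the jumpy thing on the floor"))   # unknown
```

The machine can answer "Tweeter" only because a human typed in the mapping; the jumpy thing on the floor, which is the same animal, means nothing to it.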

But this would be only a semblance of a bridge. The machine can draw basic conclusions if we program it to do so. It could pick up fifty thousand facts just sitting in an empty meeting room, but it cannot distinguish important facts from unimportant ones. And the task of defining those distinctions is too enormous for humans to manage. ("Yes, that's a chair. That's a chair too. And this is also a chair. That's a couch. This is a chair....")
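The sticking point above can be sketched in a few lines. Collecting facts is the easy half; the "importance" filter itself has to be hand-written, rule by rule. This is a hypothetical illustration of mine, with made-up observation names, not a real perception API.

```python
# Facts gathered while "sitting in a meeting room" -- cheap to collect.
observations = [
    ("chair_count", 12),
    ("carpet_color", "gray"),
    ("ceiling_tiles", 144),
    ("exit_sign_lit", False),
]

def important(fact):
    """The machine's entire sense of 'what matters' is this hand-written
    rule. A human decided an unlit exit sign is the notable deviation;
    the machine has no innate notion of deviating from the norm."""
    name, value = fact
    return name == "exit_sign_lit" and value is False

print([f for f in observations if important(f)])
```

Every new kind of room, and every new kind of deviation, would need another human-supplied rule, which is exactly the enormity of the task.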

Next question: can computers be made intelligent in the way Hawkins defines it, to be able to predict and anticipate? Yes and no. They can predict and anticipate, but again, only as behaviors programmed by humans, because they still don't have the bridge. Hawkins' idea of modeling computers after human brains will not automatically make them intelligent.

It's like handing me tons of girders, wood, and nails and telling me to build an Eichler. Given all the materials and tools, I could build a pretty nice mews (read: shack), but it won't be a livable house and sure ain't going to be an Eichler. I would need plans, instructions for every simple little thing, from where to drill holes for the wiring to how to lay the heating pipes in the floor. Fortunately, I do know how to wield a hammer, which a computer still needs to be told how to do.
