Sunday, February 12, 2006

on intelligence: intelligence off

Generally speaking, I tend to be skeptical of things Wired gets excited about. The magazine is just too golly-gee-ain't-we-clever, and gets euphoric over the re-invention of yet another wheel.

Predictably, "On Intelligence" gives me about the same level of excitement. I received this book as a gift and have spent the past day reading it. Admittedly, Jeff Hawkins (or his co-writer, Sandra Blakeslee) writes adequately readable prose, but I would not call it engaging. Those who have never read or thought about brains, cognition, or artificial intelligence will finish the book feeling like they've learned something. To everyone else, the book is all old information and old ideas – ground already well-trod.

One encounters afterthought sentences such as "Scientists use the words anterior for the front and posterior for the back," and one realizes that the book's intended audience is high school students. Any other assumption and Hawkins is simply insulting the intelligence and knowledge of his readers.

The writing tends toward windiness and spots of self-aggrandizement, and is glutted with lengthy analogies, several of which are gross oversimplifications. (Toward the end of the book he says without irony, "False analogy is always a danger.") There are in fact so many analogies, it's hard to find the actual ideas. Actually, there aren't that many ideas.

For more readable and far more original concepts, I recommend Julian Jaynes' "The Origin of Consciousness in the Breakdown of the Bicameral Mind", or almost anything by Daniel Dennett on consciousness.

Hawkins misses the point of the Turing test, wishes to see "genuine understanding" in machines, and denigrates all historical attempts (e.g. neural networks) to create it. "Understanding" and "intelligence" are words with many meanings. Turing's imitation of intelligence is based on results: the machine doesn't have to understand what it's doing, so long as the result or response is indistinguishable from that of a human. Hawkins insists a machine must "understand" what it's doing in order to be "intelligent." As true as this may be (using his definitions of these terms), I doubt human-style understanding in machines will happen for centuries, if ever. It's far too complex. Plus, I do not want a machine quoting Orwell or Kafka at me to explain why it wasn't able to do its task, which is what it would do if it really did understand what it was doing.

However, imitation of understanding exists now, and can be further developed. We see its basics in phone systems that interpret spoken commands – our Bay Area's 511 traffic/transportation reporting system is very cool – though it helps when it's programmed to bring in a human when you tell it you want a large chicken pesto pizza with bacon bits. Intelligent? Technically, no. Can it help me pick the faster road to drive? Hell yes, many times. Do I want it to be any more intelligent than that? It would be interesting if it offered to recommend a route – a feature probably fairly easy to program in (given a source and a destination, pick a route by favoring the highest average speed while minimizing additional distance). But if intelligence means it'll tell me I should turn the wipers on because it's raining 10 miles down the road – no. For that, I have my mother.
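The route-recommendation heuristic I'm imagining could be sketched in a few lines. This is purely my own toy version, not anything 511 actually runs; the route names, speeds, and the weighting parameter are all made up for illustration.

```python
def pick_route(routes, extra_distance_weight=0.5):
    """Each route is a (name, miles, avg_mph) tuple; lower score wins.

    Score = travel time at the current average speed, plus a penalty
    for any extra distance beyond the shortest candidate route.
    """
    shortest = min(miles for _, miles, _ in routes)

    def score(route):
        _, miles, avg_mph = route
        travel_time = miles / avg_mph              # hours at reported speed
        extra_miles = miles - shortest             # out-of-the-way penalty
        return travel_time + extra_distance_weight * (extra_miles / avg_mph)

    return min(routes, key=score)

# Hypothetical traffic snapshot: the short road is congested,
# the longer one is free-flowing.
routes = [
    ("I-880", 22.0, 35.0),
    ("I-680", 27.0, 62.0),
]
print(pick_route(routes)[0])   # picks the faster road despite extra miles
```

The weight on extra distance is the knob that keeps the system from sending you fifty miles around town to save two minutes.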

The only interesting idea put forth (not an idea original to Hawkins) is that intelligence is based primarily on learned patterns, the ability to generalize, and the ability to predict. This is, indeed, all we do, starting from babyhood. We learn the varying tones of voice that belong to one parent or another, the parts of them that change (clothing, hair, makeup) and those that do not, and the fact that when certain sounds or actions occur, certain other ones follow. We see the mobile spinning around and learn that the orange duck is at this spot now, and should be at another spot in a certain amount of time – and if it isn't, we'll start screaming. We learn subtle things, too: the fear a parent feels when confronted by, say, an aggressive dog, is echoed in or exhibited by certain postures or motions, which are transmitted to the child well before any overt command or words are spoken. (And in this single paragraph I have pretty much summarized about sixty pages.)
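The learn-patterns-then-predict loop above is simple enough to sketch. This is my own toy illustration, not Hawkins' cortical model: it just counts which event tends to follow which, then predicts the most common successor – the baby watching the mobile, in fifteen lines.

```python
from collections import defaultdict, Counter

class SequencePredictor:
    """Toy pattern learner: remembers which event usually follows which."""

    def __init__(self):
        self.follows = defaultdict(Counter)

    def learn(self, sequence):
        # Count every observed (event, next_event) pair.
        for a, b in zip(sequence, sequence[1:]):
            self.follows[a][b] += 1

    def predict(self, event):
        # Return the most frequently observed successor, or None if
        # this event has never been seen before.
        successors = self.follows.get(event)
        return successors.most_common(1)[0][0] if successors else None

p = SequencePredictor()
p.learn(["doorbell", "bark", "doorbell", "bark", "doorbell", "bark"])
print(p.predict("doorbell"))   # the learned expectation: "bark"
```

When reality violates the prediction – the duck isn't where it should be – the mismatch itself is the signal, which is the screaming part.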

Can a machine have sufficient memory and connections to do all that? Hawkins' answer is a bit too smoke-and-mirrors. He describes the qualities of silicon chips, says they're insufficient, then breezily assures the reader that sometime in the future there will be something sufficient. And that it should be modeled on his alternate view of the cortex, described in his most technical chapter, "How the Cortex Works". (Hawkins presents this hierarchy as his original concept, but it is simply the traditional model with more detail added.) How the model and the nonexistent chips will transform into an intelligent system, Hawkins does not say. It will simply happen. That's to be expected – he's a CEO, and details are for engineers.

To quote the immortal Richard Feynman:
What often happens is that an engineer has an idea of how the brain works (in his opinion) and then designs a machine that behaves this way. This new machine may in fact work very well. But...that does not tell us anything about how the brain actually works, nor is it necessary to ever really know that, in order to make a computer very capable. It is not necessary to understand the way birds flap their wings and how feathers are designed in order to make a flying machine... It is therefore not necessary to imitate the behavior of Nature in detail in order to engineer a device which can in many respects surpass Nature's abilities.

Hawkins dips his toe, in a mercifully brief chapter, into the concepts of consciousness, creativity, and reality. It reads a bit too much like a self-improvement book, and is a chapter better skipped. Actually, skip the entire book, please, and read Jaynes and Dennett.
