Stephen Wolfram Q&A
Some collected questions and answers by Stephen Wolfram
Questions may be edited for brevity; see links for full questions.
February 6, 1998
From: Interview by David Stork, HAL's Legacy: 2001's Computer as Dream and Reality
Do you really think that we can get a handle on profoundly hard, high-level problems of AI—such as my favorite, scene analysis—by looking at something as “simple” as cellular automata?
Definitely. But it takes quite a shift in intuition to see how. In a sense it’s about whether one’s dealing with engineering problems or with science problems. You see, in engineering, we’re always used to setting things up so we can explicitly foresee how everything will work. And that’s a very limiting thing. In a sense you only ever get out what you put in. But nature doesn’t work that way. After all, we know that the underlying laws of physics are quite simple. But just by following these laws, nature manages to make all the complicated things we see.
It’s very much connected with the things I talk about in A New Kind of Science. It took me more than 10 years to understand it, but the key point is that even though their underlying rules are really simple, systems like cellular automata can end up doing all sorts of complicated things—things completely beyond what one can foresee by looking at their rules, and things that often turn out to be very much like what we see in nature.
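To make this concrete, here is a minimal sketch (in Python, not part of the original interview) of one such system: the elementary cellular automaton known as rule 30, one of the cellular automata studied in A New Kind of Science. Its update rule is nothing more than an eight-entry lookup table on three neighboring cells, yet starting from a single cell it produces an intricate, irregular pattern that one could not readily foresee from the rule itself.

```python
# Minimal sketch of an elementary cellular automaton (rule 30).
# The update rule is an 8-entry lookup table on a cell and its two neighbors,
# yet the pattern grown from a single "on" cell looks highly irregular.

RULE = 30  # the rule number encodes the lookup table in binary


def step(cells):
    """Apply the rule once to a row of 0/1 cells (edges wrap around)."""
    n = len(cells)
    new = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (center << 1) | right  # neighborhood as a number 0..7
        new.append((RULE >> index) & 1)              # look up the new cell value
    return new


def run(width=79, steps=38):
    cells = [0] * width
    cells[width // 2] = 1  # start from a single "on" cell in the middle
    for _ in range(steps):
        print("".join("#" if c else " " for c in cells))
        cells = step(cells)


if __name__ == "__main__":
    run()
```

Running it prints a triangular pattern whose left edge is regular but whose interior looks essentially random, even though every cell follows the same trivial lookup rule.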
The big mistake that gets made over and over again is to assume that to do complicated things, one has to set up systems with complicated rules. That’s how things work in present-day engineering, but it’s not how things work in nature—or in the systems like cellular automata that I’ve studied.
It’s kind of funny: one never seems to imagine how limited one’s imagination is. One always seems to assume that what one can’t foresee can’t be possible. But I guess that’s where spending 15 years doing computer experiments on systems like cellular automata instills some humility: over and over again, I’ve found these systems doing things that I was sure wouldn’t be possible—because I couldn’t imagine how they’d do them.
It’s like bugs in programs. One thinks a program will work a particular way, and one can’t imagine that there’ll be a bug that makes it work differently. I guess intuition about bugs is a pretty recent thing: in 2001 there’s a scene where HAL talks about the fact that there’s never been a “computer error” in the 9000 Series. But the notion of unforeseen behavior that isn’t due to hardware malfunction isn’t there.
Anyway, about “hard” problems in AI: my own very strong guess is that these will be solved not by direct engineering-style attacks, but rather by building things up from simple systems that work a bit like cellular automata. It’s somewhat like the hardware-versus-software issue we discussed earlier: in the end I don’t think elaborate special-purpose stuff will be needed for problems like scene recognition; I think they’ll be fairly straightforward applications of general-purpose mechanisms. Of course, nobody will believe this until it’s actually been done.