Stephen Wolfram Q&A
Some collected questions and answers by Stephen Wolfram
Questions may be edited for brevity; see links for full questions.
November 12, 2008
From: Interview by Carlos Gershenson, Complexity: 5 Questions
How would you define complexity?
Formal definitions can get all tied up in knots—just like formal definitions of almost anything fundamental: life, energy, mathematics, etc. But the intuitive notion is fairly clear: things seem complex if we don’t have a simple way to describe them.
The remarkable scientific fact is that there may be a simple underlying rule for something—even though the thing itself seems to us complex. I found this very clearly with simple cellular automata. And I’ve found it since with practically every kind of system I can define. And although they weren’t really recognized as such, examples of this had been seen in mathematics for thousands of years: even though their definitions are simple, the digits of things like √2 or π, once produced, seem completely random.
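As a rough illustration of that point (not something from the interview itself), here is a minimal Python sketch of an elementary cellular automaton, rule 30, whose one-line update rule nonetheless produces an intricate, seemingly random pattern. The grid width, step count, and rendering characters are arbitrary choices made just for display.

```python
# A minimal sketch: elementary cellular automaton rule 30.
# The rule number's binary digits encode the new cell value for each
# of the eight possible (left, center, right) neighborhoods.

def rule30_rows(width=63, steps=30):
    cells = [0] * width
    cells[width // 2] = 1          # start from a single black cell in the middle
    rule = 30
    for _ in range(steps):
        yield cells
        cells = [
            (rule >> (cells[(i - 1) % width] * 4
                      + cells[i] * 2
                      + cells[(i + 1) % width])) & 1
            for i in range(width)
        ]

for row in rule30_rows():
    print("".join("#" if c else "." for c in row))
```

Running this just prints rows of # and . characters; the point is how irregular the resulting triangle looks, given that the entire rule fits in a single byte.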
I might say that sometimes our notions of complexity end up being very close to randomness, and sometimes not. Typically, randomness is characterized by our inability to predict or compress the data associated with something. But for some purposes, perfect randomness may seem to us quite “simple”; after all, it’s easy to make many kinds of statistical predictions about it. In that case, we tend to say that things are “truly complex” when the actual features we care about are ones we can’t predict or compress.
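One crude way to see the "can't compress" intuition is sketched below, using Python's zlib purely as an illustration: a general-purpose compressor shrinks regular data dramatically but gains essentially nothing on data in which it finds no patterns. The 4 KB sample size and the use of os.urandom as a stand-in for "patternless" data are assumptions made only for this example.

```python
# A rough proxy for the idea that randomness resists compression.

import os
import zlib

def compression_ratio(data: bytes) -> float:
    # compressed size over original size: near 0 means highly compressible,
    # near (or above) 1 means the compressor found essentially no structure
    return len(zlib.compress(data, 9)) / len(data)

regular = bytes(range(256)) * 16     # 4 KB with an obvious repeating structure
patternless = os.urandom(4096)       # 4 KB from the operating system's random source

print("regular    :", round(compression_ratio(regular), 3))     # far below 1
print("patternless:", round(compression_ratio(patternless), 3)) # about 1
```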
This can be an interesting distinction—but when it comes to cellular automata or other systems in the computational universe, it tends not to be particularly critical. It tends to be more about different models of the observer—or different characterizations of what one is measuring about a system—than about the fundamental capabilities of the system itself.