“Stephen Wolfram,” in Carlos Gershenson (ed.), Complexity: 5 Questions, Automatic Press / VIP, 12 November 2008, ch. 24, pp. 131–134.
1. Why did you begin working with complex systems?
It’s a slightly complex story. I started working in physics when I was in my early teens. Mostly I worked on particle physics, but I also thought a lot about the foundations of thermodynamics and statistical physics. And around 1978 I got very interested in the question of how complex structure arises in the universe—from galaxies on down.
Soon thereafter, I became very involved in computer language design—creating a precursor of what is now Mathematica—and was very struck by the process of going from the simple primitives in a good language to all of the rich and complex things that can be created from them.
In 1981 I felt like taking a little break from my activities in physics, computing, and starting a company. I decided to do something “fun”. I thought I would look back at my old interests in structure formation. I realized that there was the same central question in lots and lots of fields: given fairly simple underlying components, how does such-and-such a complex structure or phenomenon arise?
I decided to try to tackle that question—as kind of an abstract scientific question. I think what I did was very informed by my experience in creating computer languages. I tried to find the very simplest possible primitives—and see what happened with those. I ended up studying cellular automata, and, using those, discovered what I thought were some pretty fundamental facts about how complexity can arise.
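For concreteness, here is a minimal Python sketch of the kind of system involved: an elementary cellular automaton, using rule 30, one of the rules from those studies. The grid width, step count, wraparound boundaries, and rendering characters are illustrative choices, not details from the interview.

```python
# Minimal sketch of an elementary cellular automaton (rule 30, one of
# the rules Wolfram studied). Each cell's next value depends only on
# itself and its two neighbors; the rule number encodes the outputs
# for the 8 possible 3-cell neighborhoods.

RULE = 30

def step(cells, rule=RULE):
    """One update of an elementary CA (wraparound boundaries, an
    illustrative choice)."""
    n = len(cells)
    return [
        (rule >> (cells[i - 1] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single black cell: a maximally simple initial condition
# that nevertheless produces an intricate, random-looking pattern.
width, steps = 79, 40  # arbitrary illustrative sizes
row = [0] * width
row[width // 2] = 1
for _ in range(steps):
    print("".join("#" if c else " " for c in row))
    row = step(row)
```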
2. How would you define complexity?
Formal definitions can get all tied up in knots—just like formal definitions of almost anything fundamental: life, energy, mathematics, etc. But the intuitive notion is fairly clear: things seem complex if we don’t have a simple way to describe them.
The remarkable scientific fact is that there may be a simple underlying rule for something—even though the thing itself seems to us complex. I found this very clearly with simple cellular automata. And I’ve found it since with practically every kind of system I can define. And although they weren’t really recognized as such, examples of this had been seen in mathematics for thousands of years: even though their definitions are simple, the digits of things like √2 or π, once produced, seem completely random.
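As a small sketch of that point, the rule defining √2 is trivial, yet the digits it yields pass a first eyeball test of randomness: each digit 0 through 9 turns up at roughly the same frequency. The digit count below is an arbitrary illustrative choice.

```python
# Sketch: generate decimal digits of sqrt(2) by exact integer
# arithmetic, then tally how uniformly the digits are distributed.
from math import isqrt
from collections import Counter

n = 1000  # number of digits to generate (arbitrary choice)
digits = str(isqrt(2 * 10 ** (2 * n)))  # floor(sqrt(2) * 10**n)

print(digits[0] + "." + digits[1:40] + "...")  # 1.414213562...
print(Counter(digits))  # each digit appears roughly n/10 times
```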
I might say that sometimes our notions of complexity end up being very close to randomness, and sometimes not. Typically, randomness is characterized by our inability to predict or compress the data associated with something. But for some purposes, perfect randomness may seem to us quite “simple”; after all, it’s easy to make many kinds of statistical predictions about it. In that case, we tend to say that things are “truly complex” when the actual features we care about are ones we can’t predict or compress.
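A common operational proxy for that compressibility criterion, sketched below, is to run data through a general-purpose compressor and compare sizes; the particular inputs and the use of zlib are illustrative assumptions, not anything from the interview.

```python
# Sketch: "hard to compress" as an operational proxy for randomness.
# A highly regular byte string shrinks dramatically under a
# general-purpose compressor; high-entropy bytes barely shrink.
import os
import zlib

samples = {
    "regular": b"ab" * 5000,       # a simple repeating pattern
    "random": os.urandom(10_000),  # high-entropy bytes from the OS
}
for name, data in samples.items():
    ratio = len(zlib.compress(data, 9)) / len(data)
    print(f"{name}: compressed to {ratio:.0%} of original size")
```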
This can be an interesting distinction—but when it comes to cellular automata or other systems in the computational universe, it tends not to be particularly critical. It tends to be more about different models of the observer—or different characterizations of what one is measuring about a system—than about the fundamental capabilities of the system itself.
3. What is your favorite aspect/concept of complexity?
That it’s so easy to find in the computational universe.
One used to think that to make something complex, one would have to go to a lot of trouble. That one would have to come up with all sorts of complicated rules, and so on. But what we’ve found by just sampling the universe of simple programs is that nothing like that is the case. Instead, it’s really very easy to get complexity. It’s just that our existing science and mathematics developed in such a way that we avoided looking at it.
The ubiquity of complexity has tremendous consequences for the future of science, technology, and in a sense our whole world view.
4. In your opinion, what is the most problematic aspect/concept of complexity?
In the early 1980s I was very excited about what I’d discovered about the origins of complexity, and I realized there was a whole “science of complexity” that could be built. I made quite an effort to promote “complex systems research” (I would have immediately called it “complexity theory”, but wanted to avoid confusion with the field of theoretical computer science that was then using that name).
But it’s always a challenge to inject new ideas and methods. People liked the concept of complexity that I’d outlined, and increasingly used it as a label. But I was a bit disappointed that the basic science didn’t seem to be advancing. It seemed like people were just taking whatever methods they already knew, and applying them to different systems (usually with rather little success), and saying they were studying “complexity”.
It seemed to me that to really study the core phenomena—the true basic science of complexity—one needed a new kind of science. So I ended up spending a decade filling in that vision in my book A New Kind of Science. I’m happy to say that since my book appeared, there’s been an increasingly good understanding of the new kind of basic science that can be done. There’s been more and more “pure NKS” done—that gives us great raw material to study both the basic phenomenon of complexity, and its applications in lots of fields.
5. How do you see the future of complexity? (including obstacles, dangers, promises, and relations with other areas)
It’s already underway… but in the years and decades to come we’re going to see a fundamental change in the approach to both science and technology. We’re going to see much simpler underlying systems and rules, with much more complex behavior, all over the place.
Sometimes we’re going to see “off the shelf” systems being used—specific systems that have already been studied in the basic science that’s been done. And often we’re going to see systems being used that were found “on demand” by doing explicit searches of the computational universe.
In science, our explorations of the computational universe have greatly expanded the range of models available for us to use. And we’ve realized that the rich, complex behavior we see can potentially be generated by models simple enough that we can realistically find them by explicit search of the computational universe.
In technology, we’re used to the standard approach to engineering: to the idea that humans have to create systems one step at a time, in a sense always understanding each step. What we’ve now realized is that it’s possible to find great technology just by “mining” the computational universe. There are lots and lots of systems out there—often defined by very simple programs—that we can see do very rich and complex things.
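As a toy version of such a search, the sketch below enumerates all 256 elementary cellular automaton rules and keeps those whose center column, grown from a single cell, settles into no short cycle. The “looks complex” criterion here is a crude illustrative stand-in, not a criterion taken from any actual search.

```python
# Toy "mining" run over a tiny computational universe: all 256
# elementary CA rules, filtered for random-looking center columns.

def center_column(rule, width=101, steps=200):
    """Evolve a rule from a single cell; return its center column."""
    row = [0] * width
    row[width // 2] = 1
    col = []
    for _ in range(steps):
        col.append(row[width // 2])
        row = [
            (rule >> (row[i - 1] * 4 + row[i] * 2 + row[(i + 1) % width])) & 1
            for i in range(width)
        ]
    return col

def looks_complex(col, max_period=20):
    """Crude illustrative criterion: no exact period up to max_period
    in the second half of the column (first half treated as transient)."""
    tail = col[len(col) // 2:]
    return not any(
        all(tail[i] == tail[i + p] for i in range(len(tail) - p))
        for p in range(1, max_period + 1)
    )

hits = [r for r in range(256) if looks_complex(center_column(r))]
print(hits)  # rule 30 should be among the survivors
```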
In the past, we’ve been used to creating some of our technology just by picking up things in nature—say magnets or cotton or liquid crystals—then figuring out how to use them for our purposes. The same is possible on a much larger scale with the abstract systems in the computational universe. For example, in building Mathematica, we’re increasingly using algorithms that were “mined” from the computational universe, rather than being explicitly constructed step-by-step by a human.
I think we’re going to see a huge explosion of technology “mined” from the computational universe. It’s all going to depend on the crucial fundamental scientific fact that even very simple programs can make complexity. And the result is that in time, “complexity” will be all around us—not only in nature, but also in the technology we create.
When I started working on complexity nearly 30 years ago, the intuition was that complexity was a rare and difficult thing to get. In the future, everyone will be so exposed from an early age to technology that’s based on complexity that all those ideas that seem so hard for people to grasp now will become absolutely commonplace—and taken for granted.