“Complex systems behave unpredictably” – Gupta’s law of complex systems
by Vinay Gupta • June 26, 2010 • Everything Else • 5 Comments
(this post was originally called “a little intellectual history.”)
A thing of any importance is discovered at least three times.
In the 1980s I was a bright young thing. Several of my teachers had identified me as a once-in-a-lifetime student, but I had no idea what I wanted to do. Physics was the obvious course, but something niggled: Feynman, Newton, Einstein and all the rest. Was I really going to be able to add anything?
It crystallized when I was about 14: to get anything done in physics, you had to be extremely smart, extremely hard working and extremely lucky. All at the same time. I did not like those odds.
Next door, in computer science, any one of the three would do.
As things happened, a combination of personal and financial problems washed me out of Edinburgh University’s computer science department, and from there into an unfortunate start-up. But that is a story for another day.
When I was a teenager, in the late 1980s, we had the first flush of scientific discoveries off the back of cheaper and cheaper computing. Danny Hillis (with help from Richard Feynman) had built the Connection Machine, and James Gleick had published Chaos: Making a New Science.
I ate it all up. I read, and I coded, and I came to understand. The landmark work for me was Rumelhart & McClelland’s Parallel Distributed Processing, which clearly laid out an agenda – a programmable agenda for making minds.
But then I’d drifted. I wasn’t going to be an academic. It wasn’t even certain I was going to survive. I took up meditation in a serious way to deal with the emotional issues left by having two mentally ill parents, then continued to approach the question of consciousness in older, more traditional ways.
What I took from this period, though, was something remarkably simple which I’m amazed to discover is still being discovered: complex systems behave unpredictably.
Consider the double pendulum.
This simple device is badly behaved – chaotic – once given a good hard push. You know that it won’t go outside of the radius of the arms, but beyond that, past very low-energy pushes, the system can be just about anywhere at just about any time.
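That sensitivity is easy to see numerically. Here is a minimal sketch (not from the original post): the standard equations of motion for a double pendulum with equal masses and arm lengths, integrated with RK4 in pure Python. Two runs that start a millionth of a radian apart end up in completely different places, while the tip never leaves the circle the arms allow. All function names and parameter choices here are mine, for illustration.

```python
import math

def deriv(state, g=9.81, m1=1.0, m2=1.0, l1=1.0, l2=1.0):
    """Angular accelerations for a double pendulum (standard textbook form)."""
    t1, w1, t2, w2 = state  # angles and angular velocities of the two arms
    d = t1 - t2
    den = 2 * m1 + m2 - m2 * math.cos(2 * d)
    a1 = (-g * (2 * m1 + m2) * math.sin(t1)
          - m2 * g * math.sin(t1 - 2 * t2)
          - 2 * math.sin(d) * m2 * (w2 * w2 * l2 + w1 * w1 * l1 * math.cos(d))
          ) / (l1 * den)
    a2 = (2 * math.sin(d) * (w1 * w1 * l1 * (m1 + m2)
          + g * (m1 + m2) * math.cos(t1)
          + w2 * w2 * l2 * m2 * math.cos(d))) / (l2 * den)
    return (w1, a1, w2, a2)

def rk4_step(state, dt):
    """One classical Runge-Kutta step."""
    def add(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = deriv(state)
    k2 = deriv(add(state, k1, dt / 2))
    k3 = deriv(add(state, k2, dt / 2))
    k4 = deriv(add(state, k3, dt))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + e)
                 for s, a, b, c, e in zip(state, k1, k2, k3, k4))

def simulate(theta1, theta2, t_end=30.0, dt=0.001):
    """Release both arms from rest at the given angles and integrate."""
    state = (theta1, 0.0, theta2, 0.0)
    for _ in range(int(t_end / dt)):
        state = rk4_step(state, dt)
    return state

# A good hard push: release both arms horizontally (a known chaotic regime),
# then again with the first angle perturbed by one millionth of a radian.
a = simulate(math.pi / 2, math.pi / 2)
b = simulate(math.pi / 2 + 1e-6, math.pi / 2)
print("divergence after 30 s:", abs(a[0] - b[0]))
```

The bound on position is geometric – the tip can never be further from the pivot than the two arm lengths combined – but within that circle, the tiny initial difference is amplified until the two trajectories bear no resemblance to each other.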
This kind of thing is all over the place in business, government and nature. You see it in predator/prey population dynamics, you see it in weather – vague outlines of the limits of the system, and total unpredictability in the face of a good, solid push.
In the 20 years since I stopped seriously studying connectionism we have not made a brain. And complex systems still behave unpredictably.
The Hexayurt Project, and everything around it, are informed by my early study of complexity science. The rigorous, absolute insistence on simplicity, on loose coupling, on local control, on enhanced improvisation – all of that comes from the simple lesson: complex systems behave unpredictably.
If you want to be able to keep people alive, do the support services with simple systems, not complex ones. Avoid 60-party coalitions. Avoid “clusters” of whatever kind. Simple, direct, authoritarian control with explicit chains of command on one side (military) and completely decentralized, agent-driven response on the other (civilians).
Anybody who knows complexity science at all can look at the org chart for a typical humanitarian operation and predict failure. Groups like the Center for Complex Operations exist to address this problem by taking grant funding and hoping it will go away.
What everybody hopes is that you will one day get the kind of sleek, cooperative efficiency that you get in birds flocking and fish schooling – unity of effort without unity of command as a wise man once put it. Here’s the fundamental problem. Can you guess what I’m about to say?
Complex systems behave unpredictably.
It all works, just fine, until something goes wrong. The birds and fish got that good at acting as one against a background that changed little in hundreds of millions of years. The species are incredibly stable, and the cooperation was evolved at a hardwired level. You take fifty fish of different species and try to teach them how to school. How long is it going to take?
Human beings are not regular agents. Organizations display a huge range of distinct kinds of response. Putting these things in a bucket and stirring them with a stick is not going to produce sleek, efficient action. It’s going to produce what we see in practice, a honking great mess.
The economy, and complex societies, work because they are holding close to some very old human invariants – trade, pyramidal organizations, social circles. We’re pushing our ability to collaborate as Naked Apes to the limit as we introduce tier after tier of social innovation and new technology, but it all still basically holds together on human nature.
But “complex systems behave unpredictably” still holds. Sudden collapses in systems complexity occur at a level we can’t comprehend as human beings – nobody can really tell you for sure what happened in 1928 to 1932 – only that stresses accumulated, and then…
Life support systems – food and water and power and so on – need to be as simple as we can make them. They need to rely on old, stable systems if they are going to be low risk. There is a lot to be said for a solar panel sitting on a roof turning out energy within fairly predictable bounds day-in, day-out. I trust that in a way I am never going to trust the power grid, no matter how well maintained.
An addendum. The economy, understood from a connectionist perspective, is a great big decentralized “brain.” It exists to figure out what people want.
That’s why we look at economists in horror as they talk about stabilizing the markets.
It’s an interesting one; as you say, it is repeatedly discovered. This is probably because it is easy to forget. In mathematics we want results that model, predict and control. We ignore the paths that lead to a lot of “I don’t know”. Yet there is a fascinating question out there: can we start to describe what we can and cannot know?
By looking at the negative question – when will our ability to control and predict stop being stable? – we can work out how to design to stay stable (or to fail to a stable state).
After all, complex systems are not always unpredictable. My glass of water is very complex – unimaginable numbers of molecules forming complex, weakly connected chains that then break apart – yet at the level I am interested in it, it is predictable.
You seem to have a self-confidence and seeming certainty that reminds me of Stafford Beer. Is that a fair reflection of how you feel? I’d be interested to know.
Pingback: The Bucky-Gandhi Design Institution › More on the Goat Rodeo Index
Dan: to (mis)quote Roy Castle, meditation is what you need 😉