Julian pointed me to Delix & Dum’s roadmap for research into complexity. They define the field this way:
“Broadly speaking, complex systems consist of a large number of heterogeneous highly interacting components (parts, agents, humans etc.). These interactions result in highly non-linear behavior and these systems often evolve, adapt, and exhibit learning behaviors.”
This description points toward two areas where I suspect our intuition fails: software, and exponential growth.
- Large pieces of software consist of many interacting components, and failure modes are often non-linear. (See my Software project management: an Innately Hard Problem.)
- Kurzweil argues that we are unable to grasp the implications of exponential growth because our intuitions lead us to linear extrapolations. Exponential growth is non-linear despite being monotonic, and is in fact one of the simplest cases of non-linear behavior (see the sketch after this list).
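To make this concrete, here is a minimal numerical sketch (my own toy example with arbitrary numbers, not anything from Kurzweil) of how a linear extrapolation falls further and further behind a doubling process:

```python
# Toy comparison of linear extrapolation against exponential growth.
# The doubling series and forecast horizon are arbitrary illustrative choices.

def linear_forecast(history, steps):
    """Extrapolate forward using the last observed increment."""
    slope = history[-1] - history[-2]
    return [history[-1] + slope * k for k in range(1, steps + 1)]

actual = [2 ** t for t in range(12)]       # doubles every period: 1, 2, 4, ...
forecast = linear_forecast(actual[:6], 6)  # forecast periods 6-11 from periods 0-5

for t, (f, a) in enumerate(zip(forecast, actual[6:]), start=6):
    print(f"t={t}: linear forecast {f}, actual {a}, shortfall {a - f}")
```

The shortfall itself grows exponentially, which is why the linear intuition doesn't merely err but errs worse and worse over time.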
Kyril referred me to Ko Kuwabara’s Linux: A Bazaar at the Edge of Chaos, which treats the Open Source development process as a complex adaptive system. He analyses the development of Linux as an evolutionary process, and claims to see self-organization at work.
Two questions come to mind:
- Is complexity “human-hard”, that is, is it hard for humans to think about?
- If so, why?
I’ve argued that non-linear equations are certainly “plain ol’ hard”, that is, difficult to solve whether one is human or not, because their solutions are in general exquisitely sensitive to initial conditions, leading to lousy predictions.
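Here is a toy illustration of that sensitivity (my example, not part of the original argument), using the logistic map, a standard minimal chaotic system. Two trajectories that begin a mere 10⁻¹⁰ apart disagree at order one within a few dozen iterations:

```python
# Sensitive dependence on initial conditions in the logistic map,
# x -> r * x * (1 - x), in its chaotic regime (r = 4.0).

def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-10)  # perturb the tenth decimal place

for t in (0, 10, 20, 30, 40, 50):
    print(f"step {t:2d}: |difference| = {abs(a[t] - b[t]):.3e}")
```

Any error in measuring the initial state, however small, is eventually amplified until the prediction is worthless.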
I suspect complex systems are hard for humans to grasp, not least because the study of such systems is a relatively young research field; if it were an easy topic, it would’ve been addressed much earlier. As a practical matter, the difficulty most of us have thinking deeply and broadly about social problems suggests that even something we’ve evolved to be good at (interpersonal dynamics) becomes difficult at large scale.
The difficulty comes in part from the large number of variables in play in complex systems. It has been known for some time (cf. George Miller’s 1956 paper The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information) that we have an effective channel capacity of about 2.5 bits across a range of cognitive and perceptual tasks. Thinking fluently about complex systems would require us to hold a much larger number of concepts in short-term memory.
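For a rough sense of scale (my arithmetic, not a figure from Miller’s paper): a channel capacity of C bits corresponds to reliably distinguishing about 2^C alternatives, which puts 2.5 bits within the famous range of seven, plus or minus two:

```python
# Back-of-the-envelope: C bits of channel capacity distinguishes ~2**C categories.
C = 2.5
print(f"2**{C} = {2 ** C:.1f} distinguishable categories")  # ~5.7, within 7 +/- 2
```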
A further complication is that these systems require us to think about processes rather than things. According to Carey and Markman [1],
“Adults represent the world in terms of enduring solid physical objects. People, rocks, and tables are all conceptualized in the same ways with respect to certain core principles: all physical objects are subject to the constraints that one of them cannot be two places at the same time, and that two of them cannot coincide in space and time. Objects do not come into and out of existence, and they occupy unique points in space and time. These aspects of our representation of the world are certainly deeply entrenched in the adult conceptual system.”
The interactions among large numbers of components certainly seem quite dissimilar from the concrete, enduring objects to which the foundations of our conceptual system are tailored.
----------
[1] Susan Carey and Ellen M. Markman, Cognitive Development, Ch. 5, p. 203 in the survey volume Cognitive Science, Bly and Rumelhart (eds.)