I don’t know.
You don’t know either, even if you’re a lawyer or scholar who’s written confident diagnoses of, and persuasive curative prescriptions for, various policy problems.
If you’re a regulator, you know you don’t know.
Decision makers have always operated in a world of complexity, contradiction and confusion: you never have all the information you’d like to make a decision, and the data you do have are often inconsistent. It is not clear what is happening, and it is not clear what to do about it. What’s most striking about the last century is that policy makers seem to have been persuaded by economists that they have more control, and more insight, than they used to.
We have less control over the world than we’d like: we would like to prevent unwanted situations, but can’t; and we would like favorable circumstances to continue, but they don’t.
There is a small part of the world where the will has effective control; for the rest, one has to deal with necessity, i.e. circumstances that arise whether you will or no. Science and technology since the Enlightenment have dramatically widened our scope of control; economics has piggy-backed on the success of classical physics to make large claims about its ability to explain and manage society. However, this has had the unfortunate consequence that we no longer feel comfortable accepting necessity. If a situation is avoidable – say, the moment of death postponed through medical intervention – then it becomes tempting to think that when it does come, someone or something can be held responsible.
As Genevieve Lloyd tells it (and I understand it) in Providence Lost (2009), our culture opted to follow Descartes in his framing of free will: we should do the best we can, and leave the rest to divine Providence, which provides a comforting bound to our responsibilities. In the absence of providence, however, we have no guidance on how to deal with what lies beyond our control. As Lloyd puts it, “the fate of the Cartesian will has been to outlive the model of providence that made it emotionally viable.” She argues that Spinoza’s alternative account of free will, built on the acceptance of necessity, is better suited to our time; there is freedom in how we shape our lives in the face of necessity, and a providential deity is not required.
Our Cartesian heritage can be seen in the response to the financial collapse of recent years: someone or something had to be responsible. If only X had done Y rather than Z… but an equally plausible account is that crises and collapse are inevitable; it was only a matter of time.
I submit that the best response to an uncertain and ever-changing world is to accept it and aim at resilience rather than efficiency. Any diagnosis and prescription should always be provisional; it should be made in the knowledge that it will have to be changed. Using efficiency as the measure of a solution, as neoclassical economics might, is the mark of the neo-Cartesian mind: it assumes that we have enough knowledge of the entire system to find an optimum solution, and that we have enough control to effectuate it. In fact, an optimum probably doesn’t exist; if it does exist, it’s probably unstable; and even if a stable solution exists, we have so little control over the system that we can’t implement it.
The best conceptual framework I’ve found for analyzing problems in this way is the complex systems view, and the most helpful instantiation is the approach to managing ecosystems encapsulated in C. S. Holling’s “adaptive cycle” thinking. (See e.g. Ten Conclusions from the Resilience Project.) The adaptive cycle consists of four stages: (1) exploitation of new opportunities following a disturbance; (2) conservation, the slow accumulation of capital and system richness; (3) release of accumulation through a crisis event – cf. Schumpeter’s creative destruction; and (4) reorganization, in which the groundwork for the next round is laid.
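To make the cycle a little more concrete, here is a deliberately crude sketch of my own – not Holling’s model, and every parameter is invented purely for illustration. A single stock of “capital” grows quickly during exploitation, accumulates slowly while rigidity builds during conservation, is destroyed in a release event whose likelihood rises with that rigidity, and then reorganizes noisily before the next round begins.

```python
import random

# A toy caricature of Holling's adaptive cycle: one "capital" stock moves
# through exploitation, conservation, release and reorganization.
# The parameters are made up; they only illustrate the qualitative shape.

def adaptive_cycle(steps=200, seed=1):
    random.seed(seed)
    capital, rigidity, phase = 1.0, 0.0, "exploitation"
    history = []
    for t in range(steps):
        if phase == "exploitation":      # rapid growth after a disturbance
            capital *= 1.10
            if capital > 5.0:
                phase = "conservation"
        elif phase == "conservation":    # slow accumulation, rising rigidity
            capital *= 1.01
            rigidity += 0.02
            # the more rigid the system, the likelier a crisis
            if random.random() < rigidity * 0.05:
                phase = "release"
        elif phase == "release":         # crisis: accumulated capital is lost
            capital *= 0.3
            rigidity = 0.0
            phase = "reorganization"
        else:                            # reorganization: noisy groundwork for the next round
            capital = max(0.5, capital + random.uniform(-0.2, 0.3))
            if random.random() < 0.3:
                phase = "exploitation"
        history.append((t, phase, round(capital, 2)))
    return history

for t, phase, capital in adaptive_cycle()[::20]:
    print(t, phase, capital)
```

Run it a few times with different seeds and the point of the metaphor appears: the timing of the release phase is unpredictable, but its eventual arrival is not.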
Two techniques seem to be particularly helpful in applying this approach to governance: simulation and common law. Simulation and modeling exploit the computing power we now have to explore the kinds of outcomes that may be possible given a starting point and alternative strategies; they give one a feel for how resilient or fragile different proposed solutions may be. Simulation may also help in understanding outcomes; for example, Ofcom uses modeling of radio signal propagation, rather than measurement, to determine whether licensees in its Spectrum Usage Rights regime are guilty of harmful interference with other licensees. (See e.g. William Webb (2009), “Licensing Spectrum: A discussion of the different approaches to setting spectrum licensing terms”.)
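For flavour, here is a minimal sketch of what propagation modeling looks like at its very simplest. This is not Ofcom’s methodology: the transmitter power, antenna gains, frequency, distance and interference threshold below are all hypothetical, and real models account for terrain, clutter and fading rather than the idealized free-space formula used here.

```python
import math

def free_space_path_loss_db(distance_km, freq_mhz):
    """Standard free-space path loss in dB (distance in km, frequency in MHz)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

def received_power_dbm(tx_power_dbm, tx_gain_dbi, rx_gain_dbi, distance_km, freq_mhz):
    """Link budget under an idealized free-space assumption (no terrain, clutter or fading)."""
    return tx_power_dbm + tx_gain_dbi + rx_gain_dbi - free_space_path_loss_db(distance_km, freq_mhz)

# Hypothetical scenario: a 2 GHz transmitter at 30 dBm, 3 km from a victim
# receiver whose interference threshold we arbitrarily set at -95 dBm.
interference = received_power_dbm(tx_power_dbm=30, tx_gain_dbi=6, rx_gain_dbi=0,
                                  distance_km=3.0, freq_mhz=2000)
threshold_dbm = -95.0
verdict = "exceeds" if interference > threshold_dbm else "stays within"
print(f"Interference at victim receiver: {interference:.1f} dBm "
      f"({verdict} the {threshold_dbm} dBm threshold)")
```

The real regulatory exercise layers statistical propagation models and many receiver locations on top of this kind of link budget, but the logic – predict the power arriving at a victim receiver and compare it with an agreed threshold – is the same.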
A common law approach helps at the other end of the process: Jonathan Sallet has argued persuasively that common-law reasoning is advantageous because it is a good way of creating innovative public policies, and is a sensible method of adapting government oversight to changing technological and economic conditions.
But I could be wrong…
Update 12/28/2009: See the fascinating comments from Rich Thanki, below. He takes two salient lessons from complexity theory: avoid monoculture, and develop rules of thumb. He also provides more than the usual quick Keynes quote about "slaves of some defunct economist."
1 comment:
As someone who's often been tasked with giving the reading on the scales on either side of a policy decision, I agree. What I do is next to sophistry :)
However, if complex systems science is the most insightful analysis of where we are, then maybe it provides salient lessons on how we might regulate.
The first could be to avoid monoculture (essentially your resilience point) and try many different things. A priori simulations (especially in an extremely sensitive system) will only reveal so much; however, by enacting trials of a number of potential approaches, a regulator may be able to evolve a good strategy. I was struck by the cost containment experimentation that's in the Senate Healthcare Reform Bill, launching 4 or 5 different trials of methods to cut costs in Medicare.
The second is maybe to adopt the philosophy behind the 'parallel terraced scan' that Hofstadter used in genetic algorithms, to develop regulatory rules of thumb. My starter for ten would be:
"Do more of what works, do less of what doesn't, and try a variety of things that haven't been tried before and keep an eye on them. Oh and BE CAREFUL."
This is what's always amazed me about the unlicensed spectrum debate. The FCC's 1985 decision was a punt, but all the evidence suggests that it's worked rather marvellously. However, regulators seem unbelievably wary of pushing ahead with more of this approach, even though licensed spectrum sits there largely empty. Meanwhile, the spectrum markets ideas keep being pushed, even though evidence of their success is slim. A testament to the persuasive powers of the Chicago School, I guess.
What did Keynes say again? "The ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood. Indeed the world is ruled by little else. Practical men, who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist. Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back."