Thursday, September 27, 2007

Lobbying Pays

The September 17th BusinessWeek has a fascinating look, "Inside The Hidden World Of Earmarks." It tells the story of how the Navy got itself an executive jet that the Pentagon hadn't asked Congress for. Gulfstream lobbied heavily, and the Navy got special funding, known as an earmark. The Georgia Congressional delegation, and Senator Saxby Chambliss in particular, were very helpful - no surprise, since the Gulfstream is built in that state.

BusinessWeek concludes that on average, companies generated roughly $28 in earmark revenue for every dollar they spent lobbying. The top twenty players in this game took in $100 or more for every dollar spent. For context, the magazine offers this factoid: companies in the Standard & Poor's 500-stock index brought in just $17.52 in revenue for every dollar of capital expenditure in 2006.

In Gulfstream's case, that exec jet deal was worth $53 million. It was just one of 29 earmarks valued at $169 million given to General Dynamics (its parent) or its subsidiaries that year - a nifty 30:1 return, given that the company spent only $5.7 million on lobbying in 2004.
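A quick back-of-the-envelope check of those ratios, using only the figures quoted above (the rounding is mine):

```python
# Sanity-check the BusinessWeek ratios (figures from the article).
earmarks = 169e6   # General Dynamics earmarks, $169 million
lobbying = 5.7e6   # lobbying spend, $5.7 million (2004)

roi = earmarks / lobbying
print(f"Earmark dollars per lobbying dollar: {roi:.1f}")  # ~29.6, i.e. roughly 30:1

# Compare with the S&P 500 benchmark the magazine cites:
sp500_revenue_per_capex_dollar = 17.52
print(f"Lobbying outperforms capex by {roi / sp500_revenue_per_capex_dollar:.1f}x")
```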

Wednesday, September 26, 2007

How statutes learn

It’s a truism that rewriting telecoms law is so hard that the US has only managed to do it twice in the last hundred years. But somehow Congress and the regulatory agencies stay busy, and stuff changes around the edges.

I was suddenly reminded of Stewart Brand’s wonderful book “How Buildings Learn”. (If you have any interest in either architecture or history, I strongly recommend it.) He espouses an onion-layer model of buildings. Quoting from the book:

Site - This is the geographical setting, the urban location, and the legally defined lot, whose boundaries and context outlast generations of ephemeral buildings. "Site is eternal."

Structure - The foundation and load-bearing elements are perilous and expensive to change, so people don't. These are the building. Structural life ranges from 30 to 300 years (though few buildings make it past 60, for other reasons).

Skin - Exterior surfaces now change every 20 years or so, to keep up with fashion or technology, or for wholesale repair. Recent focus on energy costs has led to re-engineered Skins that are air-tight and better-insulated.

Services - These are the working guts of a building: communications wiring, electrical wiring, plumbing, sprinkler system, HVAC (heating, ventilating, and air conditioning), and moving parts like elevators and escalators. They wear out or obsolesce every 7 to 15 years. Many buildings are demolished early if their outdated systems are too deeply embedded to replace easily.

Space Plan - The interior layout: where walls, ceilings, floors, and doors go. Turbulent commercial space can change every 3 years or so; exceptionally quiet homes might wait 30 years.

Stuff - Chairs, desks, phones, pictures; kitchen appliances, lamps, hairbrushes; all the things that twitch around daily to monthly. Furniture is called mobilia in Italian for good reason.

Brand argues that because the different layers have different rates of change, a building is always tearing itself apart. If you want to build an adaptive structure, you have to allow slippage between the differently-paced systems. If you don’t, the slow systems block the flow of the quick ones, and the quick ones tear up the slow ones with their constant change. For example, timber-frame buildings are good because they separate Structure, Skin and Services; “slab-on-grade” (pouring concrete on the ground for a quick foundation) is bad because pipes are buried and inaccessible, and there’s no basement space for storage, expansion, and maintenance functions.

He quotes the architectural theorist Christopher Alexander: “What does it take to build something so that it’s really easy to make comfortable little modifications in a way that once you’ve made them, they feel integral with the nature and structure of what’s already there? You want to be able to mess around with it and progressively change it to bring it into an adapted state with yourself, your family, the climate, whatever. This kind of adaptation is a continuous process of gradually taking care.”

There seems to be an analogy to policy making. Some things are almost eternal, just like Site: the regulatory imperatives like taxation, public safety, and economic growth. Legislative Acts are like the slowly-changing Structure and Skin. The trade-offs and compromises they represent are hard to build, and so they’re slow to change. Then we get to regulatory rulings made within the context of legislation, the working guts of applying laws to changing circumstances and fine-tuning the details - these are like Services and Space Plan, which change every 3 to 15 years. Finally, like the Stuff in homes that moves around all the time, we have the day-to-day decisions made by bureaucrats applying the regulations.

This kind of model also gives a way to ask, restating Christopher Alexander slightly, “What does it take to craft legislation so that it’s really easy to make comfortable little modifications in a way that once you’ve made them, they feel integral with the nature and structure of what’s already there?”

I imagine that DC operatives do this instinctively – but perhaps an architectural metaphor could make the process even more efficient.

Thursday, September 20, 2007

Why we need stories

One might be able to explain Nassim Taleb's “narrative fallacy” (see The Black Swan) partly by invoking Patrick Leman’s “major event, major cause” reasoning. Leman describes it thus in The lure of the conspiracy theory (New Scientist, 14 Jul 07):

“Essentially, people often assume that an event with substantial, significant or wide-ranging consequences is likely to have been caused by something substantial, significant or wide-ranging.

“I gave volunteers variations of a newspaper story describing an assassination attempt on a fictitious president. Those who were given the version where the president died were significantly more likely to attribute the event to a conspiracy than those who read the one where the president survived, even though all other aspects of the story were equivalent.

“To appreciate why this form of reasoning is seductive, consider the alternative: major events having minor or mundane causes - for example, the assassination of a president by a single, possibly mentally unstable, gunman, or the death of a princess because of a drunk driver. This presents us with a rather chaotic and unpredictable relationship between cause and effect. Instability makes most of us uncomfortable; we prefer to imagine we live in a predictable, safe world, so in a strange way, some conspiracy theories offer us accounts of events that allow us to retain a sense of safety and predictability.”


Taleb’s account of our inclination to narrate is psychological: “It has to do with the effect of order on information storage and retrieval in any system, and it’s worth explaining here because of what I consider the central problems of probability and information theory. The first problem is that information is costly to obtain. . . The second problem is that information is also costly to store . . . Finally, information is costly to manipulate and retrieve.” (The Black Swan, his italics, p. 68)

He goes on to argue that narrative is a way to compress information. I suspect that the compression is related to extracting meaning, not to the raw information. The long-term storage capacity of the brain seems essentially unbounded, but our ability to manipulate variables in short-term memory is very limited, to about four concurrent items. Stories provide a useful chunking mechanism: they’re pre-remembered frames for relationships. There is a relatively limited number of story shapes and archetypal character roles (cf. The Seven Basic Plots) in which cause and effect is carefully ordered and given meaning.

Taleb comes even closer to Leman when he connects the narrative fallacy with the desire to reduce randomness: “We, the members of the human variety of primates, have a hunger for rules because we need to reduce the dimension of matters so they can get into our heads. Or, rather, sadly, so we can squeeze them into our heads. The more random information is, the greater the dimensionality, and thus the more difficult to summarize. The more you summarize, the more order you put in, the less randomness. Hence the same condition that makes us simplify pushes us to think that the world is less random than it actually is.” (The Black Swan, his italics, p. 69) As Leman says, “we prefer to imagine we live in a predictable, safe world.”
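Taleb's compression point is easy to make concrete with an ordinary compressor: patterned, story-like data squeezes down dramatically, while patternless data barely compresses at all. A minimal sketch (the byte strings are just illustrative):

```python
import random
import zlib

# A repetitive, "storylike" byte string versus patternless noise.
story = b"the hero leaves home, struggles, and returns. " * 220
random.seed(42)
noise = bytes(random.randrange(256) for _ in range(len(story)))

for label, data in (("ordered", story), ("random", noise)):
    ratio = len(zlib.compress(data)) / len(data)
    print(f"{label}: compressed to {ratio:.0%} of original size")

# Typically the ordered text shrinks to a few percent of its size while
# the random bytes stay near 100%: the more random the information, the
# higher its "dimensionality" and the less it can be summarized.
```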

Friday, September 14, 2007

Algorithms Everywhere

The Economist this week writes about the increasing use of algorithms, for everything from book recommendations to running supply chains (Business by numbers, 13 Sep 2007). It suggests that algorithms are now pervasive.

The most powerful algorithms are those that do real-time optimization. They could help UPS recalibrate deliveries on the fly, and reorder airplane departure queues at airports to improve throughput. More down-to-earth applications include sophisticated calculations of consumer preference that end up predicting where to put biscuits on supermarket shelves.
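To make the departure-queue idea concrete, here is a toy sketch (the flight numbers and runway times are invented): when each plane occupies the runway for a different length of time, releasing the shortest jobs first minimizes total waiting, a classic scheduling result. Real airport sequencing adds many more constraints, so this is only a cartoon of what such algorithms do.

```python
# Toy departure-queue reordering: shortest runway-occupancy time first.
planes = {"UA12": 4, "DL7": 2, "AA301": 6, "WN88": 1}  # minutes on runway (invented)

def total_wait(order, times):
    """Sum of minutes each plane spends waiting for those ahead of it."""
    wait = elapsed = 0
    for p in order:
        wait += elapsed
        elapsed += times[p]
    return wait

fifo = list(planes)                         # first come, first served
optimized = sorted(planes, key=planes.get)  # shortest job first

print("FIFO total wait:     ", total_wait(fifo, planes))       # 22 minutes
print("Optimized total wait:", total_wait(optimized, planes))  # 11 minutes
```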

If algorithms are indeed pervasive, then so is their underlying fragility. The fragility is not just due to the risk of software bugs, or vulnerability to hackers; it’s also a consequence of limitations on our ability to conceive of, implement, and manage very large complex systems.

The day-to-day use of these programs shows that they work very well almost all of the time. The occasional difficulty - from Facebook third-party plug-in applications breaking for mysterious reasons to the sub-prime mortgage meltdown - reminds us that the algorithmic underpinnings of our society are not foolproof.

In the same issue, the Economist reviews Ian Ayres’s book Super Crunchers: Why Thinking-by-Numbers Is the New Way to Be Smart, about automated decision making (The death of expertise, 13 Sep 2007). According to their reading of his book, “The sheer quantity of data and the computer power now available make it possible for automated processes to surpass human experts in fields as diverse as rating wines, writing film dialogue and choosing titles for books.” Once computers can do a better job of diagnosing disease, what’s left for the doctor to do? Bank loan officers have already faced this question, and had to find customer relations jobs. I used to worry about the employment implications; I still do, but now I also worry about relying on complex software systems.

Sunday, September 09, 2007

Software: complex vs. complicated

Homer-Dixon’s The Ingenuity Gap helped me realize that perhaps the difference between software and more traditional engineering is that bridges (say) are complicated, while software is complex. I follow the distinction I’ve seen in the systems theory literature that complicated refers to something with many parts, whereas complex refers to unpredictable, emergent behavior. Something complicated may not be complex (e.g. a watch), and a complex system might not be complicated (e.g. a cellular automaton).
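The cellular automaton example is worth pausing on, because it is so easy to demonstrate. Rule 110, a one-dimensional automaton whose entire specification is eight update cases, produces behavior rich enough to have been proved Turing-complete; the sketch below is nothing more than that rule applied repeatedly to a row of cells:

```python
# Rule 110: a system that is not complicated (eight update cases, a few
# lines of code) yet is complex -- its long-run behavior is unpredictable
# and even Turing-complete.
RULE, WIDTH, STEPS = 110, 64, 32

cells = [0] * WIDTH
cells[WIDTH // 2] = 1  # start with a single live cell

for _ in range(STEPS):
    print("".join("#" if c else "." for c in cells))
    # New state of cell i is bit (left*4 + center*2 + right) of RULE.
    cells = [
        (RULE >> (4 * cells[(i - 1) % WIDTH] + 2 * cells[i] + cells[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]
```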

A large piece of code meets the criteria for a complex adaptive system: there are many, densely interconnected parts that affect the behavior of the whole in a non-linear way that cannot be simply understood by looking at the components in isolation. A code module can be tested and debugged individually, but its behavior in the context of all the other components in a large piece of code can only be observed – and even then not truly understood – by observing the behavior of the whole. If software were linear and simply complicated, extensive testing wouldn’t be required after individually debugged modules are combined into a build.
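A tiny illustration of why testing the parts doesn’t certify the whole (a deliberately contrived sketch): each call to the function below is correct in isolation, but two threads running it concurrently interleave their read-modify-write steps, and the combined result is usually wrong in a way no unit test of the module alone would reveal.

```python
import threading

counter = 0

def deposit(times):
    """Correct in isolation: increments the shared counter `times` times."""
    global counter
    for _ in range(times):
        tmp = counter   # read
        tmp += 1        # modify
        counter = tmp   # write -- the three steps are not atomic across threads

threads = [threading.Thread(target=deposit, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Expected 200000; the printed value is typically smaller, and varies
# from run to run. The whole misbehaves even though each part is correct.
print(counter)
```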

Some have compared software to bridges while calling (properly) for better coding practices. Holding software development to the same standards as other facets of critical infrastructure is questionable, however, because a software program is to a bridge as the progress of a cocktail party is to a watch. Both parties and watches have a large number of components that can be characterized individually, but what happens at a given party can only be predicted in rough terms (it’s complex because of the human participants) while a watch’s behavior is deterministic (though it’s complicated).

This challenge in writing software is of the “intrinsically hard” kind. It is independent of human cognition because it will catch you eventually, no matter how clever or dumb you are (once you’re at least smart enough to build complex software systems at all).

Geek’s addendum: Definitions of complexity

Homer-Dixon’s definition of complexity has six elements. (1) Complex systems are made up of a large number of components (and are thus complicated, in the sense above). (2) There is a dense web of causal connections between the parts, which leads to coupling and feedback. (3) The components are interdependent, i.e. removing a piece changes the function of the remainder. (I think this is actually more about resilience than complexity.) (4) Complex systems are open to being affected by events outside their boundaries. (5) They display synergy, i.e. the combined effect of changes to individual components differs in kind from the sum of the individual changes. (6) They exhibit non-linear behavior, in that a change in the system can produce an effect that is disproportionate to the cause.

Sutherland and Van den Heuvel (2002) analyze the case of enterprise applications built using distributed object technology. They point out that such systems have highly unpredictable, non-linear behavior where even minor occurrences might have major implications, and observe that recursively writing higher-level languages supported by lower-level languages, a source of the power of computing, induces emergent behaviors. They cite Wegner (1995) as having shown that interactive systems are not Turing machines: “All interactions in these systems cannot be anticipated because behavior emerges from interaction of system components with the external environment. Such systems can never be fully tested, nor can they be fully specified.” They use Holland’s (1995) synthesis to show how enterprise application integration (EAI) can be understood as a complex adaptive system (CAS).

References

Sutherland, J. and van den Heuvel, W.-J. (2002). “Enterprise application integration encounters complex adaptive systems: a business object perspective.” Proceedings of the 35th Annual Hawaii International Conference on System Sciences (HICSS), 2002.

Wegner, P. (1995). “Interactive Foundations of Object-Based Programming.” IEEE Computer 28(10): 70-72, 1995.

Holland, J. H. (1995). Hidden Order: How Adaptation Builds Complexity. Reading, Mass.: Addison-Wesley, 1995.

Saturday, September 08, 2007

The Ingenuity Gap

Lucas Rizoli pointed me to Thomas Homer-Dixon’s The Ingenuity Gap, which sheds useful light on the hard intangibles problem. Homer-Dixon argues that there’s a growing chasm between society’s rapidly rising need for ingenuity and its inadequate supply. More ingenuity is needed because the complexity, unpredictability, and pace of events in our world, and the severity of global environmental stress, are soaring. We need more ideas to solve these technical and social problems. Defined another way, the ingenuity gap is the distinction “between the difficulty of the problems we face (that is, our requirement for ingenuity) and our delivery of ideas in response to these problems (our supply of ingenuity).”

Homer-Dixon brings a political scientist’s perspective to the problem, discoursing urbanely and at length (for almost 500 pages) on science, politics, personalities and social trends. He focuses on complexity as the root problem, rather than - as I have - on cognitive inadequacy. In my terminology, he concentrates on intrinsically hard problems rather than ones that are hard-for-humans. Complexity is hard no matter how smart we might be, which is why I’d call it intrinsic.

I’ve recently started thinking about complexity theory in an attempt to find new metaphors for the internet and the web, and it’s a telling coincidence that it’s coming up in the context of hard intangibles, too. It’s a useful reminder that while cognitive complexity limits our control of the world, a lot of it is intrinsically uncontrollable.