"Fast flux takes advantage of an option built into the Web's architecture, which allows site owners to switch the Internet Protocol address of a site without changing its domain name. The IP-switching function is meant to create more stability on the Web by allowing an overloaded Web site to switch servers without a hiccup. But cybercriminals using fast flux take advantage of the option to move the physical location of their malicious sites every few minutes, making them much harder to block or shut down."Another lesson, if any were needed, that all technologies are double-edged.
"in this world, there is one awful thing, and that is that everyone has their reasons" --- attrib. to Jean Renoir (details in the Quotes blog.)
Friday, December 28, 2007
Internet stability technology exploited by hackers
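The mechanics are easy to observe for yourself. Here's a minimal Python sketch of the idea – the domain is only a placeholder, and a real fast-flux tracker would query the DNS directly to read TTLs:

```python
import socket
import time

# Minimal sketch: resolve the same domain repeatedly and watch its set of
# IP addresses. "example.com" is only a placeholder; a genuine fast-flux
# domain would return short-lived, largely non-overlapping IP sets on
# successive queries, while a stable site's set barely changes.
seen = set()
for query in range(5):
    _, _, ips = socket.gethostbyname_ex("example.com")
    new = set(ips) - seen
    print(f"query {query + 1}: {sorted(ips)} (new: {sorted(new) or 'none'})")
    seen.update(ips)
    time.sleep(60)  # fast-flux DNS records often live only a few minutes
```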
Wednesday, November 07, 2007
European vs American regulation
From Brussels rules OK: How the European Union is becoming the world's chief regulator, The Economist, Sep 20th 2007:
The American model turns on cost-benefit analysis, with regulators weighing the effects of new rules on jobs and growth, as well as testing the significance of any risks. Companies enjoy a presumption of innocence for their products: should this prove mistaken, punishment is provided by the market (and a barrage of lawsuits). The European model rests more on the “precautionary principle”, which underpins most environmental and health directives. This calls for pre-emptive action if scientists spot a credible hazard, even before the level of risk can be measured. Such a principle sparks many transatlantic disputes: over genetically modified organisms or climate change, for example.
In Europe corporate innocence is not assumed. Indeed, a vast slab of EU laws evaluating the safety of tens of thousands of chemicals, known as REACH, reverses the burden of proof, asking industry to demonstrate that substances are harmless. Some Eurocrats suggest that the philosophical gap reflects the American constitutional tradition that everything is allowed unless it is forbidden, against the Napoleonic tradition codifying what the state allows and banning everything else.
The oil industry collects 51 cents in federal subsidies for every gallon of ethanol it mixes with gas and sells as E10
Some academics claim that ethanol takes more energy to produce than it supplies. This is contested; but even a UC Davis study says the energy used to produce ethanol is about even with what it generates, and that cleaner emissions would be offset by the loss of pasture and rainforest to corn-growing.
There's also a nasty little problem with E85: drivers apparently lose about 25% in fuel economy with E85, which means it has to sell for roughly a quarter less than gasoline just to break even on cost per mile.
Tuesday, November 06, 2007
Gardening the Web
I believe that it’s productive to represent the internet/web as a complex human system, but that’s an abstract concept that’s hard to grasp. A metaphor that everyone’s familiar with can enliven this idea: the internet/web is a global collection of gardens, and making policy is like gardening.
Just like a garden, the internet/web has a life of its own, but can be shaped by human decisions. A garden is neither pure nature nor pure culture; it’s nature put to the service of culture. In internet/web discourse, the “nature” is technology and commerce, and the “culture” is politics and policy. Few would claim that the internet/web should be left entirely to laissez-faire markets; it is also a social good, and some intervention is needed to protect the public interest.
Before delving further into the analogy between gardening and making communications policy, here is a summary of the properties of complex systems that apply to both (a toy numerical illustration of the last two follows the list):
- Hierarchy: systems consist of nested subsystems with linked dynamics at different scales
- Holism: the whole has properties different from a collection of separable parts
- Self-Organization: systems organize themselves, and their characteristic structural and behavioral patterns are mainly a result of interaction between the subsystems
- Surprise and Novelty: one cannot predict outcomes of interventions with any accuracy; any given model under-represents the system
- Robust Yet Fragile: a system is stable in a large variety of situations, but can break down unexpectedly
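Surprise and non-linearity are easy to demonstrate numerically. Here is a toy Python sketch – the logistic map standing in for any system with non-linear feedback, not for the internet itself:

```python
# Toy illustration of surprise and non-linearity: two trajectories of the
# logistic map x -> r*x*(1-x) that start a billionth apart diverge
# completely within a few dozen steps, so even a near-perfect model of
# the state soon predicts nothing.
r = 3.9
x, y = 0.5, 0.5 + 1e-9
for step in range(1, 51):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: x={x:.6f} y={y:.6f} gap={abs(x - y):.1e}")
```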
Just like the internet/web, there are many kinds of gardens. They vary in scale from window-sill planters to national forests, and in governance from personal to public and commercial. Some objectives of gardening are utilitarian, and others aesthetic; some see gardens as primarily productive and others cultivate them for pleasure. Likewise, some see the internet/web as a tool, and others as a source of meaning.
While most of the work in a garden is done automatically by the plants and other providers of ecosystem services, humans impose their desires regarding outcomes; similarly, internet/web innovation is driven by entrepreneurs and technologists according to their own agendas, though governments try to impose their will on the outcomes.
As with the internet/web, managing a garden is often a futile matter; one can never know precisely how things will turn out. Plants that thrive in the garden next door inexplicably languish in yours. Plagues of pests and disease appear without warning. Unintended consequences abound. For example, people using imidacloprid to control grubs in their lawns may be causing the collapse of bee hives across North America (more).
As on the internet/web, one can’t stop things coming over the fence from the neighbor’s garden. Birds, squirrels, slugs, and seeds don’t respect boundaries. A garden is embedded in a larger regional system, and its borders are porous. While every gardener can and should shape the garden to their preferences, there is a limit to their independence. The openness brings both plant-friendly bees and bird-chasing cats. Tension with neighbors is inevitable, but can be managed. There is management at many scales, from a gardener’s decision about what variety of tomato to plant next year, to state-wide prohibitions on planting noxious weeds.
The old silos of traditional communications regulation are like formal gardens or regimented farming. Everything is neat and in its place. There is relatively little variety in the composition and output of the cultivation, and the managers are few and well-defined. Today’s internet/web is more like a patchwork of allotments and wilderness. Control is decentralized, and there is much more variety.
This description of the internet/web as a garden is of course incomplete – like any complex system, different perspectives on the internet each reveal truths that are neither entirely independent nor entirely compatible. The garden metaphor, built on the analogy of the internet/web as a complex system, captures a lot of the key dynamics. It fits with other place-based metaphors for the web (as a building, market, library, or public venue). There is a resonance with tool metaphors, since gardens are a means to an end, whether pleasure or production. The link to the “internet as communications infrastructure” metaphor is less direct, but they don’t contradict each other.
Monday, October 08, 2007
Mind-boggling
For more, see USGS story, USGS satellite tracks, and APSN summary.
Thursday, September 27, 2007
Lobbying Pays
BusinessWeek concludes that on average, companies generated roughly $28 in earmark revenue for every dollar they spent lobbying. The top twenty in this game took in $100 or more for every dollar spent. For context, the magazine provides this factoid: companies in the Standard & Poor's 500-stock index brought in just $17.52 in revenues for every dollar of capital expenditure in 2006.
In Gulfstream’s case, that exec jet deal was worth $53 million. It was just one of 29 earmarks valued at $169 million given to General Dynamics (its parent) or its subsidiaries that year; a nifty 30:1 ROI given that the company spent only $5.7 million on lobbying in 2004.
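A back-of-the-envelope check of those numbers, using only the figures quoted above:

```python
# Back-of-the-envelope check of the figures quoted above.
earmarks = 169e6   # earmarks to General Dynamics and subsidiaries, 2004 ($)
lobbying = 5.7e6   # the company's lobbying spend that year ($)
print(f"earmark dollars per lobbying dollar: {earmarks / lobbying:.1f}")  # ~29.6
# versus $17.52 of revenue per dollar of capital expenditure for the S&P 500
```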
Wednesday, September 26, 2007
How statutes learn
It’s a truism that rewriting telecoms law is so hard that the US has only managed to do it twice in the last hundred years. But somehow Congress and the regulatory agencies stay busy, and stuff changes around the edges.
I was suddenly reminded of Stewart Brand’s wonderful book “How Buildings Learn”. (If you have any interest in either architecture or history, I strongly recommend it.) He espouses an onion-layer model of buildings. Quoting from the book:
Site - This is the geographical setting, the urban location, and the legally defined lot, whose boundaries and context outlast generations of ephemeral buildings. "Site is eternal."
Structure - The foundation and load-bearing elements are perilous and expensive to change, so people don't. These are the building. Structural life ranges from 30 to 300 years (though few buildings make it past 60, for other reasons).
Skin - Exterior surfaces now change every 20 years or so, to keep up with fashion or technology, or for wholesale repair. Recent focus on energy costs has led to re-engineered Skins that are air-tight and better-insulated.
Services - These are the working guts of a building: communications wiring, electrical wiring, plumbing, sprinkler system, HVAC (heating, ventilating, and air conditioning), and moving parts like elevators and escalators. They wear out or obsolesce every 7 to 15 years. Many buildings are demolished early if their outdated systems are too deeply embedded to replace easily.
Space Plan - The interior layout: where walls, ceilings, floors, and doors go. Turbulent commercial space can change every 3 years or so; exceptionally quiet homes might wait 30 years.
Stuff - Chairs, desks, phones, pictures; kitchen appliances, lamps, hairbrushes; all the things that twitch around daily to monthly. Furniture is called mobilia in Italian for good reason.
Brand argues that because the different layers have different rates of change, a building is always tearing itself apart. If you want to build an adaptive structure, you have to allow slippage between the differently-paced systems. If you don’t, the slow systems block the flow of the quick ones, and the quick ones tear up the slow ones with their constant change. For example, timber-frame buildings are good because they separate Structure, Skin and Services; “slab-on-grade” (pouring concrete on the ground for a quick foundation) is bad because pipes are buried and inaccessible, and there’s no basement space for storage, expansion, and maintenance functions.
He quotes the architectural theorist Christopher Alexander: “What does it take to build something so that it’s really easy to make comfortable little modifications in a way that once you’ve made them, they feel integral with the nature and structure of what’s already there? You want to be able to mess around with it and progressively change it to bring it into an adapted state with yourself, your family, the climate, whatever. This kind of adaptation is a continuous process of gradually taking care.”
There seems to be an analogy to policy making. Some things are almost eternal, just like Site: regulatory imperatives like taxation, public safety, and economic growth. Legislative Acts are like the slowly-changing Structure and Skin. The trade-offs and compromises they represent are hard to build, and so they’re slow to change. Then we get to regulatory rulings made within the context of legislation, the working guts of applying laws to changing circumstances and fine-tuning the details – these are like Services and Space Plan, which change every 3 to 15 years. Finally, like the Stuff in homes that moves around all the time, we have the day-to-day decisions made by bureaucrats applying the regulations.
This kind of model also gives a way to ask, restating Christopher Alexander slightly, “What does it take to craft legislation so that it’s really easy to make comfortable little modifications in a way that once you’ve made them, they feel integral with the nature and structure of what’s already there?”
I imagine that DC operatives do this instinctively – but perhaps an architectural metaphor could make the process even more efficient.
Thursday, September 20, 2007
Why we need stories
“Essentially, people often assume that an event with substantial, significant or wide-ranging consequences is likely to have been caused by something substantial, significant or wide-ranging.
“I gave volunteers variations of a newspaper story describing an assassination attempt on a fictitious president. Those who were given the version where the president died were significantly more likely to attribute the event to a conspiracy than those who read the one where the president survived, even though all other aspects of the story were equivalent.
“To appreciate why this form of reasoning is seductive, consider the alternative: major events having minor or mundane causes - for example, the assassination of a president by a single, possibly mentally unstable, gunman, or the death of a princess because of a drunk driver. This presents us with a rather chaotic and unpredictable relationship between cause and effect. Instability makes most of us uncomfortable; we prefer to imagine we live in a predictable, safe world, so in a strange way, some conspiracy theories offer us accounts of events that allow us to retain a sense of safety and predictability.”
Taleb’s account of our inclination to narrate is psychological: “It has to do with the effect of order on information storage and retrieval in any system, and it’s worth explaining here because of what I consider the central problems of probability and information theory. The first problem is that information is costly to obtain. . . The second problem is that information is also costly to store . . . Finally, information is costly to manipulate and retrieve.” (The Black Swan, his italics, p. 68)
He goes on to argue that narrative is a way to compress information. I suspect that the compression is related to extracting meaning, not the raw information. The long-term storage capacity of the brain seems to be essentially unbounded, but our ability to manipulate variables in short-term memory is very limited, to about four concurrent items. Stories provide a useful chunking mechanism: they’re pre-remembered frames for relationships. There is a relatively limited number of story shapes and archetypal character roles (cf. The Seven Basic Plots) in which cause and effect is carefully ordered and given meaning.
Taleb comes even closer to Leman when he connects the narrative fallacy with the desire to reduce randomness: “We, the members of the human variety of primates, have a hunger for rules because we need to reduce the dimension of matters so they can get into our heads. Or, rather, sadly, so we can squeeze them into our heads. The more random information is, the greater the dimensionality, and thus the more difficult to summarize. The more you summarize, the more order you put in, the less randomness. Hence the same condition that makes us simplify pushes us to think that the world is less random than it actually is.” (The Black Swan, his italics, p. 69) As Leman says, “we prefer to imagine we live in a predictable, safe world.”
Friday, September 14, 2007
Algorithms Everywhere
The most powerful algorithms are those that do real-time optimization. They could help UPS recalibrate deliveries on the fly, and reorder airplane departure queues at airports to improve throughput. More down-to-earth applications include sophisticated calculations of consumer preference that end up predicting where to put biscuits on supermarket shelves.
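To make “real-time optimization” concrete, here’s a toy sketch of the departure-queue idea. The scoring rule and flight data are invented purely for illustration; a real scheduler juggles separation rules, gate conflicts, weather, and much else:

```python
import heapq

# Toy sketch of continuous re-optimization of a departure queue: flights
# sit in a priority queue and, whenever a runway slot opens, the "best"
# flight under a simple throughput score leaves first.
def score(taxi_minutes, passengers):
    return taxi_minutes - 0.01 * passengers  # prefer quick, full flights

queue = []
for flight, taxi, pax in [("UA12", 9, 180), ("AA7", 4, 120), ("DL3", 6, 300)]:
    heapq.heappush(queue, (score(taxi, pax), flight))

while queue:  # in a live system, new flights would be pushed as they taxi out
    _, flight = heapq.heappop(queue)
    print("cleared for departure:", flight)
```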
If that’s true, the underlying fragility of algorithms is now pervasive. The fragility is not just due to the risk of software bugs, or vulnerability to hackers; it’s also a consequence of limitations on our ability to conceive of, implement, and manage very large complex systems.
The day-to-day use of these programs shows that they work very well almost all of the time. The occasional difficulty – from Facebook third-party plug-in applications breaking for mysterious reasons to the sub-prime mortgage meltdown – reminds us that the algorithmic underpinnings of our society are not foolproof.
In the same issue, the Economist reviews Ian Ayres’s book Super Crunchers: Why Thinking-by-Numbers Is the New Way to Be Smart, about automated decision making (The death of expertise, 13 Sep 2007). According to their reading of his book, “The sheer quantity of data and the computer power now available make it possible for automated processes to surpass human experts in fields as diverse as rating wines, writing film dialogue and choosing titles for books.” Once computers can do a better job at diagnosing disease, what’s left for the doctor to do? Bank loan officers have already faced this question, and had to find customer relations jobs. I used to worry about the employment implications; I still do, but now I also worry about relying on complex software systems.
Sunday, September 09, 2007
Software: complex vs. complicated
A large piece of code meets the criteria for a complex adaptive system: there are many, densely interconnected parts which affect the behavior of the whole in a non-linear way that cannot be simply understood by looking at the components in isolation. A code module can be tested and defined individually, but its behavior in the context of all the other components in a large piece of code can only be observed – and even then not truly understood – by observing the behavior of the whole. If software were linear and simply complicated, extensive testing wouldn’t be required after individually debugged modules are combined into a build.
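A minimal sketch of that point: two modules that each pass their unit tests in isolation, yet give a wrong answer in combination. The example is contrived, but the pattern – hidden coupling through shared state – is a staple of integration failures:

```python
# Two modules, each correct by its own unit tests, wrong in combination.
class Cache:
    """Memoizes lookups against a source of truth."""
    def __init__(self, source):
        self._source = source
        self._memo = {}

    def get(self, key):
        if key not in self._memo:
            self._memo[key] = self._source(key)
        return self._memo[key]

prices = {"widget": 10}              # a second module owns and updates prices
lookup = Cache(prices.__getitem__)

print(lookup.get("widget"))          # 10 -- the cache works as specified
prices["widget"] = 12                # the pricing module, also correct, updates
print(lookup.get("widget"))          # still 10 -- a stale read that only
                                     # testing the combination can reveal
```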
Some have compared software to bridges while calling (properly) for better coding practices. Holding software development to the same standards as other facets of critical infrastructure is questionable, however, because a software program is to a bridge as the progress of a cocktail party is to a watch. Both parties and watches have a large number of components that can be characterized individually, but what happens at a given party can only be predicted in rough terms (it’s complex because of the human participants) while a watch’s behavior is deterministic (though it’s complicated).
This challenge in writing software is of the “intrinsically hard” kind. It is independent of human cognition because it will catch you eventually, no matter how clever or dumb you are (once you’re at least smart enough to build complex software systems at all).
Geek’s addendum: Definitions of complexity
Homer-Dixon’s definition of complexity has six elements. (1) Complex systems are made up of a large number of components (and are thus complicated, in the meaning above). (2) There is a dense web of causal connections between the parts, which leads to coupling and feedback. (3) The components are interdependent, i.e. removing a piece changes the function of the remainder. (I think this is actually more about resilience than complexity.) (4) Complex systems are open to being affected by events outside their boundaries. (5) They display synergy, i.e. the combined effect of changes to individual components differs in kind from the sum of the individual changes. (6) They exhibit non-linear behavior, in that a change in a system can produce an effect that is disproportional to the cause.
Sutherland and Van den Heuvel (2002) analyze the case of enterprise applications built using distributed object technology. They point out that such systems have highly unpredictable, non-linear behavior where even minor occurrences might have major implications, and observe that recursively writing higher-level languages supported by lower-level languages, a source of the power of computing, induces emergent behaviors. They cite Wegner (1995) as having shown that interactive systems are not Turing machines: “All interactions in these systems cannot be anticipated because behavior emerges from interaction of system components with the external environment. Such systems can never be fully tested, nor can they be fully specified.” They use Holland’s (1995) synthesis to show how enterprise application integration (EAI) can be understood as a complex adaptive system (CAS).
References
Sutherland, J. and van den Heuvel, W.-J. (2002). “Enterprise application integration encounters complex adaptive systems: a business object perspective.” HICSS: Proceedings of the 35th Annual Hawaii International Conference on System Sciences.
Wegner, P. (1995). “Interactive Foundations of Object-Based Programming.” IEEE Computer 28(10): 70-72.
Holland, J. H. (1995). Hidden Order: How Adaptation Builds Complexity. Reading, Mass.: Addison-Wesley.
Saturday, September 08, 2007
The Ingenuity Gap
Homer-Dixon brings a political scientist’s perspective to the problem, discoursing urbanely and at length (for almost 500 pages) on science, politics, personalities and social trends. He focuses on complexity as the root problem, rather than - as I have - on cognitive inadequacy. In my terminology, he concentrates on intrinsically hard problems rather than ones that are hard-for-humans. Complexity is hard no matter how smart we might be, which is why I’d call it intrinsic.
I’ve recently started thinking about complexity theory in an attempt to find new metaphors for the internet and the web, and it’s a telling coincidence that it’s coming up in the context of hard intangibles, too. It’s a useful reminder that while cognitive limits constrain our control of the world, a lot of it is intrinsically uncontrollable.
Friday, August 31, 2007
Limits to abstraction
Booch mentions in passing on the Welcome page that “abstraction is the primary way we as humans deal with complexity”. I don’t know if that’s true; it sounds plausible. It’s definitely true that software developers deal with complexity this way, creating a ladder of increasingly succinct languages that are ever further away from the nitty gritty of the machine. While there are huge benefits in productivity, there’s also a price to pay; as Scott Rosenberg puts it in Dreaming in Code, “It's not that [developers] wouldn't welcome taking another step up the abstraction ladder; but they fear that no matter how high they climb on that ladder, they will always have to run up and down it more than they'd like--and the taller it becomes, the longer the trip.”
The notion of “limits to abstraction” is another useful way to frame the hard intangibles problem.
These limits may be structural (abstraction may fail because of the properties of a problem, or the abstraction) or cognitive (it may fail because the thinker’s mind cannot process it). In The Law of Leaky Abstractions (2002), Joel Spolsky wrote (giving lots of great examples) “All non-trivial abstractions, to some degree, are leaky. Abstractions fail. Sometimes a little, sometimes a lot. There's leakage. Things go wrong. It happens all over the place when you have abstractions. . . . One reason the law of leaky abstractions is problematic is that it means that abstractions do not really simplify our lives as much as they were meant to.”
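A classic leak of the kind Spolsky describes is easy to reproduce: the two-dimensional array abstraction promises that traversal order doesn’t matter, but the row-by-row memory layout underneath disagrees. A quick Python sketch (in C the gap is dramatic; in Python interpreter overhead shrinks it, but a difference usually still shows):

```python
import time

# The 2-D array abstraction says traversal order doesn't matter; the
# row-by-row memory layout underneath says otherwise.
n = 2000
grid = [[1] * n for _ in range(n)]

t0 = time.perf_counter()
total = sum(grid[i][j] for i in range(n) for j in range(n))  # row-wise
t1 = time.perf_counter()
total = sum(grid[i][j] for j in range(n) for i in range(n))  # column-wise
t2 = time.perf_counter()
print(f"row-wise: {t1 - t0:.2f}s  column-wise: {t2 - t1:.2f}s")
```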
There’s more for me to do here, digging into the literature on abstraction. Kit Fine’s book, The Limits of Abstraction (2002) could be useful, though it’s very technical – but at least there have been lots of reviews.
Tuesday, August 28, 2007
The Non-conscious
He reminds us of Benjamin Libet’s 1983 experiment indicating that decisions are made in the brain before our consciousness is aware of it:
“Using an electroencephalogram, Libet and his colleagues monitored their subjects' brains, telling them: "Lift your finger whenever you feel the urge to do so." This is about as near as we get to free will in the lab. It was already known that there is a detectable change in brain activity up to a second before you "spontaneously" lift your finger, but when did the urge occur in the mind? Amazingly, Libet found that his subjects' change in brain activity occurred 300 milliseconds before they reported the urge to lift their fingers. This implies that, by measuring your brain activity, I can predict when you are going to have an urge to act. Apparently this simple but freely chosen decision is determined by the preceding brain activity. It is our brains that cause our actions. Our minds just come along for the ride.”
Frith recalls the Hering illusion, where a background of radiating lines makes superposed lines seem curved. Even though one knows “rationally” that the lines are straight, one sees them as curved. Frith uses this as an analogy for the illusion that we feel as if we are controlling our actions. To me, this illusion (and perhaps even more profoundly, the Hermann-grid illusion) points to the way different realities can coexist. There is no doubt that humans experience the Hering lines as curved, and that we see shadows in the crossings of the Hermann grid. Likewise, there is no doubt that many (most?) humans have an experience of the divine. The divine is an experiential reality, even if it mightn’t exist by some objective measures.
Other results mentioned include Patrick Haggard’s findings that the act of acting strengthens belief in causation; Daniel Wegner’s work on how one can confuse agency when another person is involved; work by various researchers on how people respond to free riders; and Dijksterhuis et al.’s work on non-conscious decision making, which I discussed in Don’t think about it.
Black Swans in the Economist
From On top of everything else, not very good at its job, a review of a history of the CIA by Tim Weiner:
“The CIA failed to warn the White House of the first Soviet atom bomb (1949), the Chinese invasion of South Korea (1950), anti-Soviet risings in East Germany (1953) and Hungary (1956), the dispatch of Soviet missiles to Cuba (1962), the Arab-Israeli war of 1967 and Saddam Hussein's invasion of Kuwait in 1990. It overplayed Soviet military capacities in the 1950s, then underplayed them before overplaying them again in the 1970s.”
From The game is up, a survey of how the sub-prime lending crisis came about:
“Goldman Sachs admitted [that their investment models were useless] when it said that its funds had been hit by moves that its models suggested were 25 standard deviations away from normal. In terms of probability (where 1 is a certainty and 0 an impossibility), that translates into a likelihood of 0.000...0006, where there are 138 zeros before the six. That is silly.”
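The arithmetic is easy to check; a one-liner, assuming SciPy is available:

```python
from scipy.stats import norm

# Upper-tail probability of a 25-standard-deviation move under a normal
# distribution: P(X > 25) for X ~ N(0, 1).
print(norm.sf(25))  # ~3.1e-138 -- a number of the order the article describes
```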
Saturday, August 25, 2007
Programs as spaces
The article’s mainly concerned with the organizational consequences of needing to "load the program into your head” in order to do good work. But I want to focus on the spatial metaphor. Thinking through a program by walking through the spaces in your head is an image I've heard other programmers use, and it reminds me of the memory methods described by Frances Yates in The Art of Memory. (While Graham does make reference to writing and reading, I don't think this is aural memory; his references to visualization seem more fundamental.)
I wonder about the kind of cognitive access a programmer has to their program once it’s loaded. Descriptions of walking through a building imply that moment-by-moment the programmer is only dealing with a subset of the problem, although the whole thing is readily available in long-term memory. He’s thinking about the contents of a particular room and how it connects with the other rooms, not conceptualizing the entire house and all its relationships at the same instant. I imagine this is necessarily the case, since short-term memory is limited. If true, this imposes limitations on the topology of the program: the connections between different parts must be localized and factorizable – when you walk out of the bedroom you don’t immediately find yourself in the foyer. Consequently, problems that can’t be broken down (or haven’t been broken down) into pieces with local interactions of sufficiently limited scope to fit in short-term memory will not be soluble.
Graham also has a great insight on what makes programming special: "One of the defining qualities of organizations since there have been such a thing is to treat individuals as interchangeable parts. This works well for more parallelizable tasks, like fighting wars. For most of history a well-drilled army of professional soldiers could be counted on to beat an army of individual warriors, no matter how valorous. But having ideas is not very parallelizable. And that's what programs are: ideas." Not only are programming tasks not like fighting wars as Graham imagines them; they're not like manufacturing widgets either. The non-parallelizability of ideas implies their interconnections, and here we have the fundamental tension: ideas may be highly interlaced by their nature, but the nature of the brain limits the degree to which we can cope with their complexity.