Friday, December 29, 2006

Is beautiful software always best?

Circularly polarized seven-segment antenna with hemispherical coverage devised by genetic algorithm, Altshuler and Linden, Journal of Electronic Defense, vol.20, no.7, p.50-52 (July 1997)
I had a fascinating conversation with Adam Sapek on Wednesday about software, aesthetics, and mental models. He posed this question: Do we use elegance (say) as design criterion because it necessarily leads to better software, or simply because it helps us to think about it better?

Aesthetic criteria seem to help humans make better software [1]. It is quite possible that a sense of what’s beautiful in software is common among all humans, just as there seems to be a neurological basis for visual aesthetics [2].

But what about software developed by “aliens”, e.g. by computers themselves? Genetic algorithms, for example, might yield a better solution, but one that is “ugly” in human terms. They sometimes come up with “weird” and “counter-intuitive” solutions that are better than ours [3], such as unusual, highly asymmetric orbit configurations for multi-satellite constellations that minimize losses in coverage; unusually shaped radio antennas (illustrated above); and a solution to a problem in cellular automata theory that not only outperformed any of the human-created solutions devised over the last two decades, but was also qualitatively different.
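For readers who haven’t seen one, a genetic algorithm has no notion of elegance built in: it just mutates, recombines, and selects. Here’s a minimal sketch with a deliberately trivial stand-in fitness function (real applications score antenna gain or orbital coverage instead; every name and parameter below is illustrative, not from any of the cited papers):

```python
import random

rng = random.Random(0)

def fitness(bits):
    # Hypothetical stand-in objective: count of 1s. A real GA would score
    # antenna gain, satellite coverage, etc.
    return sum(bits)

def evolve(n_bits=32, pop_size=40, generations=200, mut_rate=0.02):
    """Evolve bitstrings by truncation selection, crossover, and mutation."""
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # keep the fitter half (elitism)
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(n_bits)         # one-point crossover
            child = a[:cut] + b[cut:]
            # flip each bit with probability mut_rate
            child = [bit ^ (rng.random() < mut_rate) for bit in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

Nothing in the loop rewards symmetry or legibility, which is exactly why the solutions that fall out can look so alien.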

As Steven Vogel argues in Cats’ paws and catapults, natural design and human engineers arrive at very different solutions to the same problems, while relying on similar engineering principles. To pick a few examples: Right angles are rare in nature, but common in human technologies; nature builds wet and flexible structures, we go for dry and stiff; nature’s hinges mainly bend, and ours mainly slide.

It may turn out that our instinctive aesthetics for software will diverge from what actually works well on the systems our software runs on. For example, concurrent programming seems to be very hard for humans; our criteria for “good software” may not work very well for many-processor architectures. We may be able to build tools that come up with good solutions, but we won’t have an intuitive grasp of why they’re good. Computer science may then find itself in a crisis of intelligibility like the one physics encountered over the interpretation of quantum mechanics a century ago.


[1] Don Knuth was an early proponent of aesthetics in software, as in his “literate programming” initiative. According to Gregory Bond in “Software as Art,” Communications of the ACM, August 2005/Vol. 48, No. 8, p. 118, Knuth identified the following as properties of software that inspire pleasure or pain in its readers: correctness, maintainability, readability, lucidity, grace of interaction with users, and efficiency. Charles Connell proposes that all beautiful software has the following properties: cooperation, appropriate form, system minimality, component singularity, functional locality, readability, and simplicity.

[2] Zeki and Kawabata (Journal of Neurophysiology 91: 1699-1705, 2004) found using functional MRI scans that the perception of different categories of paintings (landscapes, still lifes, portraits) is associated with distinct and specialized visual areas of the brain, and that parts of the brain are activated differently when viewing pictures subjects self-identified as beautiful or ugly. For more papers on neuroesthetics, see

[3] These examples were taken from Adam Marczyk’s Genetic Algorithms and Evolutionary Computation. It provides a useful inventory of GA applications, seemingly unaffected by the paper’s polemical goal of rebutting creationists’ claims about genetic algorithms and biological evolution.

RSS feed for my factoids site

If you wish to subscribe to Factoids, here's the link:

Some recent examples:

Eighty percent of owners buy holiday and birthday gifts for pets
Pet-item sales and services have the second-fastest growth for U.S. retailers after consumer electronics
Humans sequester one quarter to one half of all net terrestrial primary productivity for our own use

I typically post one factoid a week, but it's bursty . . .

Thursday, December 28, 2006

A simple hard problem

Lord May of Oxford
The Royal Society recently podcast the 2005 President's Address by Lord May of Oxford (PDF). He argues that the key obstacle to solving vital problems facing the planet like global warming and the loss of biodiversity is the collective action dilemma: when everybody has to contribute a little to obtain a large collective benefit, it's in each individual's immediate best interest to cheat.

According to May, the reason why cooperative behavior evolved and is maintained is the most important unanswered question in evolutionary biology. We can see why in many countries' approach to climate change: they are not willing to incur the penalty of lost growth unless there is a guarantee that everyone will act in concert.

The politics of climate change or biodiversity is fiendishly complicated, and may merit the "wicked problem" label. However, the underlying issue is very simple: altruism is not in anyone's immediate self-interest.

Tuesday, December 26, 2006

Kinds of Hard

There are two kinds of people in the world: those who divide everything into two categories, and those who . . . um . . . you know . . . Well, I think it’s funny . . . . I guess I’ve been watching too much Daily Nut, where pretty witty young things start every show with jokey chit-chat.

Back to regular programming: My hypothesis is that the hardest problems are changing as we build an increasingly intangible world. The question is, what makes a problem hard? So as usual when I’m stuck, here’s a taxonomy.

Let’s break problem solving into three stages:
  1. Framing the problem
  2. Solving it
  3. Validating the solution
For each of these stages, one can identify degrees of difficulty. A difficulty in just one of the stages can make the overall problem hard. For example, the problems of board games like go and chess are easy to frame, and it’s clear when a solution has been reached; however, playing the game is hard. One can mix and match the degrees of difficulty in each stage to build a very large number of problem types.

If you know of situations that don’t fit into this scheme, Dear Reader, please let me know. I’ll be looking out for them, too.

1. Framing

In the easiest case, a problem can be clearly stated (e.g. game rules). It gets harder when the problem statement evolves as the solution emerges (e.g. wicked problems). There are cases where it’s not agreed that a problem exists (e.g. global warming until recently). And then there are cases where you don’t even know there’s a problem (e.g. Donald Rumsfeld’s “unknown unknowns”).

2. Solving

Some problems are easy to solve, like tic-tac-toe. Others have multiple solutions (e.g. teaching kids to read). Even mathematically hard problems can have millions of solutions (e.g. superstring theory). Many problems are soluble, but may or may not be hard (e.g. optimization). Some cannot be solved at all (e.g. calculating Omega, Chaitin’s halting probability). In some cases it’s not clear how to even start solving the problem, let alone what path to follow (e.g. speculative software projects). In others, humans think they instinctively know the answer, but in fact get it wrong (e.g. mistakes we make interpreting statistics).

3. Validating

Checking the acceptability of a solution may be a trivial mechanical matter even though finding the solution in the first place is hard (e.g. factoring a product of two large primes). Some problems may have many candidate solutions, but no way to know in advance which one is right, or best (e.g. improving the situation in Iraq). There may be no way of knowing whether an action has solved a problem (e.g. building a road to address congestion). There may be no agreement in a community about whether what’s been presented is a valid solution (e.g. the computer proof of the four-color theorem).

Some other approaches

There are many ways of categorizing problems, and I’m just beginning my collection.

Nancy Roberts works in the “wicked problem” tradition and defines three types of problems in “Wicked Problems and Network Approaches to Resolution” (International Public Management Review Vol. 1, No. 1, 2000):
  • Simple: there is consensus on a problem definition and solution
  • Complex: although problem solvers agree on what the problem is, there is no consensus on how to solve it.
  • Wicked: problems with four characteristics – no consensus on the definition of the problem; a vast and diverse group of stake holders; constraints on the solution are constantly shifting; and no consensus on the solution of the problem.
Another distinction goes all the way back to Aristotle, dividing problems between those susceptible to formal logic, and those needing argumentation. There are differences in method between the two, but also differences in validation. Argumentation is made before a specific audience, and the case succeeds if the audience is persuaded. Logic’s audience is hidden, encoded in its rules and the criteria of valid proofs. The difference between logic and rhetoric is reminiscent of the distinction I drew between “plain ol’ hard” and “human-hard” problems, but is not quite the same: some problems of logic, e.g. in statistics, are human-hard.

In computer science, problem hardness is often measured by the time it takes to reach a solution as a function of the problem size, n. If the time taken is a polynomial function of the problem size, e.g. O(n) or O(n³), the problem is said to be in the complexity class P. There is a hierarchy of such classes. Problems in the EXPTIME class are solvable by a deterministic Turing machine in time O(2^p(n)), where p(n) is a polynomial function of n; P is a proper subset of EXPTIME. The class NP contains decision problems whose solutions can be verified (but not necessarily found) in polynomial time: P ⊆ NP ⊆ EXPTIME. The hardest problems in NP are NP-complete. For many NP-complete problems, typical instances are actually easy to solve; Cheeseman, Kanefsky and Taylor showed that such problems can be characterized by an “order parameter,” and that the hard instances occur at a critical value of that parameter (“Where the Really Hard Problems Are,” Proceedings of IJCAI-91).
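The order-parameter idea can be seen in a toy experiment: generate random 3-SAT formulas at several clause-to-variable ratios (the order parameter; the hard region for 3-SAT is near a ratio of 4.3) and count how many nodes a naive backtracking solver visits. Everything here is a sketch of the idea, not the authors’ actual setup:

```python
import random

def satisfiable(clauses, n_vars, assignment=(), counter=None):
    """Naive backtracking search over truth assignments.

    Literal k means "variable k is true", -k means "variable k is false".
    counter (a one-element list) tallies nodes visited, a rough hardness proxy.
    """
    if counter is not None:
        counter[0] += 1
    for clause in clauses:
        # A clause is falsified when every one of its literals is assigned
        # and evaluates to false under the partial assignment.
        if all(abs(lit) <= len(assignment)
               and assignment[abs(lit) - 1] == (lit < 0)
               for lit in clause):
            return False
    if len(assignment) == n_vars:
        return True
    return (satisfiable(clauses, n_vars, assignment + (True,), counter)
            or satisfiable(clauses, n_vars, assignment + (False,), counter))

def random_3sat(n_vars, n_clauses, rng):
    """Random 3-CNF: each clause picks 3 distinct variables with random signs."""
    return [tuple(rng.choice((1, -1)) * v
                  for v in rng.sample(range(1, n_vars + 1), 3))
            for _ in range(n_clauses)]

rng = random.Random(42)
n = 12
for ratio in (2.0, 4.3, 6.0):   # clauses-to-variables ratio: the order parameter
    nodes = []
    for _ in range(30):
        counter = [0]
        satisfiable(random_3sat(n, int(ratio * n), rng), n, counter=counter)
        nodes.append(counter[0])
    print(f"ratio {ratio}: mean search nodes {sum(nodes) / len(nodes):.0f}")
```

At low ratios almost every formula is satisfiable and a solution is found quickly; at high ratios contradictions appear early; the expensive instances cluster around the critical ratio.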


As always, many thanks to the many people who’ve helped me think about this topic. In this case I’m particularly grateful to Miladin Pavlicic for helping me understand some different kinds of software project challenges.

Monday, December 25, 2006

Happy Holidays, Dear Reader

Thank you for reading, and thank you for your feedback, both in the blog comments, and privately.

May you and your loved ones be healthy, happy, and peaceful


Sunday, December 24, 2006

Scoundrels really are dirty

Wash away your sins. An era of moral decay. Keep your nose clean. Corruption. There are endless examples of metaphors equating morality with cleanliness, and vice with filth. Business school researchers at Toronto and Northwestern have found that this is not just a matter of language; being associated with something immoral leads to an urge to physically clean oneself.

Zhong and Liljenquist have conducted three studies of the “Macbeth effect,” the need to cleanse oneself after a threat to one’s moral purity (Science 8 September 2006: Vol. 313. no. 5792, pp. 1451 – 1452; abstract). New Scientist of 7 September 2006 describes the experiments.

There seems to be an active mental mapping between morality and cleanliness. Ethics is abstract, and we activate our corporeal instincts when thinking about it. Perhaps embodied metaphor is the only way we can get any extensive grasp of morality. If that’s true, then ethical concepts that can’t be modeled physically cannot exist.

I don’t know if one can disprove this hypothesis. A candidate concept has evidently been thought of; it then remains to show that there is no physical correlate. However, language is pliable enough that one will always be able to draw a link to a bodily metaphor. Whether this is persuasive will be a subjective judgment. Perhaps neural mapping and brain imaging will eventually be able to help.

Tuesday, December 19, 2006


I suspect that the most important thinking happens at the edge of intelligibility. It would help to define terms. As a first step, I’d mark a few points on the continuum between clear understanding and incoherent perplexity.

I’ve found it useful to think in terms of known / understood / intelligible. Each of these has three states:
  • Known, not known, not knowable
  • Understood, not understood, not understandable
  • Intelligible, not intelligible, not intelligibuble
Whether something is known is largely a matter of fact: You know what happened in the ballgame last night, and I don’t; the result of tomorrow’s game is unknowable today. A matter might be up for debate: the ivory-billed woodpecker, a possibly extinct bird, may or may not be present in the Florida Panhandle. These facts may or may not be known, but there is little argument about whether they are knowable.

Some things are not knowable, in terms of a given system of thought. For example, Heisenberg’s uncertainty principle in quantum mechanics states that increasing the measurement accuracy of a particle’s position increases the uncertainty of a simultaneous measurement of its momentum. If its position is exactly known, its momentum is unknowable.

Something is not understood when there isn’t a compelling answer to a “Why question.” When an explanation is persuasive, a phenomenon is deemed to be understood. We understand why the sun moves through the sky because we accept an explanation about planetary motion that involves Newtonian mechanics and a disposition of the sun and the earth. Understanding is a matter of argumentation; it’s subjective. At least one human is necessarily involved, and usually a community decides whether it understands a process, that is, whether the explanation meets the standards of that community.

When the terms of an explanation exceed one’s grasp, something is not understandable. Religious mysteries fall into this category; the Holy Trinity is not understandable in logical terms. A more mundane version occurs when an individual or group doesn’t have the contextual knowledge that supports an explanation; string theory is not really understandable to those without the requisite knowledge of advanced mathematics. However, I will place such cases in the category “not intelligible.” (Agreed, this taxonomy isn’t water-tight.)

By intelligible I mean something that can be apprehended in general terms. One may not grasp all the steps of an explanation, but the overall shape is familiar. Any book written in English is to some extent intelligible to an English speaker. (Translations of French postmodernists don’t count.) If something is not intelligible, not only do you not understand it – you’re not even sure what the topic of discussion is. Many intellectuals might find it unintelligible that Francis Collins, leader of the human genome project, is both a devout Christian and a scientist.

When there is no possibility of making something intelligible, it’s “not intelligibuble”. Philosopher of mind Thomas Nagel famously argued that because consciousness has an irreducibly subjective component, we will never know what it’s like to be a bat. That experience is not intelligibuble to a human.

Monday, December 11, 2006

A lower bar

Some kind readers noticed that this blog is back after a hiatus. (There are readers!) Roy Peter Clark’s advice helped me get going again. In "Writing Tool #33: Rehearsal," he quotes the poet William Stafford:

I believe that the so-called "writing block" is a product of some kind of disproportion between your standards and your performance ... One should lower his standards until there is no felt threshold to go over in writing. It's easy to write. You just shouldn't have standards that inhibit you from writing.

In short: If you have writer’s block, your standards are too high.

When I got stuck last year, I was helped by Anne Lamott’s advice in Bird by Bird to just bang out “a shitty first draft.” This time around the problem has been getting to a half-decent second draft. Until I figure that out, I’ll take William Stafford’s advice and keep lowering the bar. Sometimes, especially on the web, the shitty first draft just has to make its own way in the world.

Saturday, December 09, 2006

Testing theories of architectural intelligibility

Architecture involves designing spaces that are intelligible: visitors can discern the purpose of a structure, find the entrance, and work out how to get around. Courses in architecture aim to impart the theory and practice of designing intelligible spaces.

3D simulation software is commonly used to construct building models, and can be used to test whether people moving through them can, in fact, make sense of them.

Rather than just test the building for intelligibility, one can test the underlying theories by creating virtual spaces that instantiate them, with a “volume control”: an experimenter can adjust the degree to which a rule is implemented to find the point at which a user can no longer make sense of a building.

Think of it as usability testing meets architectural theory (feng shui, Christopher Alexander, New Urbanism, and on and on).

By the “if I can think of it, someone’s already built it” rule, it’s certain that this has already been done. If you know of examples, Dear Reader, please let me know.

The expensive way would be to use commercial architecture design packages; a quick and dirty approach could use Second Life. The challenges include (1) extracting variable-based rules from architectural design principles, and (2) building the volume control functionality.

Tuesday, December 05, 2006

The edge of intelligibility

I suspect that the most important thinking happens at the edge of intelligibility, between triviality and incoherence, just as it is said that the most complex structures exist at the edge of chaos, between order and complete randomness. Bear with me while this post wanders the borderlands of intelligibility. Hopefully that means it’s a worthwhile topic...

We only argue about things that are uncertain; otherwise, there’s no point in having a debate. Argumentation not only allows participants to test their reasoning and persuade others, but can also lead to new insights. This is particularly true in complex, “wicked” problems where a question can only be grasped by attempting to fashion a solution.

If a discussion involves the question “But what do you mean by X?” it’s probably on the edge of intelligibility. Social debates thrive in this zone. For example, what does “life” mean in the phrase “Life begins at conception?” The abortion debate hinges on when organic matter becomes a human being. This is a very complex question where any answer raises questions about the meaning of the term “human” (at least for non-partisans).

Or: what does “information overload” mean? There’s more information today, but are we more overloaded than our forebears? How would we know? The concept is so broad that we probably can’t measure information overload today, let alone estimate it for past generations. Adam Gopnik argues in “Bumping into Mr. Ravioli” [1] that no-time-to-meet-your-friends busyness is a very modern affliction. Samuel Pepys, a very busy man, never complains of busyness. Gopnik contends that until the middle of the nineteenth century, the defining bourgeois affliction was boredom, not frenzy. Perhaps they had information underload... regardless, both ennui and overload live under the sign of meaninglessness, and thus at the edge of intelligibility.

We oscillate between excitement and boredom because we crave both novelty and predictability. When we have predictability, we become bored and seek novelty; as soon as we’re stimulated, we become agitated and seek refuge in predictability. This experience is probably common to all animals since it’s a good heuristic for finding food and staying safe.

Likewise, we seek both perplexity and reassurance. When it swept the world, sudoku was a two-fer: a perplexing novelty. The daily news is another two-fer: a ritual reassurance that the world hasn’t changed, even as it changes. Derek Lowe points out that there are a number of news templates that are used over and over again, like How The Mighty Have Fallen, or Just Like You Thought. As we alternate between perplexity and reassurance, we skate on the edge of intelligibility. Journalists are very good at giving readers just as much information as they can deal with, and then a little more, adding a pinch of perplexity to the comfort food of understanding.

Donald Rumsfeld provided a multi-layered lesson in intelligibility during a Department of Defense news briefing on Feb. 12, 2002. Since his pronouncement is sometimes edited for clarity [3], here’s what the transcript says:

“Reports that say that something hasn't happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns -- the ones we don't know we don't know. And if one looks throughout the history of our country and other free countries, it is the latter category that tend [sic] to be the difficult ones.”

Rumsfeld’s model sums up the context in which decision makers operate. “Tops” like the Secretary of Defense not only inhabit a world of complexity and responsibility, as Barry Oshry would have it, but also live in the perpetual twilight between the known and unknown.

The cusp between competence and risk is another productive margin. Mihaly Csikszentmihalyi argues in Flow: The Psychology of Optimal Experience that we find our most productive and rewarding states of being when a challenge tests but does not exceed our skill. For intellectual performances, the search for flow will take us to the edge of intelligibility.

The edge of intelligibility is at different places for different people. Someone who can grasp a theorem in trigonometry in a glance might struggle to make sense of a situation on the football field; a good quarterback might have trouble reading the motives of people around a meeting table. Since there are so many cognitive competencies [2], there is always justification for anybody to feel those around them are ignoramuses, or to feel that they are out of their depth compared to the expert next to them.

The edge of intelligibility is a subjective question that involves personal expertise and communal standards. Peter Dear’s wonderful new book The Intelligibility of Nature: How Science Makes Sense of the World argues that Newton’s Principia was not considered to be valid natural philosophy by many leading scientists because it did not provide a mechanical explanation of how gravity worked. Getting the right answer wasn’t sufficient to qualify as science; it had to provide a meaningful explanation, too.

The question of intelligibility is related to my pursuit of hard intangibles, but I’m not yet sure exactly how. A problem must be recognizable as such to be tractable, and is thus intelligible to a certain extent. However, it’s hard because so much about it is difficult to grasp. Hopefully the pursuit of puzzlement will eventually lead to more clarity!


[1] “Bumping into Mr. Ravioli” by Adam Gopnik. First published in the New Yorker, September 30, 2002. Reprinted in The Best American Essays 2003.

[2] See e.g. Howard Gardner’s work on Multiple Intelligences. He argues that there are seven distinct intelligences: linguistic; logical-mathematical; spatial; musical; bodily-kinesthetic; interpersonal; and intrapersonal. Each person has a different mix of these skills.

[3] Many poked fun at the Secretary for this formulation. It’s true that he was dodging a question about the lack of evidence of a direct link between the Iraqi government and terrorist organizations, and his reply is more than a little convoluted. But to his credit, it’s not easy to discuss epistemology using a 2x2 matrix in a couple of sentences in a live interview. He left out the entry “unknown knowns,” that is, things that you know without realizing that you do. An example from the War on Terror might be when an organization has important information tucked away in a regional office but top executives aren’t aware of it. I wouldn’t be surprised if this fourth quadrant had been discussed at the Pentagon.

Saturday, November 25, 2006

Fad Fading

“If you want to pick a fight with a free-market economist,” starts a BusinessWeek story [1], “say something nice about the minimum wage.”

I’ve been up against some heavyweight economists about the benefits of unlicensed spectrum uses like Wi-Fi. For most if not all mainline economists, markets are the best way to allocate spectrum; unlicensed, in their view, wastes society’s resources. It’s therefore reassuring to see a story that shows how economic opinion evolves – even though my argument for some unlicensed allocations is based on regulatory balance rather than economic analysis.

Like all conclusions, even ones from the hard sciences, the received wisdom of economics shifts over time. Since the economics used in policy debates belongs more in the Humanities than the Sciences, it shouldn’t surprise anyone when “the obvious” changes – no more than when a food type or lifestyle choice goes from healthy to dangerous and back again.

The punch line of the BW story, and this post:

“But the economics profession is far less united against the minimum wage than it was a generation ago. Since the early 1990s an influential group of economists has poked holes in the once strongly held belief that the minimum wage is a major job killer. And now there's economic research disputing the rest of the conventional wisdom. Some economists are saying that minimum-wage increases have a ripple effect, bumping up the pay of a large portion of the working poor.”

Caveat: the main evidence cited comes from a labor-supported Washington think tank. Cue FX: the sound of an axe being ground. I guess I shouldn't hold my breath for free-market economists to change their minds...


[1] “More Ammo For A Higher Minimum,” BusinessWeek, 27 Nov 2006,

Friday, November 24, 2006


New Scientist reports that a Cornell scientist is predicting that solar flares will start drowning out GPS signals by 2011 (Solar flares will disrupt GPS in 2011). The researchers say the problem has escaped detection before because GPS systems have spread in popularity during a time of relatively low solar activity.

A neat little example of the consequences of assuming that the future will be like the present.

Thursday, November 23, 2006

Crossing the Curve

One can buy a decent home PC system for $330 (Dell Dimension E521, checked 11/23/06). Windows Media Center systems start at $360.

The cheapest Windows Vista operating system will retail for $200 (“Home Basic”). An upgrade is $100. (Summary on GMSV.) The Vista replacement for Windows XP Home (“Windows Vista Home Premium”) is $240; an upgrade $160.

PC hardware prices have been steadily coming down for years. Home PCs used to cost $1,500; then $1,000; and now well below $500. The price of Windows has stayed constant.

We’re close to the long-expected inflection point where the Windows operating system costs more than the PC hardware it runs on. It’s an inflection because I expect that hardware prices will continue to drop, while Microsoft will continue to try to maintain the price of the operating system. Microsoft argues that new software features will keep the price of hardware constant, since users will need fancier systems; I’m not persuaded. Software functionality increases linearly, while hardware price/performance drops exponentially.
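The inflection can be made concrete with back-of-envelope arithmetic. The starting prices come from the figures above; the 20%-per-year hardware decline is an assumed rate for illustration, not a number from any source:

```python
# Back-of-envelope projection: hardware starts at $330 (Dell E521, 2006);
# OS price held constant at $240 (Vista Home Premium full retail).
hw, os_price, year = 330.0, 240.0, 2006
DECLINE = 0.20   # assumed annual hardware price decline; illustrative only

while hw > os_price:
    hw *= 1 - DECLINE
    year += 1

print(f"hardware dips below the OS price around {year} "
      f"(${hw:.0f} vs ${os_price:.0f})")
```

Under these assumptions the crossover comes within a couple of years; a gentler decline rate pushes it out, but constant OS pricing against any exponential hardware decline crosses eventually.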

The implications are significant. As hardware becomes a smaller and smaller part of the package price, there will be downward pressure on the price of the operating system. If Microsoft cannot resist this, it will erode its legendary profit margins.

Open source software will increase the pressure. As hardware becomes essentially free, the attractiveness of a free operating system grows. Linux is quietly making progress on the desktop. While Linux-on-the-PC is still only geek amusement, the user experience is steadily improving. Its market share growth could be as rapid and substantial as that of Firefox. I’m hearing more and more anecdotes about Ubuntu. For example, take this user report about switching from Windows:
“I installed the Ubuntu Linux 6.10 (Edgy Eft) distribution that I downloaded and burnt to CD on my desktop. Wow. That’s all I can say - Wow. Ubuntu installed like a dream in less than 30 minutes, and everything just worked. My wireless card worked, power management worked, and the DVD burner worked with no tweaking, fidgeting, or fussing.”

Apple’s model looks increasingly attractive. It knows how to make money selling hardware. It sells what Tren Griffin calls “software in a box.” (Bill Gurley got the idea from Tren.) It can flex its system price below the combined software + hardware price in the PC ecosystem, putting pressure on both hardware vendors and Microsoft. Apple has the additional advantage that Mac OS X has an open-source core; it only needs to invest in developing value-added software, whereas Microsoft has to maintain the whole enchilada.

As the hardware price continues to drop below the operating system price, Microsoft will face increasing pressure to sell a “Microsoft PC,” with all that entails for its relationships with hardware vendors and its operating margins. It’s been learning how to do this for some time. The Xbox is “software in a box,” and Zune is a clone of the iPod model. (Imitation is the sincerest form of flattery.)

(Thanks to Brad G. for the fascinating conversation that inspired and informed this post.)

Monday, November 20, 2006

Being a mayfly

Everyone we see gets old faster than we do. Our parents, not to mention our children, age before our eyes. The self behind our eyes doesn’t age as fast as human bodies do, so we see other people aging past us.

The animals around us age more rapidly than we do. Pets go from puppies to tired old dogs while we feel hardly a change. It’s a matter of life span: humans live longer than most animals. We don’t encounter the exceptions, like the Galapagos tortoise, which lives for more than 150 years.

What’s it like to be, say, a dog that ages and dies while its human companions hardly seem to change?

One can get a sense by looking at the world around us, which changes on geological time – Gaia, if you will. A cycle of the seasons is like the world breathing in and out. Our lives flash by in just a few of Gaia’s breaths.

Saturday, November 11, 2006


Network neutrality nightmare scenarios are largely hypothetical: Bad thing X could happen if a network operator did Y. Activist outrage and some media attention are one reason we’re still in the realm of conjecture. Even if network operators were minded to do Y, they'd rather not draw the spotlight. How to keep the klieg lights shining?

Since discrimination, in the neutral sense of making distinctions, can have both good and bad outcomes [1], regulations will also have to make subtle distinctions. It’s even harder to make law about subtle hypotheticals. I’m therefore inclined against detailed legislation in advance of facts. Legislation, if any, can lower the risk of unintended side effects by simply establishing principles that a regulatory agency can apply to alleged bad behavior. But agencies don’t have the means to gather data. Even companies with an interest in the matter operate in the dark; I’ve heard that Vonage found out about the discrimination against their VOIP service in the Madison River case only by accident. How to find bad behavior?

The answer is millions of volunteer PCs sending test payloads to each other to characterize what happens to traffic on the net. This would yield an inventory of how different kinds of traffic are carried across all the different segments of the net. Software could flag potential problems, and volunteers could review them to identify cases that need more attention.

Anybody could download a small application that would run tests when they’re not using their machine. Just as SETI@Home uses spare CPU cycles to search radio data for the signature of extra-terrestrial intelligence, so this code would use spare cycles and spare bandwidth to search for signatures of net neutrality exceptions. SETI for the net… NETI@Home: Neutrality Exception Tracking Infrastructure.

NETI@Home would use peer-to-peer technology like BitTorrent; there would be no single repository of data, and no central organization directing the work. People could select the kinds of test payload they want to run; to make it easy for the majority, various recognizable organizations might recommend payload sets, or one could find popular sets on sites like digg. Volunteers would write the software, and parse the results. The tests could change quite quickly to meet changing network operator strategies, while the underlying software would change more slowly.
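A single probe in such a system could be quite simple. The sketch below is a hypothetical design, not actual NETI@Home code: it sends two payloads that differ only in their apparent traffic class to the same peer and compares round-trip times; a large, sustained gap between classes would be flagged for human review. To keep the sketch self-contained, the "peer" here is a local UDP echo server rather than a volunteer machine across an operator’s network.

```python
# Hypothetical NETI@Home-style probe: compare round-trip times for two
# payload types sent to the same peer. The "peer" is a local UDP echo
# server so the sketch runs stand-alone.
import socket
import statistics
import threading
import time

TRIALS = 20
PORT = 9876

def echo_server(ready):
    """Stand-in for a volunteer peer: echo every datagram back."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", PORT))
    ready.set()
    for _ in range(2 * TRIALS):          # one echo per probe datagram
        data, addr = sock.recvfrom(2048)
        sock.sendto(data, addr)
    sock.close()

def median_rtt(payload):
    """Median round-trip time (seconds) for one payload type."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2.0)
    rtts = []
    for _ in range(TRIALS):
        start = time.perf_counter()
        sock.sendto(payload, ("127.0.0.1", PORT))
        sock.recvfrom(2048)
        rtts.append(time.perf_counter() - start)
    sock.close()
    return statistics.median(rtts)

ready = threading.Event()
threading.Thread(target=echo_server, args=(ready,), daemon=True).start()
ready.wait()

voip_like = bytes(160)                                   # stand-in for a voice frame
bulk_like = b"GET /file HTTP/1.1\r\n".ljust(160, b".")   # stand-in for web traffic

rtt_voip = median_rtt(voip_like)
rtt_bulk = median_rtt(bulk_like)
# A ratio far from 1.0, sustained over many runs and many peers,
# would be a candidate neutrality exception.
print(f"voip-like: {rtt_voip * 1e6:.0f} us, bulk-like: {rtt_bulk * 1e6:.0f} us")
```

In a real deployment the payloads would mimic actual protocols (VOIP, BitTorrent, HTTP) much more faithfully, the peers would sit on opposite sides of the network segments under test, and the results would feed the shared peer-to-peer data pool described above.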

Citizens and regulators need good data if they’re to get the best out of the Internet. Finding the balance Jon Peha describes between allowing discrimination that benefits users, and preventing market power abuses, will be a lot easier with real-time tracking of network operator behavior.


[1] Jon M. Peha, “The Benefits and Risks of Mandating Network Neutrality, and the Quest for a Balanced Policy,” 34th Telecommunications Policy Research Conference, Sept. 2006, at

Monday, November 06, 2006

Colonialist Misconceptions

The Great Powers’ carve-up of the Middle East in 1914-1922 is a sorry tale of misconceptions that range from poor intelligence up to cross-cultural ignorance, wonderfully told in David Fromkin’s A Peace to End All Peace: The Fall of the Ottoman Empire and the Creation of the Modern Middle East. (I’m indebted to Michael Kleeman for giving me this book.) I’m particularly interested in the cultural biases since they show the power of hidden mental models at work.

In the summer of 1914, Britain was looking for a way to undermine the Ottoman Empire. The Sultan was the Caliph of Islam, and the British were worried that he would use this position to sow discontent among the many Moslems in India, including the disproportionately large Moslem part of the Indian Army.

They decided to offer the Caliphate to Hussein, the Emir of Mecca, because they believed that whoever controlled the person of the Caliph, Muhammad’s successor, controlled Islam. (They stumbled into the notion of Arab nationalism as a front for British power, but that’s a different part of Fromkin’s story.) Hussein was a plausible candidate, the British thought, because he was the guardian of the Moslem Holy Places.

Hussein read Britain’s approach as an offer to make him the ruler of a vast kingdom, which is what the caliphate signifies. The British were surprised when he replied to their overtures by asking for details of the additional kingdoms he would gain. The core British misconception was to assume that the split between temporal and spiritual authority that pitted the Pope against the Holy Roman Emperor in medieval Europe also existed in Islam; in fact, it was alien to it.

Fromkin summarizes the story thus: “The British intended to support the candidacy of Hussein for the position of ‘Pope’ of Islam – a position that (unbeknown to them) did not exist; while (unbeknown to them too) the language they used encouraged him to attempt to become ruler of the entire Arab world – though in fact [Ronald Storrs, a leading British bureaucrat] believed that it was a mistake for Hussein to aim at extending his rule at all.”

The relationship between church and state is different in every culture, and is a consequence of its history. However, it is so deeply embedded in cultural consciousness that it goes without saying. More importantly, it goes without thinking. The assumption is revealed – if one is lucky; too often it remains a hidden source of misunderstanding – when someone of another culture behaves in an unexpected way.

Sunday, November 05, 2006

The “marrying out” gene

My father came from good Protestant Afrikaner stock, so it came as a surprise to his family when he married an English-speaking Catholic woman. His children held true to form; even though we grew up in an Afrikaans town, both his sons and his daughter married English-speaking people. In my case, I married an English woman.

There’s evolutionary advantage for children to marry in the same way their parents did – something must’ve worked, since the parents evidently reproduced successfully – but in this case I suspect there’s a more specific trait.

A recessive gene for marrying out could be strengthened by Like Loves Like. When I look at my in-laws, there’s quite a lot of the same behavior at one or even two removes. In my brother’s wife’s family, one sister married an Afrikaner, one married an American, and one a German. My wife’s brother married a Japanese woman; and her brother married a Chinese woman. If there is a genetic basis for xenophilia, there’s a cross-cultural minority that’s doing its best to keep it strong.

Wednesday, November 01, 2006

Giving feels Good – It’s Official

The Economist reports that anonymous benevolence makes people feel good [1]. Researchers at the National Institute of Neurological Disorders and Stroke in Bethesda, Maryland used MRI to explore the neurological basis of charity [2].

They found that a variety of brain centers were involved. Donating money activated the brain center associated with sex, money, food and drugs – the mesolimbic pathway, mediated by dopamine. The warm glow of giving is the same as some other warm glows. Donating also engaged the part of the brain that plays a role in the bonding behaviour between mother and child, mediated by the hormone oxytocin.

Making complex trade-offs where self-interest and moral obligations conflict activated yet a third part of the brain, the anterior prefrontal cortex, which is thought to be unique to humans. It seems that giving makes many animals feel good, but grappling with ethical dilemmas seems to be part of what makes us human.


[1] The joy of giving, Economist, 14 October 2006, at (subscription may be required)

[2] Jorge Moll, Frank Krueger, Roland Zahn, Matteo Pardini, Ricardo de Oliveira-Souza, and Jordan Grafman, “Human fronto–mesolimbic networks guide decisions about charitable donation,” PNAS 2006 103: 15623-15628, at

Monday, October 30, 2006

About 8% of Germans consume 40% of all the alcohol sold in the country

Der Spiegel's Berlin editor Gabor Steingart has a new book on wealth and power that is being excerpted in his paper. On October 26, he deals with a new underclass that is nurtured by the (welfare) state while threatening its future.

An underclass with an income comparable to that of police officers, warehouse workers and taxi drivers seems peculiarly German. I'm not sure to what extent Steingart's concerns would apply to Anglo-Nordic countries, broadly defined. However, his depiction of a proletariat that is impoverished intellectually rather than physically is striking.

The underclass watch a lot of television, consume large amounts of fatty foods and alcohol, and have no interest in education. In a knowledge worker world where welfare states shrink in the face of Asian competitors, the future of the proletariat is bleak - and the risks for social upheaval significant.

Sunday, October 29, 2006

Policy implications of the Cognitive Radio metaphor

Practitioners who use the cognitive radio metaphor understand that radios are not conscious in the ways that humans are; they use the term to inspire research, the results of which then stand on their own merits, separately from the guiding metaphor.

However, policy and business decision makers may not have such an informed and subtle perspective. “Cognitive radio” is such a powerful image that they may be tempted to take the metaphor at face value, and make inferences that reach beyond the bounds of the model.

For example, they may infer that cognitive radio systems will be as flexible and sophisticated as the cognitive system they know best: human beings. This inflates expectations. The “AI winter” of the late 1970s was due, in part, to the dearth of promised results despite substantial funding since the mid-1950s. While one cannot directly compare a new endeavor like cognitive radio with the relatively mature AI field in the Seventies, the lesson of expectation management is an obvious one.

The terminology may also lead decision makers to over-estimate the risks of cognitive radio. The Radios-as-People model not only entails that the devices will be smart, flexible, and intelligent; it also intimates that some will be malicious, devious, malevolent, deceitful, treacherous, untrustworthy, dangerous or destructive. Cognitive radio technology may be unfairly judged to be a severe security risk simply on the basis of the connotations its name evokes.

The “radio” part of the conceptual blend invites regulation where it might otherwise not be contemplated. Regulators have taken a largely hands-off approach to computing. However, governments have been regulating radio for almost a century; they are not only comfortable doing so, many feel it is their duty. While the blend of “cognitive” and “radio” may bring computing’s unregulated regime to wireless, it may conversely bring radio regulation to computing.

“Cognitive” implies informed behavior; cognitive radio thus raises the question of regulating radio behavior. Radio regulation to date has specified parameters so simple that the term “behavior” is scarcely merited. This regulation has applied almost exclusively to transmitters; receiver standards have been rare, and those that exist specify passive parameters like selectivity and spurious rejection [1]. Regulating cognitive radio thus raises challenges for agencies with scant experience with receiver standards, let alone software behavior verification.


[1] National Telecommunications and Information Administration of the U.S. Department Of Commerce (2003) , Receiver Spectrum Standards, Phase 1 – Summary of Research into Existing Standards, NTIA Report 03-404, at

Friday, October 27, 2006

Cognitive Radio as Metaphor

Cognitive radio is the integration of software radio and machine intelligence. The term, coined by Joe Mitola, describes a radio communications paradigm in which a network or individual nodes change communications parameters to adapt to a changing environment that includes both external factors like spectrum usage, and internal factors like user behavior. It blends the promise of artificial intelligence with the excitement of software-defined radio. Science often advances through the exploration of inspired metaphors like this one.

Here are some examples of the language of Cognitive Radio, taken from a review article by Mitola and Maguire [1]:
  • Part of the processor might be put to sleep.
  • The radio and the network could agree to put data bits in the superfluous embedded training sequence.
  • A radio that knows its own internal structure to this degree does not have to wait for a consortium.
  • Cognitive radio, then, matches its internal models to external observations to understand what it means to commute to and from work.
  • A cognitive radio can infer the radio-related implications of a request.
  • The radio also warns the network.

Computing research is often informed by anthropomorphism, which is obvious here. The anthropomorphism fits easily with the dominant mental model of wireless communications, described in Mental models for wireless spectrum:
  • Spectrum as Land
  • Signals as Moving Objects, or as Sound
  • Radios as Sentient Agents
Here are some (lightly edited) examples from Mitola and Maguire, and Haykin [2]:

  • The electromagnetic radio spectrum is a natural resource (Spectrum-as-Resource)
  • The underutilization of the electromagnetic spectrum leads us to think in terms of spectrum holes (Spectrum-as-Resource, Spectrum-as-Space)
  • White spaces, which are free of RF interferers except for ambient noise (Spectrum-as-Space, Signals-as-Sound)
  • The stimuli generated by radio emitters (Radios-as-Agents)
  • In some bands, cognitive radios will simply compete with one another. (Radios-as-Agents)
  • The game board is the radio spectrum with a variety of RF bands, air interfaces, smart antenna patterns, temporal patterns, and spatial location of infrastructure and mobiles. (Spectrum as Space)
It’s a small step from Radios as Sentient Agents to Radios as People. The “cognitive radio” moniker works in part because the groundwork for the concept has been laid by the model of radios as sentient agents.

Cognitive radio technology presumes that networks as well as individual radios have intellective behavior, as in “The network then knows that this user ...” The image of a sentient network does not fit into the existing model as readily as that of a sentient radio. One can thus expect that lay people will not take up the concept of cognitive network as readily. To the extent that the default spectrum/signals/radios model determines the scope of wireless regulation, networks (as distinct from the radios that comprise them) will fall beyond the ambit of regulators.

Cognitive radio thinking has some affinities with another mental model, “Wireless Communications as Internet.” This model is more recent and less well developed than the dominant spectrum/signals/radios model; it has been advanced by Open Spectrum advocates like Benkler, Werbach, and Weinberger [3]. Its salient elements are open standards, information transport, and decentralization of control. Since cognitive radio work focuses on system design, and the figure/ground relationship of devices in spectrum matters less to system architecture than it did in earlier, more passive technologies, the fit with the Wireless-as-Internet model is not surprising.

Even though Cognitive Radio is used by opponents of the traditional approach to argue for a complete rethinking of spectrum regulation, it is premised on the very structures they want to remove. The radio forms a hinge between the old and the new mindsets. The dominant perspective focuses on spectrum, then signals, and finally radios. The “Internet perspective” starts with radios as smart edge devices, and considers how they enable efficient information transport. Spectrum still plays a role as a means of communication, but it is no longer primary.


[1] Mitola, Joseph III, and Gerald Maguire (1999), “Cognitive Radio: Making Software Radios More Personal,” IEEE Personal Communications, August 1999, pp. 13-18

[2] Haykin, Simon (2005), “Cognitive Radio: Brain-Empowered Wireless Communications”, IEEE Journal on Selected Areas in Communications, Vol. 23, No. 2, February 2005, pp. 201-220

[3] See e.g. Yochai Benkler, “Some Economics of Wireless Communications,” (2002); Kevin Werbach, “Supercommons,” (2004); David Weinberger, “Why Open Spectrum Matters,” (2003)

Prize for gratuitous use of a USB port

The USB mug warmer - with the excuse that it's also a hub, to lessen feelings of guilt.


Sunday, October 22, 2006

Phone 2020 - disrupting business models

The combination of flexible use spectrum licenses, software-defined radio, and mesh architecture could be a disruptive change in the next decade:

We’re just at the threshold of the new spectrum licensing regime. We’ve been seeing spectrum prices trend down, and that’s likely to continue, particularly with more spectrum in flexible use. I think few people really understand what impact this regime will have on effective spectrum availability.

SDR combined with lots of places where the radio can operate will blur the boundary between licensed and unlicensed, and between PAN, LAN and WAN services. As far as users are concerned, it’ll be “just wireless”. SDR as such isn’t all that special; multiple cheap radios are already pointing the way. However, SDR will make system integration a little easier – provided that the power/compute challenges are met.

We’re asymptotically approaching the Shannon limit for single-channel communications. However, multi-user information theory is still an active research field. It may turn out that Shepherd, Reed et al. were right that one can create architectures where capacity scales linearly or better with the number of nodes. (The best results I’ve seen so far scale with the square root of the number of nodes, so per-node capacity still decreases.) If this dream comes true, capacity per unit bandwidth (bits per hertz) could be much higher than current prudent planning assumes.
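The arithmetic behind that parenthesis is worth making explicit: if total network capacity grows like the square root of the number of nodes, each node’s share shrinks as the network grows, whereas linear-or-better total scaling keeps every node’s share constant. A toy calculation, in arbitrary capacity units with no real radio parameters:

```python
# Toy comparison of mesh capacity scaling laws (arbitrary units).
# sqrt-of-n total capacity means per-node capacity falls as 1/sqrt(n);
# linear total capacity keeps each node's share constant.
import math

def per_node(n, total):
    """Each node's share of total network capacity."""
    return total(n) / n

sqrt_total = math.sqrt    # best published scaling, per the text above
linear_total = float      # the linear-scaling hope

for n in (16, 100, 10000):
    print(f"n={n:>5}: sqrt-law share {per_node(n, sqrt_total):.4f}, "
          f"linear-law share {per_node(n, linear_total):.1f}")
```

Under the square-root law, growing a mesh from 100 to 10,000 nodes cuts each node’s capacity tenfold; under linear scaling it stays flat, which is why the difference matters so much for the cellco scenario below.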

If you add these three together, the business of cellcos changes dramatically. Spectrum scarcity as an input constraint will be considerably weakened, as will the cellcos’ balance sheets – typically half the value of such companies is their spectrum licenses. If cellcos are to survive in this scenario, they’ll have to change their business; for comparison, once anyone could access airline ticket databases, travel agents could no longer live on commissions alone.

The change is analogous to the experience with watches. A hundred years ago, one paid for a watch in order to buy a timekeeper. Nowadays, we pay either for style (luxury watches), or for time management (calendar software, personal organizers).

Today we still buy communications when we pay for a cellphone; in fact, we’re buying the right to use a sliver of spectrum. In the future we’ll either buy style (designer phones) or comms management.

Communications management is required at various network layers. To simplify greatly: we need comms management at the transport layer and at the experience layer.

Transport management is a future business for cellcos: seamlessly combining personal, local and wide area networks into a cheap, high quality communications service. T-Mobile’s combined offer of telephony over Wi-Fi and cellular networks is an early incarnation of such a service.

Experience management is the business of content portals. MySpace is today’s paradigmatic example. Teens use text messaging and MySpace to stay in touch – not email. Places like MySpace provide the social structure which anchors the concentric conversations (1-1, communal, ambient) that will replace phone calls.

One can describe a comms management provider strategy as a “tuple play,” by analogy to the “triple play” and “quadruple play” in consumer broadband access. The goal is to integrate multiple communications services (the tuple) into a single product.

Thursday, October 19, 2006

Phone 2020 - Trends and Constraints

The phone of a decade from now will not just be shaped by usage scenarios; the technical constraints also matter. Some are more stringent than others. Let's decompose a phone into its main hardware components: processing, storage, interface, connectivity and battery.

Storage is the least constrained component. Memory capacity is growing faster than Moore’s Law. The storage in the phone is essentially unlimited. The video comm-pod is just the start; we’ll be carrying around most of the data we need with us for the unlikely event that we’re not connected.

Compute power is also not much constrained. The key issue will be power consumption (see batteries, below). However, it’s reasonable to assume that Phone 2020 will offer all the fancy graphics and natural language computation that algorithms can deliver (for example, speech synthesis that reads messages in the voice of Winston Churchill). Multiple radios and software defined radio (SDR) pose power consumption challenges, too.

The small size of the phone poses some user interface challenges. Audio and video input/output aren’t a problem. The standing challenge is quiet message input. Small keyboards are the current best bet, but they’re only marginally useful. Projected or roll-up keyboards don’t offer the tactile feedback required for efficient typing. Chorded input is the obvious solution – if it weren’t for the fact that chord keyboards have failed in the market time after time. Designers will try all sorts of other whacky options in coming years, from sub-vocalization to reading brainwaves.

Data connectivity is limited by the spectrum available, channel use efficiency, and electrical power. Ultra wideband will provide very fast short-range communication; system capacity for ranges beyond one’s personal space is more debatable. We are approaching the Shannon limit for single channel communication (cf. turbo and LDPC codes; MediaFLO uses turbo codes), but the wild card is multi-user channels, that is, mesh networks. It boils down to the question: is spectrum scarce? Since technology is driving up both data rates and user demand, the answer is up in the air. If spectrum capacity becomes a non-issue, we’ll see a lot of person-to-person live high definition video, and cellular companies as we know them today will disappear.

Battery technology is the biggest obstacle to size reduction and strange new form factors. Reducing battery size while maintaining charge increases energy density; so does keeping size constant while increasing the charge (and thus battery life). Sony’s exploding batteries show how close to the hairy edge we are at the moment. If we can somehow break through the energy density barrier the world will look very different in ten years: tiny wireless earphones that go for weeks on a charge, self-winding phones, paper-thin devices, and even long-range wireless power supplies.

Monday, October 16, 2006

Phone 2020

Kenn Cukier challenged me to think about the future of the mobile phone 10-15 years (gulp) from now.

Phones then will be as diverse as timepieces today. At the one end of the range there will be “statement” phones that fill the same niche as luxury watches. They will be beautifully designed and fiendishly expensive. At the other end, just about every device with a battery will be wirelessly connected, just as every electronic device today will tell you the time.

Watches have changed our sense of time. In the (very) old days, community bells marked a few key moments in a day. The railways and telegraph created a world of scheduled communal events. Today, personal organizers chivvy us from one meeting to the next, and professionals on billable hours slice their day into fifteen minute slivers.

Concentric conversations

Phones are changing our sense of being in touch. Like watches, it’s a cumulative process; every successive communications technology has connected us more efficiently to people who are not next to us. By the time of the 2020 Phone we’ll have a much more richly textured sense of being connected to individuals, to groups, and to the world. We’ll be engaged in layers of conversation: foreground chats with one or more people; the background hum and buzz of the social groups we’re tracking, using an evolved form of the in-game voice chat that’s common in on-line multi-player games; and changes in the phone’s shape and color giving us ambient clues to what’s going on in our world.

Google and Wikipedia changed my sense of what’s knowable about the world: I feel a little loss when I’m not connected, because I can’t instantly find the answer to a question the world poses to me. Blogs and social networking sites are changing our sense of the knowability of the social world; by 2020, we’ll have this with us all the time, and we’ll have a deeper knowing of our environment – physical, intellectual, and social.

Overlapping rings of always-on conversations change the notion of a “phone call.” The 2020 Phone will be a means to plug into multiple concurrent conversations, and bring individual threads into focus when we need them.

Form factors

Phones will look and feel different, though they’ll still be hand-sized:

New materials will allow them to fold up small but expand when we need them, like the 2secondtent.

Phones will fragment into a constellation of wireless objects. Headsets are already disconnected. Visual output will be routed to whatever screen’s available – perhaps the video screens coming soon to a restaurant table near you.

Earbuds will become almost invisible; they’ll be wireless themselves, of course, and will have noise cancellation built in. They’ll layer our sonic social world over the world just outside our ear.
If people still wear glasses, they’ll contain phones and use the lenses as display devices.

Phones will glow, vibrate, and change their texture, shape and color as ways to signal to the user, and for the user to signal to others – the phone as a chameleon crossed with a coal-mine canary. Ambient offers a variety of products, and a chipset other manufacturers can incorporate into their products. The Nabaztag rabbit changes color and moves its ears depending on what’s happening remotely. In the end, we get to “phoneskin signaling.”

Some clothing will provide inputs for a brain UI: hats; sweatbands; do-rags, perhaps. Today’s versions are not fashion statements, but we’ll get there… They also take 5-10 minutes to write a sentence; with a decade’s development, though, we’ll be able to compose Crackberry messages without being so obvious about it.

Phones as sensors

Perhaps the biggest change in phones is that they will become networked sensors. Phones already sense their geographical location. Going beyond this:
  • Microsoft researchers have turned phones into barcode scanners that connect to databases of personal items; RFID integration is an obvious next step
  • IntelliOne has developed a system that detects traffic jams by monitoring the signals from cellphones in cars
  • The PigeonBlog project has fitted birds with GPS sensors, air pollution sensors and a basic cellphone to measure air quality in California. If birds can do it, why not humans?
  • Michael Reilly describes how cheap sensors are turning pollution monitoring into an activity anyone can take part in. In general, phones will be used to measure any situation and integrate the results from thousands of other devices; not just pollution and traffic, but micro-climate, crowding, noise level, and smells. The whole will be greater than the parts because of collective data gathering; cf. digg and its ilk.
  • Personal medical sensors will connect to medical advisors via the phone.

Commercial uses

The applications described so far are consumer-centered. However, commercial uses will be just as pervasive, though less visible. Any connected device will be able to go wireless. Shopping trolleys will keep a running total of the goods deposited in them, and prepare a bill for when you leave the store. Many products will have call-home functionality built in, funded by the manufacturers; they’ll want to know how their product is used, and when it needs to be serviced or replaced. (Yes, this could be a privacy issue when it’s a consumer product, but only the privacy advocates will care. Ordinary people only worry about privacy when it touches their wallets, as in identity theft.)

Tuesday, October 10, 2006

Translating between Marxese and AdamSmithish

When academic sociologists and policy types get together, socialist jargon is a given. Ironically, the way they describe their research topics has structural similarities to the way capitalists use free-market clichés to describe their problems.

I attended a workshop on “The Global Rise of Horizontal Communication: Social Networks, Civil Society and The Media” last week. It was organized by ARNIC and held at the Annenberg Center at USC. Terms like social movement, power, collective processes, media systems and mobilization were rife.

However, with a bit of translation it sounded just like a business conference. Case studies were the order of the day – research focused on describing what was happening, with little attention to underlying Why questions – and the transformative power of technology was taken for granted.

The key to the translation was this mapping from left-wing sociology to right-wing commerce:

The setting and mechanism: social movements → markets
The desired outcome: social change → economic growth
The key action: appropriation → innovation
The hero: activists → entrepreneurs

With this handy guide, visitors from each world can learn a lot from the other. Social change activists could benefit from the clear statements of goals and metrics that are de rigueur in business. On the other hand, businesses could learn from the focus on inclusivity and grass-roots action espoused by communications theorists.

Wednesday, September 27, 2006

A little Basho

Sam Hamill co-authored one of my favorite books, The Essential Chuang Tsu. S. found another of his translations at the library, and it’s just as good: Narrow Road to the Interior, which is an “essential Basho”.

Matsuo Basho is one of the great exponents of haiku. I hadn’t realized, until I read this work, that his poetry was grounded in his travels. Basho's ability to evoke a place and the traveler’s response to it reminds me of my favorite travel book: Robert Byron’s The Road to Oxiana.

Here’s an excerpt:

Set out to see the Murder Stone, Sessho-seki, on a borrowed horse, and the man leading it asked for a poem, “Something beautiful, please.”

The horse turns his head –
from across the wide plain,
a cuckoo’s cry

Sessho-seki lies in dark mountain shadow near a hot springs emitting bad gases. Dead bees and butterflies cover the sand.

Sunday, September 10, 2006

Is meditation maladaptive?

All people seek happiness, and try to avoid anxiety and pain. It’s an elusive goal. For many, the solution is changing the ingrained habit patterns of the mind.

“All meditative traditions, whatever the differences in underlying belief systems and in specific techniques, agree in one essential respect: the cause of the dissatisfaction, anxiety and suffering which seem to be inseparable from our lives lies in a basic misinterpretation of the true nature of existence, a misinterpretation which clouds our perception of the actual facts, in consequence of which we persist in futile attempts to pursue and secure things (such as health, riches, happiness and so on) which are, by their very nature, ephemeral or unattainable.”

--- Amadeo Solé-Leris, “Tranquillity & Insight,” Buddhist Publication Society 1992, p. 9

The Buddha, like many other spiritual guides, contends that unhappiness is rooted in endless unsatisfiable craving: for things we want but cannot have, and for things we do not want but cannot avoid. This craving is shared by all beings, and happiness can only be found by casting off this desire.

Doing so is, at best, very very hard. At worst, it’s a losing battle. After all, if all creatures are driven by craving, it must be a very adaptive behavior in evolutionary terms. It’s a good default for any creature to strive for more than it has, and to avoid what is unpleasant with all its might. If you do this you won’t be happy, but your genes will prosper. If craving and aversion are built in by evolution, then trying to switch them off seems maladaptive (not to mention futile).

Still, meditation traditions have themselves survived cultural evolution; there must be some benefit to their practices. Perhaps society has progressed to the point where it is safe enough – that is, humans are powerful and wealthy enough – to benefit from reducing craving and aversion. The swelling of a twisted ankle must be evolutionarily adaptive; and yet, athletes are advised to ice their injuries in order to accelerate recovery. The over-reaction of the immune system in allergies makes sense as a general-purpose response, but it is not adaptive in spring-time, and we use drugs to mitigate it. Likewise, meditation looks like another technology that humans have developed to improve their lot as culture has lessened the threats posed by nature.

Tuesday, August 08, 2006

AOL driveby haiku

I feel like a gawker crawling past a traffic incident... CNET has an excellent collection of excerpts from the AOL search log repository. They read like poetry. I’ve assembled some “found haiku” out of them.

User 1515830

chai tea calories
divorce laws in ohio
curtains; i hate men

User 4331025

wastewater jobs mass
revenge for a cheating spouse
first date dos and donts

User 100906

should you call your ex
hes just not that into u
addicted to love

User 3544012

harley performance cafe
circumsize pictures

User 591476

how to stop bingeing
pregnancy on birth control
how to starve yourself

I’m almost as fascinated by what the searches reveal about people’s attitudes to the technology as by what they tell us about their lives. While many users just type in keywords, from time to time they let their guard down. It almost sounds as if they’re looking for someone to confide in. For example, user 1515830 types in some crisp searches like “chai tea calories” and “curtains”, but also lets slip “can you adopt after a suicide attempt” and “i hate men”. Eliza, where are you when we need you?

Monday, August 07, 2006

Not your father’s paper

1985 doesn’t feel all that far away, if you’re beyond a certain age. Ronald Reagan started his second term, the hole in the ozone layer was discovered, and Rock Hudson died of AIDS. And yet, judging by newspaper reading habits, we’re now living in a completely different world.

In 1985, 45 percent of newspaper readers spent some or a lot of time reading about TV/movie/entertainment schedules; it’s down to 29 percent today.

In that year, 44 percent read the business and financial news; it’s grown to 60 percent.

I was most struck by two topics that didn’t even appear on the list in 1985. Nowadays, 63 percent read articles on technology, and 77 percent follow health and medicine topics. Technology, health and medicine are so much part of “our modern world” that it’s hard to imagine that they were barely covered a mere twenty years ago.

Source: A Pew Research Center study on the changing news landscape

Saturday, August 05, 2006

Science vs. religion

Chris Davis brought me up short in a letter to the editors of New Scientist (29 July 2006). He points out that one of my basic assumptions – that science and religion can co-exist amicably – may be a convenient fiction.
“[S]cience and religion tenaciously pretend - at least when in each other's company - that they "respect" each other. In recent times this nonsense has started to dissipate, and the camps are becoming more honest about their mutual antipathy.

“And they are right to be so. Both science and religion claim superiority in the fundamental search for truth and the nature of reality. They encroach absolutely on each other's territory, as they battle for the minds of the populace. There is no reason to be abusive to each other, but to deny that a conflict exists at all is naive, and confusing for honest seekers after truth encountering the matter for the first time.”

Descartes crafted the entente between science and religion. The dualist assumption of two non-interacting worlds meant that scientists and priests could each have their own domain: body/mind, things/souls, physics/metaphysics. This carved out room for science to flourish unencumbered by the authority of the Church.

The Intelligent Design debate may signal a return to the struggles of the 17th Century, such as those over the ideas of Spinoza. It’s a fight for hearts and minds, the stuff of politics and propaganda. Davis’s closing line reveals why school textbooks are the battleground: “It is these undecideds and newcomers - especially children - to whom both sides owe honesty.”

Wednesday, August 02, 2006

No big bang

Just after I post something on Vista, I (belatedly) read a Ballmer comment on the topic. From InformationWeek:
Microsoft made one big, wrong decision that led to Vista's delays, Microsoft CEO Steve Ballmer told financial analysts during his meeting with them last week. The company took a Big Bang approach and tried to overhaul all of its operating system's core components simultaneously, an approach that eventually led to a fiery development crash. "We made an upfront decision that was, I'll say, incredibly strategic and brilliant and wise -- and was not implementable," Ballmer said. "We tried to incubate too many new innovations and integrate them simultaneously, as opposed to letting them bake and then integrating them, which is essentially where we wound up."

In the heyday of "Integrated Innovation" I told anyone who would listen that it was a misguided strategy, and that the company should "Innovate, then Integrate". Just a pity I wasn't able to persuade the people who mattered.

Of course, they may still not get it. It's unnerving when a CEO believes that a decision that was not implementable could still be "brilliant and wise".

An endless vista

After yesterday’s bad news about Microsoft’s next operating system (Vista testers to Microsoft: Even the bugs aren't stable yet), I’m beginning to think the unthinkable: perhaps Vista will never ship.

Oh sure, something called Vista will ship next year; marketing and licensing imperatives demand it. And sure, it will be but a pale shadow of the vision when the project started; no product ever makes it out the door with all the intended features. I’m beginning to doubt, however, that the platform that ships will be what Microsoft needs it to be. It could become the Big Dig of Redmond: even if it is eventually finished, at vast cost over-runs, major flaws will continue to appear throughout its life.

There was relief at senior levels in Microsoft when Windows XP shipped. It wasn’t clear, even then, whether a substantial upgrade to such a complex product could be accomplished. Many years have passed since then, and many demands have been added to the wish-list. The code base is a hairball, the result as much of business decisions to lock in customers and preclude anti-trust action as of engineering philosophy. It is so large, interconnected, and poorly documented that any upgrade is a mind-boggling feat.

The important question, though, is not whether and how Vista ships; it’s what happens next. My impression is that the Windows NT code base, as built, is too complex to be the basis for substantial growth. However, there is no alternative in the wings: the NT code base, on which XP and Vista are built, has run out of steam sooner than planned, and there is no code base that is to NT what NT was to Windows 95.

The catch is that Microsoft doesn’t have the means to start from scratch, even if it had the time. It doesn’t have the resources to fight a two-front war. Wall Street balked at the investment that it’s making to match Google; it cannot now spend yet another $2 billion to replace its core operating system.

The twilight of the operating system as the engine for innovation may come sooner than I expected. The company may quietly decide – Steven Sinofsky and Steve Ballmer may already have decided – that future OS upgrades will be incremental rather than substantial. Attention is shifting to the network, and hosted applications. Microsoft will be fighting on Google’s ground.

Monday, July 31, 2006

Mental models for wireless spectrum

Technologist David Reed once said, “There's no scarcity of spectrum any more than there's a scarcity of the color green.” [1] This quip presumes the technically correct meaning of spectrum as a range of vibration frequencies of electromagnetic waves. However, it’s clear that when most people talk about spectrum, they don’t mean a vibration frequency. What do they mean?

I've been working through examples of how lay people conceive of spectrum, and talk about spectrum policy. An early draft (Word doc; source data in spreadsheet form) explains why some policies and proposals make sense to us, and others don’t. I’m not making claims about how experts think, though I suspect that the metaphors I’ll describe are at the root of their thinking, too.

In summary, spectrum is conceived of as a spatial resource, with two common variants: spectrum as a set of containers (bands), and spectrum as land. There are two common mental models of wireless signals: as objects moving through space, and as sounds, particularly speech. This leads to two mental models for interference, which entail conflicting, and sometimes incorrect, deductions.

Spectrum, as the concept is treated by regulators and politicians, is a resource used for communication which is, in the first instance, under state control. Its assignment is thus the stuff of politics, that is, arguments over the distribution of scarce resources. The spectrum-as-land model is “natural” to most people because the underlying spatial metaphor, of real estate in particular, fits our notion of land resources.

The results of this analysis can be used to identify policy-making pitfalls. For example, Hatfield & Weiser [2] explain why the transition to a property rights model for spectrum is far more complex than commonly portrayed; this work hopes to explain why a model of real property rights is attractive in the first place.


[1] Quoted by David Weinberger in The myth of interference, Salon 12 March 2003. Curiously, if one considers electromagnetic radiation in optical fiber, there is indeed “a scarcity of the color green” because each fiber supports a finite number of wavelengths. For example, links in the National LambdaRail network use dense wavelength-division multiplexing (DWDM), which allows up to 32 or 40 individual optical wavelengths to be used (depending on hardware configuration at each end). Once those wavelengths are occupied, no more are available.

[2] Hatfield, Dale N and Philip J Weiser (2006), “Property Rights in Spectrum: Taking the Next Step,” University of Colorado Law School, Paper Number 06-20, June 2006.

Sunday, July 16, 2006

Ted’s Tubes and Larry’s Lanes

The digerati are having a good snigger at Ted Stevens, Chairman of the US Senate committee that’s deciding how the Internet will be regulated. The Daily Show recently lampooned (video clip) his attempts at explaining network neutrality. Senator Stevens said, to much derisive merriment among the net-savvy studio audience:
“The Internet is not something that you just dump something on. It’s not a big truck. It’s, it’s a series of tubes. [...] And if you don’t understand that those tubes can be filled, and if they’re filled when you put your message in it, it gets in line, it’s gonna be delayed by anyone who puts in that tube enormous amounts of material, enormous amounts of material.”

It was amusing because homespun analogies seem out of place coming from a person deciding our high tech future. Ted got a bad rap, though, because no-one can avoid this kind of language. Cognitive science suggests that we have no choice but to use mental models based on the tangible world to reason about intangible things like interpersonal relationships (“we’ve been close for years, but we’re drifting apart”), mathematical abstractions (“the real numbers are points on a line”), and the Internet.

Everyone in this debate misuses metaphor, including Larry Lessig, the house theoretician for network neutrality. An op-ed he wrote for the Washington Post last month with Robert McChesney was premised on an extended (and well-worn) metaphor: the Internet-as-Highway. Some excerpts:

“Congress [will decide whether cable and phone companies] can put toll booths at every on-ramp and exit on the information superhighway. [...] Net neutrality means simply that all like Internet content must be treated alike and move at the same speed over the network. [... Those companies] would be able to sell access to the express lane to deep-pocketed corporations and relegate everyone else to the digital equivalent of a winding dirt road.”

Sen. Stevens uses the Internet-as-Pipes metaphor, and Prof. Lessig prefers Internet-as-Roads. There’s little to choose between them. Both convey some truth, and both have shortcomings.

The superhighway metaphor is inaccurate in that the networks making up the Internet are owned by private agents, whereas most of the highway network is owned by the state. The notion of “speed” is also technically inaccurate: all packets on the Internet move at the same speed (some fraction of the speed of light). More packets get through per second on some parts of the network, but only because “there are more parallel lanes”, not because the packets speed along faster. The Highway metaphor also implies that two distinct Internets will be created side-by-side (echoes of Separate But (Un)equal), whereas in fact all traffic will move over the same infrastructure, but be prioritized differently.
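The distinction between “speed” and capacity can be put in numbers. A minimal sketch (the distance, file size, and link rates here are invented purely for illustration):

```python
# Delivery time = propagation delay + serialization time.
# Bits on both links "travel" at the same speed (~2/3 c in fiber);
# the fat link only wins by pushing more bits per second in parallel.

PROPAGATION_SPEED = 2e8  # meters/second, roughly the speed of light in fiber

def transfer_time(distance_m, size_bits, bandwidth_bps):
    """Seconds to deliver size_bits over distance_m at bandwidth_bps."""
    return distance_m / PROPAGATION_SPEED + size_bits / bandwidth_bps

# A 10 MB file sent coast to coast (~4,000 km) over two links:
size = 10 * 8 * 1_000_000  # 80 million bits
slow = transfer_time(4_000_000, size, 1_000_000)    # 1 Mbps "dirt road"
fast = transfer_time(4_000_000, size, 100_000_000)  # 100 Mbps "express lane"

print(f"1 Mbps:   {slow:.2f} s")   # ~80 s, dominated by serialization
print(f"100 Mbps: {fast:.2f} s")   # ~0.82 s
```

Both transfers see the same 20 ms propagation delay; the “express lane” merely serializes the bits a hundred times faster.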

The choice of analogy has consequences, though. One could mangle poor McLuhan again by saying that the Metaphor is the Message. We think of the highways as a public good, provided by the state for the benefit of all, where everyone is entitled to equal access. This resonates with those on the Left, like Prof. Lessig. The ownership of tubes and pipes varies, but is open to notions of private property and investment, which is agreeable to those on the Right, like Sen. Stevens. Plumbing is invisible, and can safely be left to experts to worry about, whereas roads are something we all feel we have a daily stake in.

The highway metaphor is the more common one in the debate to date. It is perhaps more intelligible; we have more experience with roads than plumbing. However, one shouldn’t forget that it was forged during the last rewrite of the telecom act, which happened during a Democratic administration. I wouldn’t be surprised to see the Right experimenting with alternative metaphors which are better at connoting their values and agenda – plumbing, perhaps, or airlines.

Wednesday, July 12, 2006

Follow the clicks

Spyware is the current BusinessWeek cover story, The Plot to Hijack Your Computer. The technology is now officially mainstream.

Spyware, or “adware” in the terminology of its proponents, figures out our preferences by tracking what we do on the web, and then presents us with tailored pop-up ads. It got a bad name because the software often installs without a user knowing about it, monitors user behavior and relays it back to base, and sometimes disables PCs in the course of trying to disable competing spyware programs.

However, spyware/adware is part of the future of personal computing because it’s a way to make the dream of “ad supported software” come true. BusinessWeek reports that a company with access to 10 million computers can make about $100,000 a day; that’s 1c/day/computer, or about $3.65/year. According to Om Malik writing for Business 2.0 magazine, Google makes around $16 per user per year in advertising; another $3.65 would be an increase of more than 20%.
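The back-of-the-envelope arithmetic is easy to check, using only the figures quoted above:

```python
# Adware revenue per computer, from the BusinessWeek figures.
daily_total = 100_000   # dollars/day across the installed base
computers = 10_000_000

per_computer_daily = daily_total / computers      # $0.01/day
per_computer_yearly = per_computer_daily * 365    # ~$3.65/year

google_per_user = 16                              # $/user/year (Om Malik)
uplift = per_computer_yearly / google_per_user    # ~23% increase

print(f"${per_computer_daily:.2f}/day, ${per_computer_yearly:.2f}/year")
print(f"Relative to Google's per-user ad revenue: {uplift:.0%}")
```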

Spyware will be tamed over the next few years, and its technologies incorporated into Yahooglesoft products. If Yahoo, Google and Microsoft were as savvy about regulatory politics as the phone companies, they’d be in Washington DC and Brussels right now trying to craft safe harbor regulations which would allow them to take this technology into the mainstream while marginalizing the cowboys. And consumers will probably lap it up: they don’t like being spied on, but they dislike paying for stuff even more.

Friday, June 30, 2006

A connectivity compromise

The effort to prevent US consumer broadband providers from charging anyone except end-users for improved quality of service has stalled in the Senate. Andrew Orlowski skewers the paranoia of the Neutralistas as only the Register can. He writes:

“Rather than confront the underlying, and very real problems it seeks to redress, the blogging wing of the US left has instead created an alternative cyber-reality - populated by phantom demons, imaginary conspiracies, and bogeymen. [...] The immediate consequence of the focus on "Neutrality" has been to permit the cable lobby to write the most anti-competitive bill for thirty years. Perhaps they knew the bloggers were only playing a game, and wouldn't think to look at the rest of the legislation.”

People may at last be in a mood for a compromise. Here’s one: wireline network operators may not block traffic but they can prioritize it, as long as any content provider can buy prioritized access on equal terms. The conditions can be lifted if true competition in consumer broadband materializes.

The situation

There is fear on both sides:

  • The content community fears that the network operators could use their market power to integrate vertically, lock out new entrants, and extract rents.

  • The operator community fears that anti-competition rules will have unintended consequences that suppress their profitability below sustainable levels.
One can address both sets of fears by recognizing that market power should be mitigated, while taking into account that competition in last-mile broadband would reduce the need for such actions.

A solution: the Open Offer Internet

I start with the premise that there is insufficient competition in last mile high speed broadband networks, and that this concentration is likely to suppress innovation and raise prices, thus decreasing consumer welfare. This situation justifies the imposition of "Open Offer" conditions on both telephone and cable companies that offer broadband access:

1. No traffic blocking; all sites to be accessible to consumers

2. The operator can enter into arrangements with 3rd parties to improve content delivery, but this offer should be available on reasonable and non-discriminatory terms (taking into account discounts etc.) to all comers.

3. Operators shall interconnect with all other broadband networks on reasonable and non-discriminatory terms.

This is not a perpetual mandate, and can change if the competitive situation improves. There would be a review every few years:

• The FCC reports on compliance with the Open Offer terms. The FCC can get access to confidential company information to make this assessment, but may not make such information public.

• Operators can request that the Open Offer conditions on them be lifted, if they can prove that the markets they operate in are all competitive.

• The FCC can (re)impose Open Offer conditions on operators if it sees anti-competitive behavior.


Operators may offer tiered services to consumers if they wish.

I don’t use the FCC definition of broadband; saying that anything faster than 200kbps is broadband is just silly. Today, “high speed broadband” effectively means speeds faster than 2 Mbps. This will always be a moving target, so it’s better to define it in relative terms. For example: define the threshold of high speed broadband as the lowest speed provided to the top 20% of homes.
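A relative definition like that is easy to operationalize: the threshold is simply the 80th percentile of home connection speeds. A minimal sketch, with a hypothetical ten-home survey standing in for real data:

```python
# "Lowest speed provided to the top 20% of homes" = the slowest speed
# among the fastest fifth of the distribution of home connection speeds.

def high_speed_threshold(speeds_mbps, top_fraction=0.2):
    """Return the slowest speed among the fastest top_fraction of homes."""
    ranked = sorted(speeds_mbps, reverse=True)
    cutoff = max(1, int(len(ranked) * top_fraction))
    return ranked[cutoff - 1]

# Hypothetical survey of ten homes (Mbps):
sample = [0.2, 0.5, 1, 1.5, 2, 3, 4, 6, 8, 10]
print(high_speed_threshold(sample))  # -> 8: the top 20% of homes get 8 Mbps or more
```

Defined this way, the threshold ratchets upward automatically as the market improves, with no need to revisit a fixed number like 200 kbps.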

Thursday, June 29, 2006

Useful self-delusion

Susan Stamberg’s interview with Lawrence Summers [1] puts on display the kind of person who cannot conceive of personal failure. I don’t think he’s denying responsibility for his failure; he is simply unable to see it.

This characteristic is so common among leaders that it’s probably a requirement for success. Such people inspire loyalty just because they always see the bright side of every situation. They can persuade others that they’re on the side of right because they believe themselves to be so. When something goes well, it must be because of their actions; when something goes wrong, it must be someone else’s fault.

Their motivational ability follows from an inability to see their own flaws. In a sense one can’t blame them for not taking responsibility; personal failure is just not the truth as they see it.

Jeff Skilling of Enron fame is another recent example. A Wall Street Journal article [2] reports that Skilling believed that if he just told the "real" story of Enron, he’d be in no danger. This led him to provide prosecutors with pieces of information that they effectively used against him at the trial. He didn’t believe then, and doesn’t believe now, that he committed any crimes, even though a Houston jury convicted him of 19 counts, including conspiracy, fraud and insider trading.

The “little people” find it hypocritical when leaders insist that employees take responsibility for their actions, but then don’t hold themselves accountable. The beam in the CEO’s eye doesn’t prevent him from seeing the splinter in everyone else’s... But be gentle; how can it be hypocritical if the Chief Ego Officer truthfully doesn’t believe that they’ve done anything wrong?


[1] NPR Morning Edition, “Summers Looks Back at Harvard Presidency,” 29 June 2006

[2] John Emshwiller, The Wall Street Journal, “In New Interview, Skilling Says He Hurt Case by Speaking Up,” 17 June 2006