Saturday, December 31, 2005

Knowledge as a factor of production

Jefferson's crop rotation
According to elementary economics texts, the raw material for any productive activity can be put in one of three categories: land (raw materials, in general), labor, and capital. Some economists mention entrepreneurship as a fourth factor – but none talk about knowledge. This is strange, since know-how is the key determinant of the most important kind of output: increased production. Still, it’s not that strange, since knowledge has unusual properties: there is no metric for it, and one can’t calculate a monetary rate for it (cf. $/acre for land).

An example from agriculture

Imagine that you are a crop farmer. Your inputs are land and other raw materials like fertilizer and seed; your labor in planting, cultivating and harvesting the crop; and money you’ve borrowed from the bank to pay for your tractor. You can increase output by increasing any of these factors: cultivating more land, working more hours, or borrowing money to buy a better tractor or better seed.

However, you can also increase output through know-how. For example, you might discover that your land is better suited to one kind of corn rather than another. You could make a more substantial improvement in output if you changed your practices, for example by implementing crop rotation. Farmers in Europe had practiced a three-year rotation since the Middle Ages: rye or winter wheat, followed by spring oats or barley, then letting the soil rest (fallow) during the third year. Four-field rotation (wheat, barley, turnips, and clover; no fallow) was a key development in the British Agricultural Revolution in the 18th Century. This system removed the need for a fallow period and allowed livestock to be bred year-round. (I suspect that if four-crop rotation were invented today, it would be eligible for a business process patent.)

Most of the increases in our material well-being have come about through innovation, that is, the application of knowledge. How is it, then, that knowledge as a factor of production gets such a cursory treatment in traditional economics?

Measuring Knowledge

A key difficulty is that knowledge is easy to describe but very hard to measure. One can talk about uses of knowledge, but I have so far found no simple metric.

It’s even hard to measure information content. There are many different perspectives, such as: library science (eg a user-centered measure of information); information theory (measuring data channel capacity); and algorithmic complexity (eg Kolmogorov complexity). All give different results.
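To make the disagreement concrete, here’s a toy sketch (the functions and test strings are mine, purely for illustration) comparing two computable stand-ins for information content: the Shannon entropy of a string’s character distribution, and its zlib-compressed size as a crude proxy for Kolmogorov complexity. Per-character entropy sees only symbol frequencies; compression also sees repeated structure, so the two numbers don’t track each other.

```python
import math
import zlib
from collections import Counter

def entropy_bits(s: str) -> float:
    """Shannon entropy of the character distribution, in bits per character."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def compressed_bytes(s: str) -> int:
    """Size of the zlib-compressed string: a rough stand-in for Kolmogorov complexity."""
    return len(zlib.compress(s.encode("utf-8")))

# Two ~1000-character strings with very different structure.
repetitive = "abab" * 250
pangram = "the quick brown fox jumps over the lazy dog " * 22

# The repeated pangram scores high on per-character entropy (many distinct
# symbols), yet compresses almost as well as the "abab" string, because the
# redundancy lives in repeated phrases rather than in symbol frequencies.
for label, s in [("repetitive", repetitive), ("pangram x22", pangram)]:
    print(f"{label}: {entropy_bits(s):.2f} bits/char, {compressed_bytes(s)} bytes compressed")
```

Neither number is “the” information content; they answer different questions, which is exactly the problem.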

One can always, of course, argue that money is the ultimate metric: the knowledge value of something is what someone will pay for it. However, this is true for anything, including all the other factors of production. The difference is that land, labor and capital all have an underlying “objective” measure. One cannot calculate a $/something rate for knowledge in the way one can for the other three.

Let’s say land is measured in acres, labor in hours, and money in dollars. You’ll pay me so much per acre of land, so much per hour of labor, and so many cents of interest per dollar I loan you. Land in different locations, labor of different kinds, and loans of different risks will earn different payment rates.

Knowledge does have some value when it’s sold, e.g. when a patent is licensed or when a list of customer names is valued on a balance sheet. However, there’s no rate, no “$/something” for the knowledge purchased. That suggests that the underlying concept is indefinite. It is perhaps so indefinite that we are fooling ourselves by even imagining that it exists.

Treating knowledge as a physical object

The Knowing Is Seeing metaphor is pervasive and intuitive, and an essential part of our philosophical toolkit via Descartes’ work (see Ch.19 of Lakoff and Johnson’s Philosophy in the Flesh (1999) for a detailed discussion). The metaphor treats an Idea as an Object Seen, Knowing as Seeing, Knowing an Idea as Seeing an Object, etc. The catch is that the objects that we think with in this metaphor are, necessarily, objects – and physical objects don’t behave at all like ideas. Most importantly (regular readers will have seen this coming a mile off, and winced at the prospect of the deceased pony being thrashed again), objects are rival, and ideas are non-rival.

This little problem has always been present in economics, but it hasn’t been critical since knowledge has always been wrapped in stuff. Until the advent of software packages like TurboTax, one bought expert advice by the hour from a person. The advice was intangible and non-rivalrous, but its carrier wasn’t; if I was using the accountant’s time, you were getting less of it. However, knowledge embodied as software is “doubly non-rival”; not only is its knowledge content non-rival, but the software itself is too: my use of TurboTax doesn’t diminish your ability to use your copy.

The bottom line

As the economy is increasingly built out of knowledge, and as the absolute cost of physical goods continues to drop, we are effectively sloughing the husk of stuff off the knowledge we depend on. Managing our way into the future effectively is forcing us to think more keenly about knowledge as a factor of production.

To use ten-dollar words: the intersection of epistemology and economics is a necessary and fruitful area of research as the knowledge economy grows in importance.

Friday, December 30, 2005

Join with your opponent

My tai chi teacher Yang Jun said last night that you should ‘join with your opponent’ to respond to a punch. He showed how hard it is to meet a punch head-on; your timing has to be very good, and you have to yield in just the right way to absorb the force and turn it away without hurting yourself. (He could do it easily, of course.) It’s better to swing your arm down across the direction of the strike, like a propeller in front of your body. Once you make contact, your arm naturally spirals around your opponent’s forearm, swinging it out of the way.

Master Yang explained that the philosophy of ‘joining with your opponent’ before attacking was part of Chinese culture. It’s the yin/yang philosophy: if you want to push, start by pulling; if you want to go horizontally, start in the vertical. This attitude is deeply ingrained in tai chi, which is a ‘soft’ martial art; one of its guiding images is that the energy of a master is like ‘steel wrapped in cotton’.

These ideas seemed applicable to the long-term geopolitical contest between the United States and China. The increasingly close economic ties between the two countries make a war seem implausible. However, let’s just imagine that China sees the US as an opponent. Its current deep engagement with the US economy, through selling its products and buying US debt, could be seen as intertwining itself with its opponent. If China needs to respond later, it will know exactly where to push.

The United States’ geopolitical style, by contrast, is hard. It eschews contact with states it opposes (Iran, North Korea, Cuba). When it does engage, it does so only as a last resort, and then deploys full frontal force, to an overwhelming and disproportionate degree (cf. the Powell Doctrine).

Monday, December 26, 2005

The Not Rule

As we head into the season of reviews and prognostications I’ll be keeping the Not Rule handy.

I originally formulated it in this way:

When a corporate executive makes a statement under pressure, prepending “Not” to it will get you closer to the truth than what is said.

It was designed to apply to statements like these:

I am very confident about the prospects for our company in the coming year.
I am excited about the challenges that lie ahead.
There is no doubt that our product blows the competition away.
This area represents a massive opportunity.
The entire organization is connected by a common vision.
The team is excited and engaged, and morale has never been better.

Of course, it applies more generally to leaders (and the rest of us…), as in statements like:

Peace in our time.
There is no alternative.
I am not a crook.
I did not have sex with that woman.

George Bush II, who prizes loyalty above all else, was the exception to this rule when he expressed his complete confidence in Donald Rumsfeld some months ago. Contrary to political custom, Rumsfeld was not fired shortly afterwards.

The Not Rule also applies to claims made about social trends. Because trends take so long to surface, they’re usually moot by the time debate about them becomes contentious. For example, the hue and cry over the lack of respect being paid to the Christian faith in America today obscures the fact that it wouldn’t be happening if Christianity were not so powerful. Likewise, breathless columnists are coming out in a cold sweat about jobs to be lost to India and China at exactly the time when those actually doing off-shoring have concluded that the process will take longer than expected.

To close, here’s a statement of my own: I am not a contrarian.

Everyday Patents - Web Search

The earliest of Google’s fourteen US patents, #6,678,681, was filed on 9 March 2000 by Sergey Brin for “Information extraction from a database”.

Brin appears as the inventor on three Google patents. His co-founder Larry Page is the inventor on patents 6,285,999 (“Method for node ranking in a linked database”, filed 9 Jan 1998) and 6,799,176 (“Method for scoring documents in a linked database”, filed 6 July 2001), but these patents have been assigned to Stanford.

I wonder how much Google is paying Stanford in licensing fees? It can't be much... Stanford's 2005/2006 Consolidated Budget shows $263 million in Other Income; I assume patent licensing fees are in there. The 2004 Annual Report notes that special program fees and other income were $259 million in FY04; this includes technology licensing as well as service centers, executive education and other programs.

It’s curious that Page’s Stanford patent 6,799,176 was filed about sixteen months after the first Google patent 6,678,681 by Brin.

Most of the patents are for search and query technologies, as one might expect. However, three are for hardware designs: “Cooling baffle and fan mount apparatus”, “Cable management for rack mounted computing system”, and “Drive cooling baffle”. Just in case anyone was in doubt, here’s evidence that running a large data center is a key competency for Google.

Law as Code

Larry Lessig broke through to celebrity with his book Code and Other Laws of Cyberspace. He argues that the writers of software code create frames for behavior that can be as coercive as the law. I’ve started wondering about the reverse: treating laws as if they were software.

Our intuitions are grounded in how our brains use our bodies to interact with the physical world. Software confounds those intuitions because it’s doubly inexhaustible: it’s made up of ideas which can’t be “used up”, and the resulting product is itself perfectly copiable infinitely many times. Both the input and the output of manufacturing software are non-rivalrous, to use the economic jargon.

As we build a knowledge economy, we are surrounding ourselves with abstractions for which our body-based reasoning is ill-prepared. Examples beyond software include quantum mechanics, persistent exponential growth (eg Moore’s Law for silicon chips) and products built on pure probability (eg futures markets, and lotteries in general). Not all of this is novel, though. Laws, lotteries and logic have been around for millennia. However, people at large have not had to worry about their weirdness because they have only been parochial concerns to date. The pervasiveness of software can open our eyes – especially if we’re geeks and not wonks – to some of the curious properties of law.

One can think of the legal code as the operating system for a country. If the laws are the operating system, then contracts are the applications. There are many more contract lawyers than lobbyists, just as there are many more applications than operating systems.

The amount of code in a software program can be measured by counting the number of lines of source code, that is, the number of lines of human-readable instructions. Contemporary operating systems contain tens of millions of lines of code (Wikipedia cites line counts for some common operating systems).
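The count itself is simple; here’s a minimal sketch (my own toy version, not any standard SLOC tool) that totals the raw lines in source files under a directory tree:

```python
import os

def count_lines(root: str, extensions=(".c", ".h")) -> int:
    """Total raw lines in source files under root (a toy SLOC count)."""
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(extensions):
                path = os.path.join(dirpath, name)
                with open(path, encoding="utf-8", errors="replace") as f:
                    total += sum(1 for _ in f)
    return total

# e.g. count_lines("/path/to/linux-source") -- a real counter would also
# exclude blank lines and comments, which is one reason published figures vary.
```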

I was surprised when I totaled up the number of lines in the US Code, the compendium of all the (federal) laws of the United States: about 5 million lines (spreadsheet). That’s about the same size as Windows NT or the Linux 2.6.0 kernel, at 4 million and 6 million lines of source code, respectively.

The “core development team” for the US Code is rather smaller than that for Windows or Linux, which are both said to be in the region of 8,000 people. Congress consists of 100 senators, 435 members of the House, and their legislative staff. If we assume a member-to-staff ratio of 1:3, that’s a team of roughly 2,100 “developers”.

Of course, one can’t forget the lobbyists, many of whom are lawyers who do the actual legislative drafting. Roberta Baskin, Executive Director of The Center for Public Integrity, estimates that the federal lobbying industry employs about 14,000 people to influence the decisions of Congress, the White House, and officials at more than 200 federal agencies. Not all 14,000 are working on the US Code; many are working on agency regulations, which geeks might want to think of as the “middleware” of the legislative machine. (Note that I’ve ignored state law and local regulations in this approximation; it shouldn’t change the answer by more than about a factor of 2.)

In all, the number of people writing the operating system for the United States is approximately the same size as the teams working on PC operating systems.

The analogy offers endless opportunities for harmless fun and mischievous comparisons.

Developers and lawyers are quite similar: both write code, both worry about misplaced punctuation marks that could ruin everything, and both spend a lot of time on “edge cases”. Neither has ever seen a piece of code that they couldn’t do better, and both spend more time maintaining and tweaking legacy code than writing new stuff. However, it may take a little while for the maintenance of the US Code to be off-shored to India…

Legislation is infested with inconsistency; software tools that track links between code modules could help find discrepancies. S. remembers that her family was perplexed by what to do about an old tree in their garden: one regulation insisted that they cut it down, because it was old and rotten, and another insisted that it be protected, because it was just plain old. (They cut it down.) On the other hand, while tools can find buffer overflows in software, one needs the CBO to find budget overflows, since legislation is code designed to run in the future and to have its worst side-effects once its drafters have happily retired to work as lobbyists.

One could see most of the activity in national and state capitals as the frantic “patching” of unintended side-effects in legal code. Tax lawyers seeking loopholes and hackers looking for trapdoors have similar goals – making the code do something it was not designed for. Unfortunately, it takes rather longer to patch the legal code than it does to issue a security update.

The judicial system is the “execution environment” for the code that makes up a country’s law. (In a country as enamored of the death penalty as the United States, that computing term is more accurate than one might wish.) The courts figure out what the legal code actually does in practice. The function of the courts highlights a weakness in my analogy: laws are written in ordinary language with all its delightful vagueness, whereas computer code is written in mathematical symbols dressed up to look like language. In software, ambiguity is a bug; in law, it’s often a feature.

Saturday, December 24, 2005

Everyday Patents: Rubik’s Cube

Ernő Rubik invented a variety of rotating cube toys in the mid-1970s. He was a Hungarian sculptor and professor of architecture with an interest in geometry and 3D forms. According to a Wikipedia article, Rubik obtained Hungarian patent HU170062 for the Magic Cube in 1975, but did not take out international patents.

The US Patent Office shows a series of patents by Rubik filed in 1984, referring to earlier filings in Hungary:

  • 5,184,822 “Three-dimensional puzzle” (1993, Hungary 1983): single cube with holes in side
  • 4,471,959 “Logical toy” (1984, Hungary 1980): Horizontal pushers in two layers
  • 4,410,179 “Shiftable element puzzle” (1983, Hungary 1977): Cylindrical puzzle, with two layers of six petals
  • 4,392,323 “Toy with turnable elements for forming geometric shapes” (1983, Hungary 1980): One-dimensional chain of triangular pieces
  • 4,378,117 “Spatial logical toy” (1983, Hungary 1980): Various 2x2x2 arrangements
  • 4,378,116 “Spatial logical toy” (1983, Hungary 1978): A two-layer puzzle, with 3x3 cells in each layer

Larry Nichols received US patent 3,655,201 in 1972 for a “pattern forming puzzle and method with pieces rotatable in groups”. The filing concentrates on a 2x2x2 design, but the drawings show larger compositions. The thing is held together with magnets. According to Wikipedia, Ideal Toys lost a patent infringement suit based on this patent in 1984.

Terutoshi Ishigi acquired Japanese patent JP558192 for a nearly identical mechanism while Rubik's patent was being processed, but Ishigi is generally credited with an independent reinvention.

Friday, December 16, 2005

Comment on Tim Wu's "How to lose friends and alienate people"

Doing business on the Internet is a complex affair, and a lapsed physicist living in a glass house shouldn’t throw stones at law professors opining on commercial matters. I shouldn’t, but I will.

Tim Wu’s opinion piece in CNET is an admirable attempt at extending the case for network neutrality. He argues that BellSouth’s hope to charge companies that want their sites to load faster than those of a rival isn’t illegal or immoral, but stupid. It’s an important step to finding the win-win-win-win for network operators, consumers, established app/content providers, and new app/content providers.

This is a welcome new line of reasoning from the “open access” camp, but it’s not persuasive yet. While it’s true that companies sometimes do things their customers hate, it’s typically by accident and not on purpose. No business can afford to alienate its customers over the long run; companies will do things that irritate some of their customers some of the time, but usually as part of a conscious trade-off.

Buyers and sellers are always engaged in a tussle: vendors want to sell for as much as possible, and customers want to buy as cheaply as possible. In the end, if they decide to do business, both settle for less than they’d like but more than they’d otherwise get. Both sellers and buyers make trade-offs, and a trade-off that one (class of) customer dislikes isn’t necessarily bad business overall.

Wu argues that BellSouth’s model, that is, trying to add value to its pipes by privileging some traffic flows over others, “neglects the market values of neutrality and consumer choice”. I’m not persuaded that the likes of BellSouth “err by thinking that their customers want their services, as opposed to better access to an open market.” Only policy wonks worry about “access to open markets”. Most consumers just want products at the lowest price for the highest quality. Open markets often provide this, but are a means to an end for consumers, not an end in themselves. If they can get a better product for no additional cost – if, say, Real Networks paid BellSouth a premium to ensure that a Rhapsody media stream gets Platinum Tier treatment even though a customer has only paid for Silver Tier network performance – the consumer will take it.

Wu argues that neutral products and neutral networks are usually more valuable to customers, but neglects to explain how a company should balance this with the fact that such neutral systems are usually less valuable to sellers. As Isenberg and Weinberger said in The Paradox of the Best Network: “The best network is the hardest one to make money running.” Amazon’s home page isn’t a blank Google-esque page with only a search box; it uses the customer’s profile to lead off with recommendations that are not neutral. Any web search result, including on Google, is headlined by paid-for ad links that aren’t “neutral”. In many cases customers even find such bias useful, or at least sufficiently un-intrusive that they don’t go to another supplier.

Wu has an axe to grind: as a customer and as an activist, network neutrality is his top priority. Bias in the network is just bad, even if it were to reduce the profits of network providers to the point that they don’t upgrade their networks. If he can’t win the argument by claims to law or ethics, it’s worth his while to persuade the network operators that neutrality is a better business strategy. That’s a sensible goal. However, there’s still a way to go before we can persuade a telco or cableco executive that trying to make money by adding value to their profitless commodity pipe is a bad business strategy.

Wednesday, December 14, 2005

Everyday Patents - Mouthwash

I’ve started looking out for patents on everyday things since I’m thinking a lot about intellectual goods at the moment. It’s easy to imagine that patents should be for big ideas; in fact, they’re usually for very mundane improvements. Since patents leave a bad taste in some people’s mouths, I’ll start with Listerine.

The label on my bottle of CoolMint Listerine discloses two patents: one for the formulation (5,891,422) and one for the design of the bottle (D316,225). I’ll ignore the design patent – who knew that one could patent the shape of a bottle? – since the chemistry is more interesting: the invention is an effective mouthwash formulation that reduces the amount of ethanol, which has to date been a key active ingredient.

Ethanol kills mouth bugs, but the patent application says that “there have been objections to it on health grounds”. (Since humanity has been getting high on the stuff for millennia via an endless variety of alcoholic beverages, it’s not clear to me what these objections might be – unless The Prohibition Is Back. Perhaps some kids are getting drunk drinking Listerine? Stranger things have happened in the US…) Unfortunately, if you reduce the amount of ethanol, a mouthwash doesn’t work as well. It also doesn’t taste or look as good, because the solubility of other ingredients (like thymol, menthol, and eucalyptol) is reduced.

Warner-Lambert’s chemists found that alcohols having 3 to 6 carbon atoms work just as well as ethanol, if not better. (Ethanol has two carbon atoms.) The example given in the patent disclosure is 1-propanol.

Monday, December 05, 2005

Metaphors underpinning intellectual goods

The question of whether knowledge is property leads to many arguments about intellectual goods such as digital music and software. Some argue that property rights will lead to an efficient and productive market in new ideas, while others contend that such “propertization” will damage the creative community and reduce innovation. Property rights are also debated for wireless spectrum. I believe that one can explore these questions in a less loaded setting by considering the notion of “resource” rather than “property”.

The property fight is not a pretty quarrel, since talking about assets conjures up the heroes and villains of the capitalism vs. socialism debate. It’s in a way an argument about the applicability of old metaphors to new ideas; a metaphor like Knowledge Is Property gives us tools with which to analyze a complex problem, but may also lead us astray if its premises are incorrect. [1]

I stumbled over a less loaded concept while reading Lakoff & Johnson’s book about cognitive science, metaphor and philosophy [2]. They define “Resources” in order to explore the Time Is A Resource metaphor. I think it is instructive to explore the Knowledge Is A Resource metaphor. The Knowledge Is Property metaphor is derived from it, and one can use Knowledge Is A Resource to explore our conceptual models in a less loaded setting than when using Knowledge Is Property.

Lakoff & Johnson give the following schema for the concept of a Resource. The schema tries to characterize what is typically meant by a resource – actually, a non-renewable resource.

The Elements of the Schema:

A Resource
The User of the Resource
A Purpose that requires an amount of the Resource
The Value of the Resource
The Value of the Purpose

This Schema is used in the following conceptual scenario:

The User wants to achieve a Purpose.
The Purpose requires an amount of the Resource.
The User has, or acquired the use of, the Resource.

The User used up an amount of the Resource to achieve the Purpose.

The portion of the Resource used is no longer available to the User.
The Value of the Resource used has been lost to the User.
The Value of the Purpose achieved has been gained by the User.

Given this schema, other concepts are defined relative to it: concepts like Scarcity, Efficiency, Waste, and Savings.
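For the geeks: the schema is compact enough to write down as code. This is my own playful sketch, not anything from Lakoff & Johnson; the class names are mine. It makes visible where knowledge will break the schema: a knowledge “resource” survives every use, so the scarcity bookkeeping at the heart of the schema never engages.

```python
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    amount: float  # how much of it the User holds

    def use(self, needed: float) -> None:
        """The User uses up an amount of the Resource to achieve the Purpose."""
        if needed > self.amount:
            raise ValueError(f"scarcity: not enough {self.name}")
        self.amount -= needed  # the portion used is no longer available

class Knowledge(Resource):
    def use(self, needed: float) -> None:
        """Achieving the Purpose consumes nothing: knowledge is non-rival."""
        pass  # self.amount is unchanged, however many Users draw on it

land = Resource("land", amount=100.0)
know_how = Knowledge("crop rotation", amount=1.0)

land.use(40.0)     # land.amount drops to 60.0
know_how.use(1.0)  # know_how.amount is still 1.0
```

Scarcity, Efficiency, Waste and Savings all hang off the decrement in `use`; for Knowledge that decrement never happens.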

Knowledge Is A Resource is a commonly used metaphor. It shows up in sentences like:

Knowledge about how best to respond to that problem is scarce.
I need to know more before I decide.
She used her knowledge effectively to solve the problem.
He squandered his education.
These business processes extract and save knowledge, and make it available to other employees.
Without a doubt the pursuit of knowledge is worthwhile.

These examples indicate that we commonly treat Knowledge as a (non-renewable) resource.

However, knowledge doesn’t fit the Resource schema very well. It is not non-renewable in the same sense that physical resources are; it’s reasonable to assume that there is no limit to human inventiveness. Knowledge isn’t used up to achieve a purpose; knowledge gained by one User isn’t lost by another. The schema breaks down because the action contemplated (“The User used up an amount of the Resource to achieve the Purpose”) doesn’t match the properties of a knowledge resource.

And yet, we use it. I suspect that we generalize from our day-to-day use of knowledge to achieve a purpose, which is a key property of a Resource, to the notion that knowledge also satisfies the other conditions of Resources as we know them. Instinctive use of Knowledge Is A Resource metaphors may thus lead us astray, particularly to the extent that the Resource schema underpins the Property schema.

A similar mechanism is at work when wireless spectrum is treated as property. As Hatfield and Weiser have argued [3], the application of the metaphor Spectrum Is Property is more complex than often portrayed.

They make essentially two arguments: boundaries can’t be drawn objectively, and market manipulation is likely. First, spectrum doesn’t allow for clear boundaries in the way that real property does since radio wave propagation depends on circumstances (making physical boundaries for spectrum allocations problematic), and signals in adjacent frequency bands interfere with each other (confusing efforts to create frequency boundaries). Second, they argue that “if property rights are granted in a manner that would allow injunctions for trespass, it is quite possible that parties could bring actions solely to threaten an injunction and obtain a license along the lines of the much-criticized patent trolls.”

In this case, the Spectrum Is A Resource metaphor is questionable because the very definition of the Resource is in doubt. If Hatfield and Weiser are correct, the Resource definition is arbitrary.

The next step in this work (in progress) is to collect a corpus of metaphors used to describe knowledge goods by the various participants in the debate. I would not be surprised to find that some of the conflicts are based on irreconcilable metaphors, rather than economics. These metaphors will help map out the conceptual systems in play, which may then lead to ways to resolve – or at least make visible – the essential conflicts.


[1] In this post I’m going to treat knowledge, information and intellectual goods as equivalent. They’re clearly not, but I think the analysis below is sufficiently general to work when “information” or “intellectual good” is substituted for “knowledge”.

[2] George Lakoff and Mark Johnson, Philosophy in the Flesh: The Embodied Mind and Its Challenge to Western Thought (1999)

[3] Dale Hatfield and Phil Weiser, Property Rights in Spectrum: Taking the Next Step, SSRN, September 30, 2005

Tuesday, November 29, 2005

The reformation of capitalism

Jim Lewis at CSIS got me thinking about passing the buck, sorry, the torch: to whom could the US hand off its imperial responsibilities when the burden becomes too bothersome? There aren’t any obvious candidates. We do seem to be living in a multi-polar world of networked power with no inside and no outside, though the United States is the pre-eminent power.

It reminds me a lot of Europe in the late Middle Ages. There were a handful of contending powers, with second-tier sovereigns in loose orbits around them: Spain, England, the Papal States, and the Holy Roman Empire. In spite of the continual wars, Europe was united by the Romans’ legacy of a communications infrastructure: Latin and the roads, which underpinned a world-wide trading network (world=Europe at that time). The analogies to English, the Internet, and “globalization” are obvious. [1]

The dominant ideology – Catholicism – fractured at the end of the Middle Ages. This suggests that the dominant and largely uniform conception of capitalism of the last few decades could be short-lived. Ideologies evolve in different ways in different cultures, and theories of capitalism are as likely to diverge as to converge. Capitalism’s dominance also means that it will become ground over which disagreements rooted in other areas will be played out, just as arguments over medieval theology were a front for political struggles.

We should expect a radical rethinking of capitalism, of the scale of Adam Smith or Marx, to emerge in the next 10-20 years. This reformation could be adopted by a significant number of power players as part of their geopolitical struggles with the United States. I expect that the Luther or Calvin of this reformation will be Asian, and probably a Chinese who’s in a graduate school class in Shanghai right now.

We should expect the reformation of capitalism [2] to be messy and violent. People will kill each other over physical necessities, but it takes esoteric questions like justification through faith vs. works to bring out their viciousness. As the recent anniversary of the end of the Bosnian war reminds us, neighbors make the most brutal enemies. The nastiest conflicts will not be across oceans (US vs. China, US vs. Europe) but across the back fence: Japan/Korea/China; Germany/Eastern Europe; US/Latin America.


[1] Aside on legitimacy: The contests of the Middle Ages were played out between kingdoms, whereas today’s players are nation states. Kingdoms were geographically dispersed (a duchy here, a land claim by marriage there), whereas today’s countries are compact. However, both built their legitimacy on a concept whose existence wasn’t predicated on daily politics: heredity in the Middle Ages, and territory today. I’m reminded of Antonio Damasio’s insight in The Feeling of What Happens: Body and Emotion in the Making of Consciousness that consciousness is built on a steady stream of “I’m still here” sensations from an organism’s body. The body is the invariant substrate on which consciousness can rely both to deal with an ever-changing environment, and to anchor the perspective from which the environment can be known. Heredity and territory are two viable substrates for a body politic: they continue to exist without having to be maintained, provide a reference for inputs, and anchor a state’s perspective. Any replacement for territory in a new kind of sovereignty will have to meet the same requirements, particularly having a prior basis outside of day-to-day politics.

[2] I could be wrong about the bone of contention being capitalism; it could be “democracy”. Either way, it reminds us that having shared values at a deep level is no guarantee of amity; quite the opposite, in fact.

Wednesday, November 23, 2005

Damasio's theory of consciousness applied to meditation

meditating monk
Meditators aren’t asleep, but they aren’t awake in a conventional sense either. Neuroscience should be able to explain their brain state, and I believe that Antonio Damasio’s theory of consciousness [1] could help. Specifically, I suspect that meditation shuts down extended consciousness, as he defines it, while leaving “core consciousness” intact.

For Damasio, consciousness is the feeling associated with the relationship between a perceived object and the perceiving organism. Damasio argues that consciousness consists of two levels: core consciousness and extended consciousness.

Core consciousness provides the organism with a sense of self about one moment (now) and one place (here). It arises from moment to moment, and is constructed out of the pulses of awareness generated by changes in objects and bodily states. It’s a very simple biological phenomenon, and is not exclusively human; it does not require language.

Extended consciousness provides the organism with an elaborate sense of self that's based on an extensive memory and a rich sense of personal history and anticipated future. It's wrapped up with an identity, and is intertwined with language. Extended consciousness is built on core consciousness. Patients with impairments that shut down extended consciousness continue to show core consciousness, but once core consciousness is lost, extended consciousness also disappears.

I suspect some meditation techniques [2] are de-activating extended consciousness, leaving only core consciousness functioning. Many of the topics in meditation practice match Damasio’s description of core consciousness. Practitioners are advised not to verbalize their experience, but simply to be aware of sensations from moment to moment (cf. Damasio’s insistence that core consciousness is pre-linguistic). They are said to become aware that everything is constantly changing (cf. Damasio’s pulses of core consciousness). The sense of a persistent self fades away. However, there is still a sense of consciousness; meditation is not sleep. Hence, in Damasio’s terms, there is still second-order awareness of the relationship between the organism and the sensations it is experiencing.

This hypothesis immediately suggests some questions:
  • Damasio is quite specific about which parts of the brain are responsible for different states of consciousness. One should be able to use fMRI of these regions in experienced meditators to test the hypothesis that meditation shuts down extended consciousness while leaving core consciousness functioning.
  • There is a growing body of evidence of the beneficial health effects of meditation. Can one connect differential activation of different kinds of consciousness in Damasio’s model to specific benefits?
  • Do higher states of meditation lead to partial shut-down of core consciousness, in the same way that “introductory” techniques like anapana shut down extended consciousness?


[1] Antonio Damasio, The Feeling of What Happens: Body and Emotion in the Making of Consciousness (2000)

[2] There are many kinds of meditation. I have a rudimentary knowledge of the approach known as insight meditation, aka vipassana; see Wikipedia for more. The most obvious candidate for a practice that suppresses extended consciousness is “anapana”, a kind of tranquility meditation that aims to concentrate the mind.

Monday, November 21, 2005

Half of Americans believe God created human beings in their present form within the last 10,000 years

The resurgent debate over evolution and intelligent design has come as a shock to American intellectuals. Their surprise is a measure of how out of touch they are with mainstream Americans. A wonderful compendium of poll results of all kinds reports on a CBS News Poll in October which found that 48% of adults sampled agreed with the statement that "God created human beings in their present form within the last ten thousand years". (Note that the sample size was quite small, and the margin of error is +- 4%.)

Here's how the answers were distributed:
  • 15% -- "Human beings evolved from less advanced life forms over millions of years, and God did not directly guide this process"
  • 29% -- "Human beings evolved from less advanced life forms over millions of years, but God guided this process"
  • 48% -- "God created human beings in their present form within the last ten thousand years"
  • 8% -- Unsure
In a variation of the question, 15% stated that God did not guide the evolution process, 81% believed that God either guided the process or created humans in their present form, and 4% were unsure.
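The poll's sample size isn't stated above, but it can be backed out from the reported margin of error. A minimal sketch, assuming the standard 95% normal approximation for a sample proportion (the helper names are mine, not from any poll methodology document):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a sample proportion (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

def implied_sample_size(p: float, moe: float, z: float = 1.96) -> int:
    """Smallest sample size consistent with a reported margin of error."""
    return math.ceil(z**2 * p * (1 - p) / moe**2)

# A +-4% margin at p = 0.48 implies roughly 600 respondents
n = implied_sample_size(0.48, 0.04)
```

Around 600 respondents is indeed small for slicing results into four answer categories, which supports the caveat above.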

Friday, November 18, 2005

Better Together: private property and commons

Debates about spectrum allocation or intellectual property often appear to demand a choice between private and public ownership. There are zealots on both sides: see e.g. IPcentral and Public Knowledge. Each side argues that its preferred ownership method yields the highest social utility.

I believe that the truth lies in between. It’s more than simply a balance between property and commons; a mixture of the two yields more value than each of them individually. It’s not a question of balance and trade-offs; it’s a matter of synergy and mutual benefit.

Tren Griffin got me thinking about this in the context of spectrum allocation by citing the Central Park example. Central Park in New York City is an incredibly valuable piece of real estate; the nominal land value is astronomical, and the social utility is unquestioned. Its monetary value is due to the valuations of the surrounding apartments – but those apartments are valuable in part because they front on the Park. The combination of park and property is worth more than either all-park, or all-property. I suspect one can make a reasonably robust economic case that a mix of licensed and unlicensed spectrum allocations will show the same kind of “mix maximization”.

A similar approach can be applied in other policy areas. Intellectual goods immediately come to mind. Intellectual property can encourage innovation since inventors can be assured of a return, but their creativity is built on a large public domain. Without the public domain there would be less innovation, and what did occur would be more expensive. Conversely, without investment in (temporarily) owned intellectual goods, the public domain would stagnate.

The diagram above represents the argument I’m making. I believe the “synergy” model has a higher maximum than either of the purist’s models, though I don’t know what the shape of the curve is. The economics challenge is to develop models that can handle private and public property on an apples-to-apples basis, and that can represent the mutual value add.

I’ve started thinking about brain-dead models of these phenomena to explore how one might represent the value curves and interactions. Different interaction models will lead to different curve shapes. The goal, of course, is to see if modeling can inform the optimal mix percentage. If there’s a sharp maximum, that would be easy to decide; on the other hand, a relatively flat value curve (i.e. a situation where utility doesn’t depend strongly on the mix of ownership models) would lead to endless argument.
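One deliberately brain-dead model of the kind described above: value each regime linearly, then add a cross term for the synergy. Everything here – the parameter names, the quadratic form of the synergy term, the numbers – is an illustrative assumption, not an economic result:

```python
import numpy as np

def total_value(f, v_private=1.0, v_commons=1.0, synergy=2.0):
    """Toy utility of an f : (1-f) mix of private property and commons.

    The two linear terms are what a purist on either side would count;
    the cross term rewards coexistence (the park raises apartment
    values, the public domain feeds private innovation).
    """
    return v_private * f + v_commons * (1 - f) + synergy * f * (1 - f)

f = np.linspace(0, 1, 101)        # fraction held as private property
v = total_value(f)
best_mix = f[np.argmax(v)]        # interior maximum when synergy is strong enough
```

With equal baseline values and a positive synergy term, the maximum sits at a 50/50 mix and beats both pure regimes; shrinking `synergy` flattens the curve toward the “endless argument” case.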

Saturday, November 05, 2005

Adam Smith on journalists vs. blogs and wikis

"Their delightful art is practised by so many rich people for amusement, that little advantage is to be made by those who practise it for profit; because the persons who should naturally be their best customers supply themselves with all their most precious productions."

Smith is discussing the different levels of rent that one can extract from different kinds of land. He argues here that even though producing vegetables requires quite a lot of skill, other circumstances conspire to reduce the profit from it substantially. The same seems to me to apply to many emerging productive activities where "hobbyists" compete with professionals, e.g. wikinews.

Here's the quote in context:

"In a hop garden, a fruit garden, a kitchen garden, both the rent of the landlord, and the profit of the farmer, are generally greater than in a corn or grass field. But to bring the ground into this condition requires more expense. Hence a greater rent becomes due to the landlord. It requires, too, a more attentive and skilful management. Hence a greater profit becomes due to the farmer. The crop too, at least in the hop and fruit garden, is more precarious. Its price, therefore, besides compensating all occasional losses, must afford something like the profit of insurance. The circumstances of gardeners, generally mean, and always moderate, may satisfy us that their great ingenuity is not commonly over-recompensed. Their delightful art is practised by so many rich people for amusement, that little advantage is to be made by those who practise it for profit; because the persons who should naturally be their best customers supply themselves with all their most precious productions."

-- Adam Smith, Wealth of Nations (1776), Bk I, Ch. XI, Pt. I

Thursday, November 03, 2005

Neuroscience and public policy

fMRI image of activation in the primary visual cortex
It’s hard to get one’s mind around abstractions. It is particularly tricky with those ideas that don’t have good equivalents in the physical world. Such concepts, like how to treat digital knowledge, are now at the heart of our culture. Our inability to think about them intuitively means that politicians, citizens and business people have the wrong instincts when trying to solve the problems associated with them.

Lakoff and others argue that we can only understand things to the extent that we can model them on what we can do with our bodies. Here’s an excerpt from New Scientist:
“George Lakoff, a cognitive linguist at the University of California, Berkeley, believes we can only understand infinity based on what we can do with our bodies. More specifically, he says we deal with the headache of infinity by drawing on our familiarity with repetitive and iterative motions - walking, jumping and breathing, for example.”

“We use similarly physical metaphors when discussing abstract concepts such as economic policy: phrases such as "France fell into a recession" or "India is stumbling in its efforts to liberalise", for example. So if our minds grasp abstract concepts of economics in terms of what our bodies can experience, are our bodies also the way we can understand the infinite?”

“Lakoff, Núñez and Narayanan speculated that the structures in the brain that control body movements might also be used to handle all abstract concepts.”

I would love to see experiments that present subjects with mental challenges that have lesser or greater physical analogs, to see how the brain deals with them. I suspect that some business, policy or life questions are intrinsically harder to think about than others, and experiment should help us predict where to be most wary of our innate cognitive inadequacies. The field of behavioural economics has already described a great deal about human failings in making business judgments.

I believe intellectual property is one such intrinsically difficult topic, because sharing it doesn’t diminish ownership; in economic terms, it’s ‘non-rival’. Digital media have brought us to the point where intellectual property is free of physical wrappers, and its non-rival nature is unavoidable. I suspect that non-rivalrousness is deeply unintuitive because the physical world is intrinsically rival; our bodily metaphors, and thus our ability to think according to Lakoff et al., fail us.

Many of the anguished expectations that activists have about how digital media should behave are associated with the attributes of physical objects: “I’ve bought that album on a vinyl record, the music came off this record, therefore I own the music in the same way I own the record; I bought this album from iTunes, it’s music just like the earlier stuff, I should own it the same way I owned the vinyl record.”

More generally, I suspect our brains have difficulty with the notions of contract, particularly when the objects being traded aren’t physical. A license (“a bundle of rights and obligations between people with regard to things”) is much harder to think about, for me at least, than a trade of physical goods. Problems arise when metaphors are extended beyond their physical basis. In a recent paper, Hatfield and Weiser argue that applying a property rights model to wireless spectrum is much harder than commonly supposed.

I don’t believe the problem lies with abstractions. There are many intangible things that have normal rival properties, like shares in a company or the value of a brand. We are evidently able to think about abstractions like number and money. However, goods that are inexhaustible and infinitely copiable (like software) present problems because they don’t have physical correlates.

There are many things I find hard to grok, but that may just be me; we need experiments to see if any of these difficulties are intrinsic to human nature. Some concepts whose difficulty may be quantifiable by observation:

  • Non-linearity, that is, our inability to grasp the properties of non-linear growth (e.g. the fabled reward requested by the inventor of chess, and Kurzweil passim)
  • Physical interpretations of quantum mechanics, eg the “action at a distance” of the EPR paradox
  • The gains of trade, which arise from the knowledge that one can trade with another
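The chessboard fable in the first bullet makes a nice concrete test of our intuitions about non-linearity: one grain on the first square, doubling on each of the 63 that follow. The grain-weight figure below is a rough assumption for illustration:

```python
# The fabled chessboard reward: one grain on the first square,
# doubling on each of the remaining 63 squares.
total_grains = sum(2**square for square in range(64))   # == 2**64 - 1

# Assuming ~25 mg per grain, the total is on the order of
# hundreds of billions of tonnes -- centuries of world harvests.
tonnes = total_grains * 25e-6 / 1e3
```

Our linear intuition says "a few sacks"; the arithmetic says a mountain of grain larger than any kingdom could deliver, which is precisely the cognitive gap the bullet points at.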

Tuesday, November 01, 2005

Guest post: Changing standards of morality

My post about scientific ideas which might be obsolete in 50 years led S. to speculate on which actions or activities that we now consider moral (i.e., positively good) could be considered immoral in the future.

Here are her comments:

Many of the actions of 19th-century America fell into this category. The first that comes to mind is the cultivation of the wilderness. Slavery was a very minor issue in the Texian revolution against Mexico. Apparently many Americans, even in the non-slave states, considered that slavery was necessary for productive use of Texas, and that this admitted evil was less than the evil of leaving the land fallow. A recent echo of this world-view was the pro-Zionist argument from my elementary-school days: that the Israelis made better use of the land, "making the desert bloom", so they should have it. (Aside: I was not convinced by this. I doubt it was my own independent thought, but rather that in New Zealand, which was underdeveloped and liked it that way, the argument didn't ring true to my teachers.)

A second is the carting off of the American Indians to boarding schools, punishing them for speaking their own language, etc.... (Aside: I imagine we [English] would have punished Gaelic and Welsh speakers in British schools at the same time if we had got around to it.) I tend to believe that the main impulse behind this action was the desire to give the children a better life in a better civilization. The judgment that one civilization is better than another has gone from universal and unapologetic to partial and often apologetic. I suppose that I hope that eventually it will be commonplace to consider all cultures different but equal, because I hope that in the future the only ghastly places will be in history. If those who attempted to civilize ghastly, or even lesser, places are then condemned, it is probably a small (although sad) price to pay.

A third, older one, is the mortification of the flesh. Often when I luxuriate in my warm and exactly soft-enough bed, I wonder to what extent that sensation would have been counted as - if not exactly evil - at least an unworthy fleshly distraction (as in "the world, the flesh, and the devil"). People now wreck their bodies to become thin, to become athletes, or through overindulgence: you would be sent straight to a psychiatrist if you wanted to wreck your body for the glory of God. (The punishment of heretics to save their souls is obviously related).

Looking at the three activities, I notice that they are all rather hard work, and related to achieving a state of perfection that we no longer believe in. I can therefore hope that working too hard and eating too healthily will be equally frowned on in the future, and that for true and timeless morality I should relax my standards in those areas.

Sunday, October 30, 2005

Catching a tiger by the tail: Patents could destroy the software business

The software business is working itself into a situation where its economics are built on software patents. This will be catastrophic for investors unless such patents are constrained to be of high quality, limited scope and – especially – short duration. If this doesn't happen, licensing costs will dominate the business, and pure software companies will be at the mercy of patent trolls.

The drive to software patents is based on the argument that a knowledge-based industry needs a way to trade knowledge, and thus needs intellectual property. David Kaefer of Microsoft says it well: “Software is built on the shoulders of giants -- no one can build the whole thing. Patents are a property right that allows the innovation to be exchanged”. [1]

This logic is built on the fallacy that the commodities to be traded behave just like the physical goods we’re used to buying and selling. Our models of trade are based on exchanging goods and human services that are exhaustible and rivalrous, because they are essentially tangible. If you give me $20 for the service of cleaning your gutters, you no longer own the money; and I can only clean one client’s gutters at a time. Innovations are knowledge goods. Your use of my idea doesn’t limit another person’s ability to use it, and that idea is infinitely duplicatable. George Bernard Shaw summed it up well:

If you have an apple and I have an apple and we exchange these apples then you and I will still each have one apple. But if you have an idea and I have an idea and we exchange these ideas, then each of us will have two ideas.

One may object that the patent system has worked well for centuries; what’s suddenly the problem? Patents have always been on innovations, and thus we have a system that works for non-rivalrous intangibles. What’s changed is that software goods are not wrapped in matter, and it’s matter which is at the root of the metaphor of trade and exchange on which our economy is based. Software is pure ideas, and can be exchanged without being wrapped in stuff. Further, there are far fewer limits on the number of ideas that can be added together than on the number of components that can be added together to make a physical product [2]. I would predict that the number of patents per product increases as the knowledge content of the product increases; there are many more patents entailed in a $100 copy of Windows than in a $100 food processor.

Software is cumulative, as the Y2K experience has shown. Most software products beyond their first release are not only built on relatively ancient algorithms, but in fact are accreted around the ancient code that implements those algorithms. Advancing the product is an additive process; little is taken away once it is written. Contrast that with physical products: a new and improved kind of soap can be patented, but one doesn’t have to license all the preceding patents in order to produce the new one. [3]

One of the assumptions implicit in the industry’s world view, and explicit in David Kaefer’s contention, is that companies will cross-license with each other to obtain the knowledge inputs that they require without paying for them. This has clearly worked to date; in most industries the participants create license pools or build a web of cross-licenses that enable them all to operate. In the brave new world of knowledge goods, though, everybody is a software company. It will be impossible to draw boundaries around the industry players who’ll need to cross-license with each other, since everybody will be producing software innovations, though most will not sell software. Companies that focus on selling software will be worse off than anyone else – they will have to cross-license with everybody else.

Much has been written about “patent trolls”, those companies who live to invent, and who license their patents to productizers. There are fears that the trolls will extract fearsome tolls from those who want to build products. Their disproportionate power is a consequence of the disproportionate weight that has been given to the inventive step in innovation.

Many in the software industry argue that patents are necessary to guarantee investment in innovation. This only follows if patents incent the most critical step in the process. Let’s unpack the term “innovation”. It consists of the sequential steps of invention, synthesis, execution and distribution, that is, (1) coming up with an idea; (2) composing a number of ideas into a novel product concept; (3) building the product; and (4) putting the product successfully into the market. Patents reward Step 1; however, most of the value is generated further down the chain, in synthesis, execution and distribution. Some might describe patents as the keystone that holds together the arch of innovation. I think that exaggerates the importance of invention. Patents privilege a necessary but by no means sufficient step in the process of bringing new products to market.

If hundreds or thousands of ideas have to be licensed in order to ship a new software release, pure software companies will find that their input costs balloon, especially if the licensors are in industries unrelated or hostile to them, and if inventors are able to charge prices out of proportion to the importance of inventions in the innovation process.

Still: to the extent that patents can be used to anchor innovation transactions, they are useful. However, they are scaffolding rather than a keystone. They can be removed once the structure is complete. Hence, short patent terms are essential. The term of a patent should bear some relation to the rate of change in an industry; the software industry advances rapidly. Short terms are particularly important in software, where the development of a product is cumulative. By all means allow innovators to protect the new icing which makes their cake special; but allow the lower layers to pass into the public domain so that others can also innovate in making icing.


[1] The arms race, Economist, Survey of patents and technology, 22 Oct 2005

[2] There are surely limits to the size of a given piece of software, as innumerable software projects that have missed their deadlines have demonstrated; Microsoft’s difficulties in shipping the Longhorn release of Windows are one in a long line of examples.

[3] For example, Lever 2000 bar soap is covered by patents 4,695,395 and 4,663,070.

Friday, October 28, 2005

Science today, gone tomorrow

Christopher Ireland asked me a wonderful question a few days ago: “Of all the facts and principles that science currently believes to be true, which do you think are most likely to be disproved in the next 50-100 years?”

There is more up for grabs in the sciences than some people might think. Christopher explains why this matters:
“I'm interested because I believe social behaviors are strongly influenced by our collective scientific beliefs. There's a lag time (a long one) while the science filters down to the population, but once it becomes part of people's general sense of reality, it changes their behavior in subtle, but pervasive ways.”

Here are my guesses:

Brain function. There's a lot of work going on here, and it's amazing how little we know. fMRI data is just beginning to be integrated with static imaging, and the scale of analysis is large - patches of brain tissue centimeters across. As the resolution improves, a lot of old ideas will have to be thrown out, and the standard "functional areas" analysis of where and how information processing happens may be revised.

Cosmology. The standard model of cosmology is creaking, but nobody really knows what to replace it with. "Dark matter" is a symptom; it's essentially a fudge factor in the model invented to make the universe expand at observed rates. I suspect we'll also see some change in more mundane fields like stellar and galactic models; they're constantly being stressed by new data. The Big Bang model could be discredited in a lot less than fifty years. Who knows, some people are even muttering that Newtonian mechanics is incorrect.

Climate chemistry. There's so much attention here, and there are so many layers of analysis involved (that is, from molecular chemistry to bulk transport at the order of kilometers), that I expect the "facts" of greenhouse mechanisms to be restated. It wouldn't surprise me at all if other chemicals beyond CO2 and methane turn out to be critical in global warming. I'm not saying that global warming will be found to be incorrect, just that the mechanisms we now assume to be true could well be wrong.

Geology. Plate tectonics has stood the test of time but increasingly fine-grained new data could undermine the heuristics that are used to explain earthquakes. We know so little about the dynamics of the mantle, let alone the core, that we might have a very different view of crust activity in a hundred years.

Materials science. This is another multi-scale field, like climate. There is a lot that's not understood between the Angstrom scale of atoms, the nanoscale of new materials, and bulk behavior. It's not been a fashionable field, but it's quite possible that some basic rules of thumb, eg in tribology, will come to be rewritten.

Biological classification. We know almost nothing about bacteria, relative to their importance. For example, the human gut houses 10 to 100 trillion microbes from 500 to 1000 species - more than 10 times the number of cells that make up the human body. The current three domain classification of life (eukaryotes, bacteria and archaea) could well turn out to be wrong.

S., who’s a lapsed physicist just like me, adds these thoughts:

Nutrition - or, more generally, "how to be a fit person". The constant discovery of new trace substances (aspirin, omega-3s, etc.) that you need to be truly healthy suggests the kind of explosion of epicycles that precedes a paradigm shift. I anticipate the discovery that there are multiple different models of a healthy lifestyle, and that "eat fruit and vegetables and lean meat, drink exactly 4oz of red wine a day, and exercise 90 minutes a day" is only one of these. Your model may be a matter of choice, or may be ultimately constrained by ? gut bacteria, level of social interaction, mitochondrial DNA, level of something in the womb, aspect of Saturn at your time of conception...

Quantum mechanics - your basic Schrödinger equation. This is a bit of a cheat on my part because nobody understands it. However, there's a swirl of ideas around the arrow of time, the classical limit of quantum mechanics, and Bell's inequality crying out for a major advance. Were I cleverer, this is what I would be working on.

Tuesday, October 25, 2005

Avoiding Armageddon – Why content companies will turn against Intellectual Property

While the two sides in the intellectual property rights debate often argue at cross purposes, or fulminate about secondary issues [1], they are divided on a fundamental issue: whether scarcity of knowledge goods is desirable or not. The “oligarchs” [2] believe that without scarcity they cannot make money, and that without money there is no incentive to create new knowledge. The “anarchists” believe that culture can only flourish if knowledge is abundant and freely available; innovation doesn’t need incentives, just the oxygen of other ideas.

The oligarchs are winning against the anarchists, and they’re set on the path of “propertization” of all useful knowledge. Rather than being good for their businesses, I believe it will prove catastrophic.

Both sides believe that the growth of knowledge is essential to human well-being. However, the oligarchs rank economics ahead of culture, and the anarchists do the reverse. Their fortified debating positions remind me of nothing so much as Industrialists vs. Environmentalists. The argument about knowledge can be turned on its head, just as Hawken/Lovins [3] and McDonough/Braungart [4] did for the environment by arguing that sustainability was good business.

The oligarchs (or King Content, if you like) seem to believe that information is property which needs to be locked down with DRM in order to have a sustainable economy. They are willing to pay the price of buying all their knowledge inputs in order to achieve this [5]. I don’t believe this makes sense for knowledge goods; unlike physical property, knowledge doesn’t (always) degrade over time, and its price does not decline. Since knowledge is so hard to measure, and since the physical attributes of goods have to date been a viable proxy for their knowledge content, economists have been able to ignore the overwhelming scale of knowledge content in our economy. If it is all turned into traditional property, businesses will find themselves hamstrung beyond imagination.

A substantial knowledge commons may be useful to the anarchists/copyfighters; however, it will be essential to the oligarchs [6]. Once this is realized, content companies will be bigger champions of Fair Use than copyfighters. I'm not arguing that they will swear off the concept of intellectual property, but that they will not follow the logic of "all property is the same" to its ultimate point. (Yes, I know, the title of this post was misleading - but it got you to read this far, didn't it? Sorry.)

Conversely, content anarchists will find themselves demanding protection of their cultural production, and working hard to prevent its incorporation into commercial products. It’s a replay of the “Keep My Software” philosophy that led to the GPL, and one can see it at work in the Share Alike condition in the Creative Commons license. Who knows, some may even learn to Stop Worrying and Love DRM.

After all the dust has settled, we’ll find the players at the opposite sides of the argument from where they began – or, perhaps more hopefully, peacefully coexisting in the middle ground.


[1] Side issues include content companies insisting on total control over all content and all decoding tools at all times, and activists insisting (in an amusingly conservative way) that new content should play anywhere on any of their devices in any way they like, just like the old stuff did. I believe this is secondary because it is an argument over product features, which will evolve as the market matures.

[2] Siva Vaidhyanathan introduced the oligarch/anarchist antithesis in his book The Anarchist in the Library: How the Clash Between Freedom and Control is Hacking the Real World and Crashing the System (2004). His definitions: “Anarchy is a governing system that eschews authority. Oligarchy governs from, through and for authorities.” Most of the players in the debate are neither anarchists nor oligarchs in the strict sense, but they do shade towards the two ends of the spectrum. I believe the distinction is a useful starting point, but there are never really only two sides in an argument. Vaidhyanathan argues, rightly I believe, that “[the] question for us in the twenty-first century should not be choosing anarchy or oligarchy but constructing and maintaining systems that discourage both”.

[3] Natural Capitalism: Creating the Next Industrial Revolution (2000) by Paul Hawken, Amory Lovins, L. Hunter Lovins

[4] Cradle to Cradle: Remaking the Way We Make Things (2002) by William McDonough, Michael Braungart

[5] Even a staunch copyfighter like Yochai Benkler has argued that increased property rights will advantage content owners over non-market actors; see his paper The Commons as A Neglected Factor of Information Policy at the 26th Annual Telecommunications Policy Research Conference, Oct. 3-5, 1998. I’m claiming (but have not demonstrated) that his conclusion is based on an under-estimate of the cost of purchasing information inputs. A related threat to content companies is the “patent blackmailer” – small companies that exist merely to develop and patent inventions which they then sell to larger players at a suitably (in)opportune moment.

[6] I will argue in another post that a mix of public and private property increases the value of both.

Friday, October 21, 2005

Peak Oil meets Global Broadband

The Peak Oil theory holds that global oil production will peak soon, and that as production declines, energy prices will increase rapidly. This will lead to major social disruption, including radically more expensive transportation. Let’s stipulate for the purposes of this scenario exercise that the Peak Oil proponents are right.

Activities that require moving physical objects around will be hurt. Shipping a product directly to you from China in a retail FedEx package will be much more expensive than buying a locally produced good. Suburbia, which is premised on cheap gasoline, will crumble, and flying across the country for Thanksgiving will be a thing of the past.

On the other hand, moving information will be essentially free thanks to global broadband networks. The stuff of knowledge work will move easily: phone calls to tech support, medical x-rays, software that needs to be rewritten, contracts to be drawn up and reviewed, and so on.

Our sense of space will be warped: relatively speaking, distances crossed by bits will shrink, and distances crossed by atoms will grow.

This scenario exercise is more than just a good topic for a term paper. It highlights the different dynamics of the two scarcities that lie at the heart of a knowledge economy: energy and brain power.

One implication is “Move, then customize”. Materials will be moved in bulk, i.e. cheaply, and tailored to customers as close to them as possible. Micro-manufacturing will become common, matched to local power generation from non-fossil energy sources. People will keep devices a long time, but upgrade the software often.

There will be marked physical differences between communities, as they produce and consume tangible goods locally. You’ll be able to recognize where someone’s from by how they dress. On the other hand, entertainment and convictions will move more freely where they’re not constrained by local conditions. Expect more culture wars about abstract notions like intellectual property rights and freedom of expression.

Knowledge processing can be easily off-shored, but not the production and manipulation of stuff. Since humans need physical proximity for creativity, places where many brains can be concentrated will have an advantage: Beijing, Bangalore, and the Bay Area, for example.

Second tier cities will suffer, since their best-paying jobs will be knowledge work that will be undercut by off-shoring. For example, Washington State has forecast the occupations with the fastest annual growth rate in the period 2002-2012. Here they are in order of decreasing income (estimated average wages indicated at some inflection points):
  • Lawyers ($100k)
  • Computer Software Engineers, Systems Software
  • Computer Software Engineers, Applications
  • Computer Programmers
  • Registered Nurses ($60k)
  • Computer Support Specialists
  • Construction Laborers ($35k)
  • Dental Assistants
  • Hairdressers, Hairstylists, and Cosmetologists
  • Security Guards
  • Landscaping and Groundskeeping Workers ($25k)
  • Receptionists and Information Clerks
  • Janitors and Cleaners, Except Maids and Housekeeping Cleaners
  • Maids and Housekeeping Cleaners
  • Laborers and Freight, Stock, and Material Movers, Hand ($24k)
Note that the top four are knowledge jobs which could relatively easily be provided at a distance. The jobs that aren’t subject to out-of-region competition are the low-income ones at the bottom of the list.
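The split the list illustrates can be sketched in a few lines. The offshorable/local labels below are my own rough judgment calls for illustration, not part of Washington State's forecast:

```python
# Partition a few of the forecast occupations by whether their output is
# information (deliverable over a wire) or physical, in-person work.
# The True/False labels are assumptions, for illustration only.
OFFSHORABLE = {
    "Lawyers": True,
    "Computer Software Engineers, Applications": True,
    "Computer Programmers": True,
    "Registered Nurses": False,        # hands-on care
    "Construction Laborers": False,    # moves atoms, not bits
    "Security Guards": False,
    "Janitors and Cleaners": False,
}

remote = sorted(job for job, r in OFFSHORABLE.items() if r)
local = sorted(job for job, r in OFFSHORABLE.items() if not r)

print("Exposed to out-of-region competition:", remote)
print("Sheltered local work:", local)
```

The pattern the sketch makes visible is the post's point: the high-wage rows land in the first bucket, the low-wage rows in the second.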

Wednesday, October 19, 2005

Ten steps to saving the planet

I don't normally simply link to news stories, but Dave Reay's "Your Top Ten ways to take on global warming" in the New Scientist (10 Sep 05) is so good that it deserves mention.

In case the story has been archived behind the subscription wall by the time you click on the link, here's the list in short:
    1. Dress for the weather
    2. Get out of the car
    3. Get into composting
    4. Fly less, especially short haul
    5. Change your driving habits - or better still, your car
    6. Remember the appliance of science
    7. Avoid flatulent and jet-setting food
    8. Learn the 3 Rs
    9. Improve your ethics at work
    10. Go green at the final checkout
The annual household savings for each of these steps is around 1 tonne of CO2. As I blogged a while back, that's what a terabyte of disk storage costs the atmosphere. In the US, every person emits over 20 tonnes of greenhouse gas every year.
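A back-of-envelope check on those figures, taking the rounded numbers quoted above as assumptions:

```python
# Back-of-envelope arithmetic on the figures above (rounded assumptions).
steps = 10                    # Reay's ten steps
saving_per_step_t = 1.0       # ~1 tonne CO2 saved per step, per household, per year
us_per_capita_t = 20.0        # >20 tonnes greenhouse gas per US person per year

total_saving_t = steps * saving_per_step_t
fraction = total_saving_t / us_per_capita_t

print(f"All ten steps together: ~{total_saving_t:.0f} tonnes/year")
print(f"About {fraction:.0%} of one American's annual emissions")
```

So even a household that does everything on the list saves roughly half of a single person's annual footprint, under these rounded figures.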

Getting their priorities right

To alleviate some of the unrelieved seriousness of recent posts, here's news from Europe courtesy of New Scientist:

"A beer mat that knows when a glass is nearly empty and automatically asks for a refill has been created by thirsty researchers in Germany.

"Andreas Butz at the University of Munich and Michael Schmitz from Saarland University came up with the idea while out drinking with their students."

I somehow can't imagine this happening in the research universities of the Puritan States of America.

More from CNN.

Monday, October 17, 2005

Money? No thanks!

Matt Marshall reports on start-ups that don't need VC money:

"Many Internet entrepreneurs don't need the cash, because they're building
products cheaply — using open Web technologies, often with two or three developers. "

"SugarCRM's software was done dirt cheap. Roberts and his small team worked out of their homes, chatted through the night via computer on Yahoo!'s Instant Messenger and met only once a week at a small borrowed office. Within four months, they had launched a test version and had 1,000 people downloading the software."
SugarCRM took the money anyway, because they needed the connections of one of their VCs. It worked, it seems; the CEO claims that ten companies have switched over from the market leader. In this case, VC stood for "Venture Connections".

The traditionally cited "factors of production" are land, labor, and capital (and coordination/entrepreneurship, some say). It's clear that in the case of software, know-how matters more than any of those. One might say that the ability to innovate derives from "human capital", that is, education and training. I'm not yet convinced, since neither the inventive urge nor the common knowledge context of the Internet community has the properties of physical capital. Regardless: the assets needed for knowledge production are scant, and this calls into question the role of the providers of such assets.

There is often too much money chasing too few deals in the VC world; as more businesses are primarily knowledge-based, this problem will get worse. The key resource entrepreneurs require will increasingly be just social, not financial, capital; they always needed it in the past, but VCs were able to package it in money and make a $$ return for their investors. This is a case where knowledge is losing its husk of money; I argued in Why you should care about CPCM that digital media are a case where information has lost its husk of matter.

The obverse point is that the barriers to entry are very low. If a newbie and three friends can take customers away from a market leader after four months' work, someone else can do the same to them. If the products are intangible, churn will be high. The barrier to entry will be as intangible as the threat. Brand will matter: the loyalty and trust that users place in a provider who buffers them from too much change.

Sunday, October 16, 2005

Brain Fitness Clubs

For a society that runs on intellect - this is the Information Age, they tell us, and We're All Knowledge Workers Now - it's curious that Brain Fitness Clubs haven't swept the nation. Honing and toning the body is a perennial fashion phenomenon. Yoga is big this year; last year it was Tae Bo. Beautiful Minds only apply to crazy economists, but everybody aspires to a Beautiful Body.

It's medically accepted that "use it or lose it" applies to the mind as well as the body. A regular program of training would improve mental performance just as it helps with physical well-being. It's easy to imagine an Oprah-ready offering; one can re-use all the tricks that health clubs have developed, like fitness assessments, group work-outs, personal trainers, expensive equipment, and a range of activities tailored to every taste. (There won't be much of a market for spandex accessories, though, and the top practitioners won't be very telegenic.) All it needs is a charismatic entrepreneur who's smart, sexy and a super-seller.

Mental fitness is already established in geriatrics, where Early Boomers buy what it takes to stave off the debilities, mental as well as physical, of old age. Cognitive training can reverse cognitive impairment in many seniors.

The mystery is why it hasn't shown up in the mainstream. Perhaps people feel that their work gives them enough mental exercise, and that they couldn't bear to do any more. However, in the days of farm labor, workers would turn to sport over the week-end and dances in the evenings.

Perhaps it's the association with school. Most people hated school, and mental training sounds too much like being back in class.

Perhaps it's the belief that one doesn't need to learn how to think. We all know how to think, just like we know how to breathe and walk; we don't need classes on how to walk around, do we? (Actually, we do; cf. the Alexander Technique.) Sure, occasionally we need some one-off training in a new technique, but after that it's just application - right?

Western disinterest in regular training and study with a teacher of one's craft has always perplexed me. It is taken for granted in many Eastern traditions, whether in martial arts (the craft of killing people) or meditation (the craft of cultivating the mind).

Last but not least, it could be that the term "mental health" in fact connotes mental illness with all the stigmas that entails, whereas "physical health" is something that's a good in itself.

Currently, mental fitness is either a private, personal activity (crossword puzzles, sudoku) or a social one grounded in the humanities (book clubs, philosophical discussions).

However, outsourcing and offshoring are rubbing American noses in the prospect of being put out of their job by smart young aliens. It won't be long before a canny entrepreneur figures out how to franchise getting yourself a supple and sexy mind. Some elements of the offering:
  • Meditation to increase concentration and creativity
  • Memory training and competitions
  • Strategy games - remember the old guys on the sidewalk playing chess or go?
  • Mind-body combo activities like combining two senses, or doing complex tasks with your non-dominant hand
  • Equipment - expensive hardware to make you feel your subscription is buying something real, e.g. mind-controlled video games (overview)
  • "Circuits" - a pre-planned sequence of activities from the above list
See you down Sand Hill Road!

Thursday, October 13, 2005

Buying, finding, friends and fun

EBay’s acquisition of Verisign’s payment processing business takes another chip out of traditional banks’ defenses. It seems to be more than just a PR alliance, given the purchase and the synergy with Paypal.

I’m intrigued to see a market maker becoming a banker. A market-maker often has to be a guarantor of liquidity, and so it’s in a good position to clear payments too. I can’t think of historical precedents, but then I don’t know much financial history. None of the traditional banks have bought traditional exchanges (or vice versa) as far as I know.

This is part of the “horizontalizing” of computing infrastructure driven by the web architecture. It’s no longer a vertical layering of platforms and apps, but a web of adjacent functional categories. Two obvious centers of gravity:

  • How you find stuff on the web: lead competitors Google and Yahoo. Key assets: search technology and page view monetization.
  • How you buy stuff: eBay and Amazon. Key assets: inventory and user-produced content.
Then there are two more which are more debatable:
  • The “how you entertain yourself” category: Yahoo, Apple, Real. The assets are media deals and user experience.

  • The “how you hang out with other people” category. I lump all communications in here from IM and Skype to Friendster and MSN Spaces; the key assets are reputation and buddy lists. It’s not clear who the leaders are here; in IM it’s AOL vs Microsoft and Yahoo.

Application programming interfaces will be harder to monopolize in this scenario than in the vertical one. Since the major categories are webbed together, there are more interface paths to control than simply apps-to-OS. The lead players in one constellation are often strong in others; for example, Yahoo is #1 in content and #2 in search. The Number Two player will always be willing to give away functionality when they’re not on home turf in order to undermine the leader; in turn, they'll be undermined by that leader encroaching on their turf. The end result will be two or three confederacies that span multiple centers of gravity, e.g. Google-AOL-Real vs. Yahoo-Amazon vs. eBay-Microsoft.

The value for developers will be in bridging between the centers of gravity. The classic mash-up case in fact sits between the “find stuff” and “buy stuff” constellations (Google and Craig’s List, respectively). Developers will have to deal with multiple possible combinations. It’s likely that individual developers will align with one confederacy, though their loyalty will shift as the constellations evolve. This is a good opportunity for tools makers, but inefficient for developers and frustrating for those with dreams of market dominance.

eBay becoming a banker signals the maturity of the web, and the likelihood of more regulation. The main players in regulating the internet in the US have been the FCC and the FTC, and it’s mostly been about communications (VOIP, law enforcement access, spam). The eBay action begins to lure in the Federal Reserve. International regulators will get even more involved once money flows across borders. Content and communications are pretty important, which is why governments are making a play for ICANN, but money is where the action really is.

As always, it’s striking that all the names are American (all the more so now that Skype has been swallowed by eBay). Asia and Europe are now on a par with the US on internet use (stats), and the mystery of where their champions are becomes more striking by the day. I guess it’s evidence of the USA’s first mover advantage coupled with a system that fosters entrepreneurialism. The players who are likely to use regulation against the market leader are probably going to be countries, not companies (cf. again the fight over ICANN). The one place where market innovation with global impact is likely to arise is China. I’m intrigued by the China/Singapore combo. They’re both totalitarian market economies; China has the audience, and Singapore has the lead in web sophistication. The surprise is South Korea. While it’s a leader in social adoption of the broadband web, its problem is that industrial policy is strongly biased towards advantaging CE manufacturers; software is just a way of selling boxes.

Wednesday, October 12, 2005

Now we just need a charity bracelet

Northwest Tree Octopus logo
... a nice puce-colored one with little suckers.

If the endless multiplication of memorial ribbons is getting you down, or if the lobbyization of American politics is driving you nuts, the campaign to save the Pacific Northwest Tree Octopus may just be the one that takes you over the top.

Link (thanks to S.)