Saturday, December 31, 2005
According to elementary economics texts, the raw material for any productive activity can be put in one of three categories: land (raw materials, in general), labor, and capital. Some economists mention entrepreneurship as a fourth factor – but none talk about knowledge. This is strange, since know-how is the key determinant of the most important kind of output: increased production. Still, it’s not that strange, since knowledge has unusual properties: there is no metric for it, and one can’t calculate a monetary rate for it (cf. $/acre for land).
An example from agriculture
Imagine that you are a crop farmer. Your inputs are land and other raw materials like fertilizer and seed; your labor in planting, cultivating and harvesting the crop; and money you’ve borrowed from the bank to pay for your tractor. You can increase output by increasing any of these factors: cultivating more land, working more hours, or borrowing money to buy a better tractor or better seed.
However, you can also increase output through know-how. For example, you might discover that your land is better suited to one kind of corn rather than another. You could make a more substantial improvement in output if you changed your practices, for example by implementing crop rotation. Farmers in Europe had practiced a three-year rotation since the Middle Ages: rye or winter wheat, followed by spring oats or barley, then letting the soil rest (fallow) during the third year. Four-field rotation (wheat, barley, turnips, and clover; no fallow) was a key development in the British Agricultural Revolution in the 18th Century. This system removed the need for a fallow period and allowed livestock to be bred year-round. (I suspect that if four-crop rotation were invented today, it would be eligible for a business process patent.)
Most of the increases in our material well-being have come about through innovation, that is, the application of knowledge. How is it, then, that knowledge as a factor of production gets such a cursory treatment in traditional economics?
A key difficulty is that knowledge is easy to describe but very hard to measure. One can talk about uses of knowledge, but I have so far found no simple metric.
It’s even hard to measure information content. There are many different perspectives, such as: library science (eg a user-centered measure of information); information theory (measuring data channel capacity); and algorithmic complexity (eg Kolmogorov complexity). All give different results.
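To make just one of these perspectives concrete: Shannon’s information theory measures the information content of a message as its entropy, in bits per symbol. Here’s a minimal sketch of the character-level version (the function name is mine):

```python
import math
from collections import Counter

def shannon_entropy(text):
    """Bits of information per character, from character frequencies.

    A message of one repeated symbol carries 0 bits/char; a uniform
    mix of two symbols carries 1 bit/char, of four symbols 2 bits/char.
    """
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

Even this tidy number illustrates the problem: the same string gets a different “information content” under Kolmogorov complexity or a user-centered library-science measure, so the answer depends entirely on the perspective chosen.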
One can always, of course, argue that money is the ultimate metric: the knowledge value of something is what someone will pay for it. However, this is true for anything, including all the other factors of production. The difference is that land, labor and capital all have an underlying “objective” measure. One cannot calculate a $/something rate for knowledge in the way one can for the other three.
Let’s say land is measured in acres and labor in hours, and money in dollars. You’ll pay me so much per acre of land, so much per hour of labor, and so many cents of interest per dollar I loan you. Land in different locations, labor of different kinds, and loans of different risks will earn different payment rates.
Knowledge does have some value when it’s sold, e.g. when a patent is licensed or when a list of customer names is valued on a balance sheet. However, there’s no rate, no “$/something” for the knowledge purchased. That suggests that the underlying concept is indefinite. It is perhaps so indefinite that we are fooling ourselves by even imagining that it exists.
Treating knowledge as a physical object
The Knowing Is Seeing metaphor is pervasive and intuitive, and an essential part of our philosophical toolkit via Descartes’ work (see Ch.19 of Lakoff and Johnson’s Philosophy in the Flesh (1999) for a detailed discussion). The metaphor treats an Idea as an Object Seen, Knowing as Seeing, Knowing an Idea as Seeing an Object, etc. The catch is that the objects that we think with in this metaphor are, necessarily, objects – and physical objects don’t behave at all like ideas. Most importantly (regular readers will have seen this coming a mile off, and winced at the prospect of the deceased pony being thrashed again), objects are rival, and ideas are non-rival.
This little problem has always been present in economics, but it hasn’t been critical since knowledge has always been wrapped in stuff. Until the advent of software packages like TurboTax, one bought expert advice by the hour from a person. The advice was intangible and non-rivalrous, but its carrier wasn’t; if I was using the accountant’s time, you were getting less of it. However, knowledge embodied as software is “doubly non-rival”; not only is its knowledge content non-rival, but the software itself is too: my use of TurboTax doesn’t diminish your ability to use your copy.
The bottom line
As the economy is increasingly built out of knowledge, and as the absolute cost of physical goods continues to drop, we are effectively sloughing the husk of stuff off the knowledge we depend on. Managing our way into the future effectively is forcing us to think more keenly about knowledge as a factor of production.
To use ten-dollar words: the intersection of epistemology and economics is a necessary and fruitful area of research as the knowledge economy grows in importance.
Friday, December 30, 2005
My tai chi teacher Yang Jun said last night that you should ‘join with your opponent’ to respond to a punch. He showed how hard it is to meet a punch head-on; your timing has to be very good, and you have to yield in just the right way to absorb the force and turn it away without hurting yourself. (He could do it easily, of course.) It’s better to swing your arm down across the direction of the strike, like a propeller in front of your body. Once you make contact, your arm naturally spirals around your opponent’s forearm, swinging it out of the way.
Master Yang explained that the philosophy of ‘joining with your opponent’ before attacking was part of Chinese culture. It’s the yin/yang philosophy: if you want to push, start by pulling; if you want to go horizontally, start in the vertical. This attitude is deeply ingrained in tai chi, which is a ‘soft’ martial art; one of its guiding images is that the energy of a master is like ‘steel wrapped in cotton’.
These ideas seemed applicable to the long-term geopolitical contest between the
Monday, December 26, 2005
I originally formulated it in this way:
When a corporate executive makes a statement under pressure, prepending “Not” to the statement will get you closer to the truth than what is said.
It was designed to apply to statements like these:
I am very confident about the prospects for our company in the coming year.
I am excited about the challenges that lie ahead.
There is no doubt that our product blows the competition away.
This area represents a massive opportunity.
The entire organization is connected by a common vision.
The team is excited and engaged, and morale has never been better.
Of course, it applies more generally to leaders (and the rest of us…), as in statements like:
Peace in our time.
There is no alternative.
I am not a crook.
I did not have sex with that woman.
George Bush II, who prizes loyalty above all else, was the exception to this rule when he expressed his complete confidence in Donald Rumsfeld some months ago. Contrary to political custom, Rumsfeld was not fired shortly afterwards.
The Not Rule also applies to claims made about social trends. Because trends take so long to surface, they’re usually moot by the time debate about them becomes contentious. For example, the hue and cry over the lack of respect being paid to the Christian faith in America today obscures the fact that it wouldn’t be happening if Christianity were not so powerful. Likewise, breathless columnists are coming out in a cold sweat about jobs to be lost to India and China at exactly the time when those actually doing off-shoring have concluded that the process will take longer than expected.
To close, here’s a statement of my own: I am not a contrarian.
Brin appears as the inventor on three Google patents. His co-founder Larry Page is the inventor on patents 6,285,999 (“Method for node ranking in a linked database”, filed 9 Jan 1998) and 6,799,176 (“Method for scoring documents in a linked database”, filed 6 July 2001), but these patents have been assigned to Stanford.
I wonder how much Google is paying Stanford in licensing fees? It can't be much... Stanford's 2005/2006 Consolidated Budget shows $263 million in Other Income; I assume patent licensing fees are in there. The 2004 Annual Report notes that special program fees and other income were $259 million in FY04; this includes technology licensing as well as service centers, executive education and other programs.
It’s curious that Page’s Stanford patent 6,799,176 was filed about eight months after the first Google patent 6,678,681 by Brin.
Most of the patents are for search and query technologies, as one might expect. However, three are for hardware designs: Cooling baffle and fan mount apparatus, Cable management for rack mounted computing system, and Drive cooling baffle. Just in case anyone was in doubt, here's evidence that running a large data center is a key competency for Google.
Our intuitions are grounded in how our brains use our bodies to interact with the physical world. Software confounds those intuitions because it’s doubly inexhaustible: it’s made up of ideas which can’t be “used up”, and the resulting product is itself perfectly copiable infinitely many times. Both the input and the output of manufacturing software are non-rivalrous, to use the economic jargon.
As we build a knowledge economy, we are surrounding ourselves with abstractions for which our body-based reasoning is ill-prepared. Examples beyond software include quantum mechanics, persistent exponential growth (eg Moore’s Law for silicon chips) and products built on pure probability (eg futures markets, and lotteries in general). Not all of this is novel, though. Laws, lotteries and logic have been around for millennia. However, people at large have not had to worry about their weirdness because they have only been parochial concerns to date. The pervasiveness of software can open our eyes – especially if we’re geeks and not wonks – to some of the curious properties of law.
One can think of the legal code as the operating system for a country. If the laws are the operating system, then contracts are the applications. There are many more contract lawyers than lobbyists, just as there are many more applications than operating systems.
The amount of code in a software program can be measured by counting the number of lines of source code, that is, the number of lines of human-readable instructions. Contemporary operating systems contain tens of millions of lines of code (Wikipedia cites line counts for some common operating systems).
I was surprised when I totaled up the number of lines in the US Code, the compendium of all the (federal) laws of the United States: about 5 million lines (spreadsheet). That’s about the same size as Windows NT or the Linux 2.6.0 kernel, at 4 million and 6 million lines of source code, respectively.
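For anyone who wants to repeat the tally, the rough measure is just “non-blank lines of text”. A minimal sketch in Python (the file paths in the usage comment are hypothetical; real line-counting tools also filter out comments):

```python
def count_lines(paths, skip_blank=True):
    """Total the lines across a set of text files.

    By default blank lines are skipped, the usual convention for
    rough lines-of-code counts.
    """
    total = 0
    for path in paths:
        with open(path, encoding="utf-8", errors="ignore") as f:
            for line in f:
                if skip_blank and not line.strip():
                    continue
                total += 1
    return total

# Hypothetical usage: count_lines(glob.glob("uscode/title*.txt"))
```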
The “core development team” for the US Code is rather smaller than that for Windows or Linux, which are both said to be in the region of 8,000 people. Congress consists of 100 senators, 435 members of the House, and their legislative staff. If we assume a member-to-staff ratio of 1:3, that’s a team of roughly 2,100 “developers”. Of course, one can’t forget the lobbyists, many of whom are lawyers who do the actual legislative drafting. Roberta Baskin, Executive Director of The Center for Public Integrity, estimates that the federal lobbying industry employs about 14,000 people to influence the decisions of Congress, the White House, and officials at more than 200 federal agencies. Not all 14,000 are working on the US Code; many are working on agency regulations, which geeks might want to think of as the “middleware” of the legislative machine. (Note that I’ve ignored state law and local regulations in this approximation; it shouldn’t change the answer by more than about a factor of 2.) In all, the number of people writing the operating system for the United States is of the same order of magnitude as the teams working on PC operating systems.
The analogy offers endless opportunities for harmless fun and mischievous comparisons.
Developers and lawyers are quite similar: both write code, both worry about misplaced punctuation marks that could ruin everything, and both spend a lot of time on “edge cases”. Neither has ever seen a piece of code that they couldn’t do better, and both spend more time maintaining and tweaking legacy code than writing new stuff. However, it may take a little while for the maintenance of the US Code to be off-shored to India…
Legislation is infested with inconsistency; software tools that track links between code modules could help find discrepancies. S. remembers that her family was perplexed by what to do about an old tree in their garden. One regulation insisted that they cut it down, because it was old and rotten, and another insisted that it be protected, because it was just plain old. (They cut it down.) On the other hand, while tools can find buffer overflows in software, one needs the CBO to find budget overflows, since legislation is code which is designed to run in the future, and to have its worst side-effects after its drafters have happily retired to work as lobbyists.
One could see most of the activity in national and state capitals as the frantic “patching” of unintended side-effects in legal code. Tax lawyers seeking loopholes and hackers looking for trapdoors have similar goals – making the code do something it was not designed for. Unfortunately, it takes rather longer to patch the legal code than it does to issue a security update.
The judicial system is the “execution environment” for the code that makes up the law of a country. (In a country as enamored of the death penalty as the United States, that computing term is more accurate than one might wish.) The courts figure out what the legal code actually does in practice. The function of the courts highlights a weakness in my analogy: laws are written in ordinary language with all its delightful vagueness, whereas computer code is written in mathematical symbols dressed up to look like language. In software, ambiguity is a bug; in law, it’s often a feature.
Saturday, December 24, 2005
Ernő Rubik invented a variety of rotating cube toys in the mid-1970s. He was a Hungarian sculptor and professor of architecture with an interest in geometry and 3D forms. According to a Wikipedia article, Rubik obtained Hungarian patent HU170062 for the Magic Cube in 1975, but did not take out international patents.
The US Patent Office shows a series of patents granted to Rubik in the 1980s and early 1990s, referring to earlier filings in Hungary:
- 5,184,822 “Three-dimensional puzzle” (1993, Hungary 1983): single cube with holes in side
- 4,471,959 “Logical toy” (1984, Hungary 1980): Horizontal pushers in two layers
- 4,410,179 “Shiftable element puzzle” (1983, Hungary 1977): Cylindrical puzzle, with two layers of six petals
- 4,392,323 “Toy with turnable elements for forming geometric shapes” (1983, Hungary 1980): One-dimensional chain of triangular pieces
- 4,378,117 “Spatial logical toy” (1983, Hungary 1980): Various 2x2x2 arrangements
- 4,378,116 “Spatial logical toy” (1983, Hungary 1978): A two-layer puzzle, with 3x3 cells in each layer
Larry Nichols received US patent 3,655,201 in 1972 for a “pattern forming puzzle and method with pieces rotatable in groups”. The filing concentrates on a 2x2x2 design, but the drawings show larger compositions. The thing is held together with magnets. According to Wikipedia, Ideal Toys lost a patent infringement suit based on this patent in 1984.
Terutoshi Ishigi acquired Japanese patent JP558192 for a nearly identical mechanism while Rubik's patent was being processed, but Ishigi is generally credited with an independent reinvention.
Friday, December 16, 2005
Tim Wu’s opinion piece in CNET is an admirable attempt at extending the case for network neutrality. He argues that BellSouth’s hope to charge companies that want their sites to load faster than those of a rival isn’t illegal or immoral, but stupid. It’s an important step to finding the win-win-win-win for network operators, consumers, established app/content providers, and new app/content providers.
This is a welcome new line of reasoning from the “open access” camp, but it’s not persuasive yet. While it’s true that companies sometimes do things their customers hate, it’s typically by accident and not on purpose. No business can afford to alienate its customers over the long run; companies will do things that irritate some of their customers some of the time, but usually as part of a conscious trade-off.
Buyers and sellers are always engaged in a tussle: vendors want to sell for as much as possible, and customers want to buy as cheaply as possible. In the end, if they decide to do business, both settle for less than they’d like but more than they’d otherwise get. Both sellers and buyers make trade-offs, and a trade-off that one (class of) customer dislikes isn’t necessarily bad business overall.
Wu argues that BellSouth’s model, that is, trying to add value to its pipes by privileging some traffic flows over others, “neglects the market values of neutrality and consumer choice”. I’m not persuaded that the likes of BellSouth “err by thinking that their customers want their services, as opposed to better access to an open market.” Only policy wonks worry about “access to open markets”. Most consumers just want products at the lowest price for the highest quality. Open markets often provide this, but they are a means to an end for consumers, not an end in themselves. If they can get a better product for no additional cost – if, say, Real Networks paid BellSouth a premium to ensure that a Rhapsody media stream gets Platinum Tier treatment even though a customer has only paid for Silver Tier network performance – the consumer will take it.
Wu argues that neutral products and neutral networks are usually more valuable to customers, but neglects to explain how a company should balance this with the fact that such neutral systems are usually less valuable to sellers. As Isenberg and Weinberger said in The Paradox of the Best Network: “The best network is the hardest one to make money running.” Amazon.com’s home page isn’t a blank Google-esque page with only a search box; it uses the customer’s profile to lead off with recommendations that are not neutral. Any web search result, including on Google, is headlined by paid-for ad links that aren’t “neutral”. In many cases customers even find such bias useful, or at least sufficiently un-intrusive that they don’t go to another supplier.
Wu has an axe to grind: as a customer and as an activist, network neutrality is his top priority. Bias in the network is just bad, even if it were to reduce the profits of network providers to the point that they don’t upgrade their networks. If he can’t win the argument by claims to law or ethics, it’s worth his while to persuade the network operators that neutrality is a better business strategy. That’s a sensible goal. However, there’s still a way to go before we can persuade a telco or cableco executive that trying to make money by adding value to their profitless commodity pipe is a bad business strategy.
Wednesday, December 14, 2005
I’ve started looking out for patents on everyday things since I’m thinking a lot about intellectual goods at the moment. It’s easy to imagine that patents should be for big ideas; in fact, they’re usually for very mundane improvements. Since patents leave a bad taste in some people’s mouths, I’ll start with Listerine.
The label on my bottle of CoolMint Listerine discloses two patents: one for the formulation (5,891,422) and one for the design of the bottle (D316,225). I’ll ignore the design patent – who knew that one could patent the shape of a bottle? – since the chemistry is more interesting: the invention is an effective mouthwash formulation that reduces the amount of ethanol, which has to date been a key active ingredient.
Ethanol kills mouth bugs, but the patent application says that “there have been objections to it on health grounds”. (Since humanity has been getting high on the stuff for millennia via an endless variety of alcoholic beverages, it’s not clear to me what these objections might be – unless The Prohibition Is Back. Perhaps some kids are getting drunk drinking Listerine? Stranger things have happened in the US…) Unfortunately, if you reduce the amount of ethanol, a mouthwash doesn’t work as well. It also doesn’t taste or look as good, because the solubility of other ingredients (like thymol, menthol, and eucalyptol) is reduced.
Warner-Lambert’s chemists found that alcohols having 3 to 6 carbon atoms work just as well as ethanol, if not better. (Ethanol has two carbon atoms.) The example given in the patent disclosure is 1-propanol.
Monday, December 05, 2005
The property fight is not a pretty quarrel, since talking about assets conjures up the heroes and villains of the capitalism vs. socialism debate. It’s in a way an argument about the applicability of old metaphors to new ideas; a metaphor like Knowledge Is Property gives us important tools with which to analyze a complex problem, but may also lead us astray if its premises are incorrect.
I stumbled over a less loaded concept while reading Lakoff & Johnson’s book about cognitive science, metaphor and philosophy. They define “Resources” in order to explore the Time Is A Resource metaphor. I think it is instructive to explore the Knowledge Is A Resource metaphor. The Knowledge Is Property metaphor is derived from it, and one can use Knowledge Is A Resource to explore our conceptual models in a less loaded setting than when using Knowledge Is Property.
Lakoff & Johnson give the following schema for the concept of a Resource. The schema tries to characterize what is typically meant by a resource – actually, a non-renewable resource.
The Elements of the Schema:
The User of the Resource
A Purpose that requires an amount of the Resource
The Value of the Resource
The Value of the Purpose
This Schema is used in the following conceptual scenario:
The User wants to achieve a Purpose.
The Purpose requires an amount of the Resource.
The User has, or acquires the use of, the Resource.
The User used up an amount of the Resource to achieve the Purpose.
The portion of the Resource used is no longer available to the User.
The Value of the Resource used has been lost to the User.
The Value of the Purpose achieved has been gained by the User.
Given this schema, other concepts are defined relative to it: concepts like Scarcity, Efficiency, Waste, and Savings.
Knowledge Is A Resource is a commonly used metaphor. It shows up in sentences like:
Knowledge about how best to respond to that problem is scarce.
I need to know more before I decide.
She used her knowledge effectively to solve the problem.
He squandered his education.
These business processes extract and save knowledge, and make it available to other employees.
Without a doubt the pursuit of knowledge is worthwhile.
These examples indicate that we commonly treat Knowledge as a (non-renewable) resource.
However, knowledge doesn’t fit the Resource schema very well. It is not non-renewable in the same sense that physical resources are; it’s reasonable to assume that there is no limit to human inventiveness. Knowledge isn’t used up to achieve a purpose; knowledge gained by one User isn’t lost by another. The schema breaks down because the action contemplated (“The User used up an amount of the Resource to achieve the Purpose”) doesn’t match the properties of a knowledge resource.
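The mismatch can be made concrete with a toy model (the class names are my own, purely illustrative): “using” a rival resource depletes a stock, while “using” knowledge leaves it exactly as it was.

```python
class RivalResource:
    """Fits the Resource schema: an amount used is lost to the User."""

    def __init__(self, amount):
        self.amount = amount

    def use(self, qty):
        if qty > self.amount:
            raise ValueError("not enough of the resource left")
        self.amount -= qty  # the portion used is gone
        return qty


class Knowledge:
    """Breaks the schema: use never diminishes what is available."""

    def __init__(self, content):
        self.content = content

    def use(self):
        # The same content remains available, for any number of users.
        return self.content
```

Every use() of Knowledge returns the full content and leaves it untouched, which is precisely the step of the scenario that fails for knowledge.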
And yet, we use it. I suspect that we generalize from our day-to-day use of knowledge to achieve a purpose, which is a key property of a Resource, to the notion that knowledge also satisfies the other conditions of Resources as we know them. Instinctive use of Knowledge Is A Resource metaphors may thus lead us astray, particularly to the extent that the Resource schema underpins the Property schema.
A similar mechanism is at work when wireless spectrum is treated as property. As Hatfield and Weiser have argued, the application of the metaphor Spectrum Is Property is more complex than often portrayed.
They make essentially two arguments: boundaries can’t be drawn objectively, and market manipulation is likely. First, spectrum doesn’t allow for clear boundaries in the way that real property does since radio wave propagation depends on circumstances (making physical boundaries for spectrum allocations problematic), and signals in adjacent frequency bands interfere with each other (confusing efforts to create frequency boundaries). Second, they argue that “if property rights are granted in a manner that would allow injunctions for trespass, it is quite possible that parties could bring actions solely to threaten an injunction and obtain a license along the lines of the much-criticized patent trolls.”
In this case, the Spectrum Is A Resource metaphor is questionable because the very definition of the Resource is in doubt. If Hatfield and Weiser are correct, the Resource definition is arbitrary.
The next step in this work (in progress) is to collect a corpus of metaphors used to describe knowledge goods by the various participants in the debate. I would not be surprised to find that some of the conflicts are based on irreconcilable metaphors, rather than economics. These metaphors will help map out the conceptual systems in play, which may then lead to ways to resolve – or at least make visible – the essential conflicts.
 In this post I’m going to treat knowledge, information and intellectual goods as equivalent. They’re clearly not, but I think the analysis below is sufficiently general to work when “information” or “intellectual good” is substituted for “knowledge”.
George Lakoff and Mark Johnson, Philosophy in the Flesh: The Embodied Mind and Its Challenge to Western Thought (1999)
 Dale Hatfield and Phil Weiser, Property Rights in Spectrum: Taking the Next Step, SSRN, September 30, 2005