Wednesday, August 02, 2006

An endless vista

After yesterday’s bad news about Microsoft’s next operating system (Vista testers to Microsoft: Even the bugs aren't stable yet), I’m beginning to think the unthinkable: perhaps Vista will never ship.

Oh sure, something called Vista will ship next year; marketing and licensing imperatives demand it. And sure, it will be but a pale shadow of the vision when the project started; no product ever makes it out the door with all the intended features. I’m beginning to doubt, however, that the platform that ships will be what Microsoft needs it to be. It could become the Big Dig of Redmond: even if it is eventually finished, at vast cost over-runs, major flaws will continue to appear throughout its life.

There was relief at senior levels in Microsoft when Windows XP shipped. It wasn’t clear, even then, whether a substantial upgrade to such a complex product could be accomplished. Many years have passed since then, and many demands have been added to the wish-list. The code base is a hairball, the result as much of business decisions to lock in customers and preclude anti-trust action as of engineering philosophy. It is so large, interconnected, and poorly documented that any upgrade is a mind-boggling feat.

The important question, though, is not whether and how Vista ships; it’s what happens next. My impression is that the Windows NT code base, as built, is too complex to be the basis for substantial growth, and there is no alternative in the wings. The NT code base, on which XP and Vista are built, has run out of steam sooner than planned. There is no code base that is to Windows NT what NT was to Windows 95.

The catch is that Microsoft doesn’t have the means to start from scratch, even if it had the time. It doesn’t have the resources to fight a two-front war. Wall Street balked at the investment that it’s making to match Google; it cannot now spend yet another $2 billion to replace its core operating system.

The twilight of the operating system as the engine for innovation may come sooner than I expected. The company may quietly decide – Steven Sinofsky and Steve Ballmer may already have decided – that future OS upgrades will be incremental rather than substantial. Attention is shifting to the network, and hosted applications. Microsoft will be fighting on Google’s ground.

Monday, July 31, 2006

Mental models for wireless spectrum

Technologist David Reed once said, “There's no scarcity of spectrum any more than there's a scarcity of the color green." [1] This quip presumes the technically correct meaning of spectrum as a range of vibration frequencies of electromagnetic waves. However, it’s clear that when most people talk about spectrum, they don’t mean a vibration frequency. What do they mean?

I've been working through examples of how lay people conceive of spectrum, and talk about spectrum policy. An early draft (Word doc; source data in spreadsheet form) explains why some policies and proposals make sense to us, and others don’t. I’m not making claims about how experts think, though I suspect that the metaphors I’ll describe are at the root of their thinking, too.

In summary, spectrum is conceived of as a spatial resource, with two common variants: spectrum as a set of containers (bands), and spectrum as land. There are two common mental models of wireless signals: as objects moving through space, and as sounds, particularly speech. This leads to two mental models for interference, which entail conflicting, and sometimes incorrect, deductions.

Spectrum, as the concept is treated by regulators and politicians, is a resource used for communication which is, in the first instance, under state control. Its assignment is thus the stuff of politics, that is, arguments over the distribution of scarce resources. The spectrum-as-land model is “natural” to most people because the underlying spatial metaphor, of real estate in particular, fits our notion of land resources.

The results of this analysis can be used to identify policy-making pitfalls. For example, Hatfield & Weiser [2] explain why the transition to a property rights model for spectrum is far more complex than commonly portrayed; this work hopes to explain why a model of real property rights is attractive in the first place.

---------

[1] Quoted by David Weinberger in The myth of interference, Salon 12 March 2003. Curiously, if one considers electromagnetic radiation in optical fiber, there is indeed “a scarcity of the color green” because each fiber supports a finite number of wavelengths. For example, links in the National LambdaRail network use dense wavelength-division multiplexing (DWDM), which allows up to 32 or 40 individual optical wavelengths to be used (depending on hardware configuration at each end). Once those wavelengths are occupied, no more are available.

[2] Hatfield, Dale N and Philip J Weiser (2006), “Property Rights in Spectrum: Taking the Next Step,” University of Colorado Law School, Paper Number 06-20, June 2006, http://ssrn.com/abstract=818624

Sunday, July 16, 2006

Ted’s Tubes and Larry’s Lanes

The digerati are having a good snigger at Ted Stevens, Chairman of the US Senate committee that’s deciding how the Internet will be regulated. The Daily Show recently lampooned (video clip) his attempts at explaining network neutrality. Senator Stevens said, to much derisive merriment among the net-savvy studio audience:
“The Internet is not something that you just dump something on. It’s not a big truck. It’s, it’s a series of tubes. [...] And if you don’t understand that those tubes can be filled, and if they’re filled when you put your message in it, it gets in line, it’s gonna be delayed by anyone who puts in that tube enormous amounts of material, enormous amounts of material.”

It was amusing because homespun analogies seem out of place coming from a person deciding our high tech future. Ted got a bad rap, though, because no-one can avoid this kind of language. Cognitive science suggests that we have no choice but to use mental models based on the tangible world to reason about intangible things like interpersonal relationships (“we’ve been close for years, but we’re drifting apart”), mathematical abstractions (“the real numbers are points on a line”), and the Internet.

Everyone in this debate misuses metaphor, including Larry Lessig, the house theoretician for network neutrality. An op-ed he wrote for the Washington Post last month with Robert McChesney was premised on an extended (and well-worn) metaphor: the Internet-as-Highway. Some excerpts:

“Congress [will decide whether cable and phone companies] can put toll booths at every on-ramp and exit on the information superhighway. [...] Net neutrality means simply that all like Internet content must be treated alike and move at the same speed over the network. [... Those companies] would be able to sell access to the express lane to deep-pocketed corporations and relegate everyone else to the digital equivalent of a winding dirt road.”

Sen. Stevens uses the Internet-as-Pipes metaphor, and Prof. Lessig prefers Internet-as-Roads. There’s little to choose between them. Both convey some truth, and both have shortcomings.

The superhighway metaphor is inaccurate in that the networks making up the Internet are owned by private agents, whereas most of the highway network is owned by the state. The notion of “speed” is also technically inaccurate: all packets on the Internet move at the same speed (some fraction of the speed of light). More packets per second get through on some parts of the network, but that’s because “there are more parallel lanes,” not because the packets travel faster. The Highway metaphor also implies that two distinct Internets will be created side-by-side (echoes of Separate But (Un)equal), whereas in fact all traffic will move over the same infrastructure, but be prioritized differently.

The choice of analogy has consequences, though. One could mangle poor McLuhan again by saying that the Metaphor is the Message. We think of the highways as a public good, provided by the state for the benefit of all, where everyone is entitled to equal access. This resonates with those on the Left, like Prof. Lessig. The ownership of tubes and pipes varies, but is open to notions of private property and investment, which is agreeable to those on the Right, like Sen. Stevens. Plumbing is invisible, and can safely be left to experts to worry about, whereas roads are something we all feel we have a daily stake in.

The highway metaphor is the more common one in the debate to date. It is perhaps more intelligible; we have more experience with roads than plumbing. However, one shouldn’t forget that it was forged during the last rewrite of the telecom act, which happened during a Democratic administration. I wouldn’t be surprised to see the Right experimenting with alternative metaphors which are better at connoting their values and agenda – plumbing, perhaps, or airlines.

Wednesday, July 12, 2006

Follow the clicks

Spyware is the current BusinessWeek cover story, The Plot to Hijack Your Computer. The technology is now officially mainstream.

Spyware, or “adware” in the terminology of its proponents, figures out our preferences by tracking what we do on the web, and then presents us with tailored pop-up ads. It got a bad name because the software often installs without a user knowing about it, monitors user behavior and relays it back to base, and sometimes disables PCs in the course of trying to disable competing spyware programs.

However, spyware/adware is part of the future of personal computing, because it’s a way to make the dream of “ad-supported software” come true. BusinessWeek reports that a company with access to 10 million computers can make about $100,000 a day; that’s 1¢ per computer per day, or about $3.65 a year. According to Om Malik, writing for Business 2.0 magazine, Google makes around $16 per user per year in advertising; another $3.65 would be an increase of more than 20%.
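The back-of-envelope arithmetic can be checked in a few lines (the daily revenue, installed base, and Google per-user figures come from the BusinessWeek and Business 2.0 reports cited above; the per-year and uplift values are derived from them):

```python
# Back-of-envelope check of the adware revenue figures.
daily_revenue = 100_000      # dollars/day across the installed base (BusinessWeek)
computers = 10_000_000       # computers with the adware installed

per_day = daily_revenue / computers       # dollars per computer per day
per_year = per_day * 365                  # dollars per computer per year
google_per_user = 16.0                    # Om Malik's estimate, $/user/year

uplift = per_year / google_per_user       # fractional increase on Google's figure
print(f"${per_year:.2f}/year per computer, a {uplift:.0%} uplift")
```

Note that $100,000/day over 10 million machines works out to roughly $3.65 per machine per year, which on Malik's $16 figure is closer to a 23% uplift than anything larger.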

Spyware will be tamed over the next few years, and its technologies incorporated into Yahooglesoft products. If Yahoo, Google and Microsoft were as savvy about regulatory politics as the phone companies, they’d be in Washington DC and Brussels right now trying to craft safe harbor regulations which would allow them to take this technology into the mainstream while marginalizing the cowboys. And consumers will probably lap it up: they don’t like being spied on, but they don’t like paying for stuff even more strongly.

Friday, June 30, 2006

A connectivity compromise

The effort to prevent US consumer broadband providers from charging anyone except end-users for improved quality of service has stalled in the Senate. Andrew Orlowski skewers the paranoia of the Neutralistas as only the Register can. He writes:

“Rather than confront the underlying, and very real problems it seeks to redress, the blogging wing of the US left has instead created an alternative cyber-reality - populated by phantom demons, imaginary conspiracies, and bogeymen. [...] The immediate consequence of the focus on "Neutrality" has been to permit the cable lobby to write the most anti-competitive bill for thirty years. Perhaps they knew the bloggers were only playing a game, and wouldn't think to look at the rest of the legislation.”

People may at last be in a mood for a compromise. Here’s one: wireline network operators may not block traffic but they can prioritize it, as long as any content provider can buy prioritized access on equal terms. The conditions can be lifted if true competition in consumer broadband materializes.

The situation

There is fear on both sides:

  • The content community fears that the network operators could use their market power to integrate vertically, lock out new entrants, and extract rents.

  • The operator community fears that anti-competition rules will have unintended consequences that suppress their profitability below sustainable levels.

One can address both sets of fears by recognizing that market power should be mitigated, while taking into account that competition in last-mile broadband would reduce the need for such actions.

    A solution: the Open Offer Internet

    I start with the premise that there is insufficient competition in last mile high speed broadband networks, and that this concentration is likely to suppress innovation and raise prices, thus decreasing consumer welfare. This situation justifies the imposition of "Open Offer" conditions on both telephone and cable companies that offer broadband access:


    1. No traffic blocking; all sites to be accessible to consumers

    2. The operator can enter into arrangements with third parties to improve content delivery, but this offer must be available on reasonable and non-discriminatory terms (taking discounts etc. into account) to all comers.

    3. Operators shall interconnect with all other broadband networks on reasonable and non-discriminatory terms.

    This is not a perpetual mandate, and can change if the competitive situation improves. There would be a review every few years:

    • The FCC reports on compliance with the Open Offer terms. The FCC can get access to confidential company information to make this assessment, but may not make such information public.

    • Operators can request that the Open Offer conditions on them be lifted, if they can prove that all the markets they operate in are competitive.

    • The FCC can (re)impose Open Offer conditions on operators if it sees anti-competitive behavior.

    Notes

    Operators may offer tiered service to consumers if they wish.

    I don’t use the FCC definition of broadband; saying that anything faster than 200kbps is broadband is just silly. Today, “high speed broadband” effectively means speeds faster than 2 Mbps. This will always be a moving target, so it’s better to define it in relative terms. For example: define the threshold of high speed broadband as the lowest speed provided to the top 20% of homes.
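That relative definition — the lowest speed delivered to the top 20% of homes, i.e. the 80th percentile of home connection speeds — can be sketched in code. The function name, the 20% cutoff parameter, and the sample speeds are all illustrative, not drawn from any FCC rule:

```python
# Sketch of the relative "high speed broadband" threshold proposed above:
# the lowest speed enjoyed by the fastest 20% of homes.
def high_speed_threshold(speeds_mbps, top_fraction=0.20):
    """Return the slowest speed within the fastest `top_fraction` of homes."""
    ranked = sorted(speeds_mbps, reverse=True)       # fastest first
    top_n = max(1, int(len(ranked) * top_fraction))  # size of the top group
    return ranked[top_n - 1]                          # slowest member of that group

# Invented speeds (Mbps) for ten homes:
homes = [0.2, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0, 8.0, 10.0]
print(high_speed_threshold(homes))   # top 2 of 10 homes -> 8.0
```

As the market improves, the same rule automatically raises the bar, which is the point of defining the threshold in relative rather than absolute terms.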

    Thursday, June 29, 2006

    Useful self-delusion

    Susan Stamberg’s interview with Lawrence Summers [1] puts on display the kind of person who cannot conceive of personal failure. I don’t think he’s denying responsibility for his failure; he is simply unable to see it.

    This characteristic is so common among leaders that it’s probably a requirement for success. Such people inspire loyalty just because they always see the bright side of every situation. They can persuade others that they’re on the side of right because they believe themselves to be so. When something goes well, it must be because of their actions; when something goes wrong, it must be someone else’s fault.

    Their motivational ability follows from an inability to see their own flaws. In a sense one can’t blame them for not taking responsibility; personal failure is just not the truth as they see it.

    Jeff Skilling of Enron fame is another recent example. A Wall Street Journal article [2] reports that Skilling believed that if he just told the "real" story of Enron, he'd be in no danger. This led him to provide prosecutors with pieces of information that they used effectively against him at trial. He didn’t believe then, and doesn’t believe now, that he committed any crimes, even though a Houston jury convicted him of 19 counts, including conspiracy, fraud and insider trading.

    The “little people” find it hypocritical when leaders insist that employees take responsibility for their actions, but then don’t hold themselves accountable. The beam in the CEO’s eye doesn’t prevent him from seeing the splinter in everyone else’s... But be gentle; how can it be hypocritical if the Chief Ego Officer truthfully doesn’t believe that they’ve done anything wrong?

    ---------

    [1] NPR Morning Edition, “Summers Looks Back at Harvard Presidency,” 29 June 2006

    [2] John Emshwiller, The Wall Street Journal, “In New Interview, Skilling Says He Hurt Case by Speaking Up,” 17 June 2006

    Wednesday, June 28, 2006

    Ephebophobia

    Outsourcing angst is due to insecurity: fear that a comfortable status quo is going to change for the worse.

    The most obvious fear is that of losing one’s job, sooner or later, because someone in Asia can do it more cheaply.

    Since the outsourcer is usually in another country, a fear of foreigners – xenophobia – rapidly creeps into the discussion. There’s more than a little “issue bleed” between the off-shoring and immigration debates.

    Since the conversation is driven by Baby Boomers, there’s also the fear of another Other: the young. The Boomers are now parents and proud grand-parents. They can’t admit to loathing their off-spring; it doesn’t fit the wholesome self-image. However, they are getting old, and the next generation is beginning to threaten their prerogatives.

    Energetic, optimistic, in the full bloom of youth: today’s Menace are the Asian ephebes.

    (Thanks to Nicholas Shum for help with the Greek.)

    Thursday, June 22, 2006

    The engineering brain

    Tren Griffin alerted me to an article by Debra Schiff in EETimes which floats some ideas that have a bearing on the Hard Intangibles problem [1]:
    1. Spatial abilities appear to be more localized in the brain than other skills, such as verbal ability.
    2. Spatial ability is a key trait for engineers, scientists and mathematicians.
    3. The brains of engineers have systemizing mechanisms that are set at higher-than-average levels.
    As Lakoff has pointed out [2], organizational structure is conceptualized as physical structure, as in “The theory is full of holes,” “The fabric of this society is unraveling,” “His proposed plan is really tight; everything fits together very well.” Since systems are complex abstract structures, it’s plausible that we use spatial brain centers to think about systems, and that spatial talent would lead to systematizing ability.

    This raises the obvious question: Has MRI shown that engineers, or high systematizers in general, have more extensive spatial-manipulation centers in their brains? Listening to software engineers definitely suggests that spatial metaphors are central to their practice.

    I dream about putting a software developer in an fMRI while they’re writing code, and seeing the spatial centers in the brain light up. And as an encore, seeing what happens when the system problem cannot be modeled in 3D space, e.g. “high dimensional” challenges like concurrent systems.

    -------

    [1] Debra Schiff, “What drives you? Pick your brain,” EETimes 6/19/2006 http://www.eetimes.com/showArticle.jhtml?articleID=189401732

    [2] Lakoff and Núñez (2000), Where Mathematics Comes From

    Thursday, June 15, 2006

    Brothers?

    The Diary of a Tired Black Man video clip reminded me of the custom in the African-American community of describing people in one’s community as “brother” and “sister,” even if they aren’t relatives.

    Curiously, Afrikaners – those paragons of racism – used to do the same. The secret elite that ran Afrikaner culture was called the “Broederbond,” that is, the society of brothers. Afrikaners of my father’s generation (though not my father) would often call each other “broer” as a sign of solidarity.

    I’ll take as read the vast differences between these groups, such as the oppressed/oppressor distinction.

    There are a few similarities, though, that might have led to this common usage:

    • Both communities are deeply religious, with a strong Protestant strain
    • Both see themselves as threatened racial minorities in a sea of people of another color
    • Both have a folk memory of being displaced, of being strangers in their own land

    Monday, June 12, 2006

    The dreaded decline in American science

    I’m tired of the moaning about the supposed decline of American science and technology. There are frequent forecasts of doom, along with calls (by professors) for increased funding of education, and (by business people) for increased R&D subsidies.

    It doesn’t necessarily follow:

    It’s not at all proven that the US lags in science and technology; see e.g. data cited by Fareed Zakaria on page 2 of his MSNBC column “How Long Will America Lead the World?” According to him, the U.S. is currently ranked the second most competitive economy in the world (by the World Economic Forum), and is first in technology and innovation, first in technological readiness, first in company spending for research and technology and first in the quality of its research institutions.

    Even if it’s true that the US lags, it’s not proven that science/tech is the key factor in innovation. Innovation is creating a new product that makes a difference. Science and technology are necessary, but not sufficient; I’m not even convinced they’re the key factor. The iPod is a great market innovation, but Apple didn’t invent the MP3 player or the online music store. Rather, the key was to design a compelling whole.

    Even if technology were the key factor in innovation, it’s not clear that science/tech is the US’s key competitive advantage going forward. Ricardo’s theory of comparative advantage in trade suggests that countries should focus on the activities at which they’re “most better.” If the US is better at business model innovation than engineering, then it should focus on business, even if its engineering is the best in the world.

    I’m reminded of the old story about the California gold rush: the diggers went home poor, but Levi Strauss made a fortune selling jeans. I suspect that having a science-educated workforce is the gold fever of the knowledge economy boom.

    Rising countries are strong in science and technology; but it doesn’t follow that science and technology is the source of their competitiveness. It is just as possible, and more likely, that it’s the “technology” of market capitalism, selectively applied.

    The talents required to succeed in this economy may well be soft, human skills, like those advocated by Dan Pink in his book “A Whole New Mind”. I have my doubts about it – not least because Dan Pink argues that the future belongs to people like Dan Pink – but it gives a provocative counter-point to the STEM (Science, Technology, Engineering and Mathematics) advocates. Pink’s six essential aptitudes for the coming century are design, story, symphony, empathy, play, and meaning; very little along those lines is taught in engineering school.

    I hear echoes of the Manhattan Project and its successors in the battle cries of the technocrats. The supposed success of science in winning the second world war led to Robert McNamara & Co running the Vietnam war by the numbers, with such great success. Not to mention Donald Rumsfeld’s Technology Über Alles strategy for winning the war in Iraq….

    Of course we need people who can excel in knowledge-intensive jobs – but that’s not the same as STEM jobs.

    And of course we need a good supply of engineers. However, the problem (if any) is one of demand, not supply. If engineers were indeed so valuable to companies, then employers would increase salaries until all positions were filled to their satisfaction. A “Help Wanted” sign in a diner’s window doesn’t mean that there’s a shortage of short order cooks; it mostly means that the owner of the diner isn’t willing to pay a decent wage.

    I’ll concede that there is a problem with US education. (Though… when hasn’t there been? And which country doesn’t agonize over education?) The NAS panel recommended that more science teachers be recruited by paying sign-on bonuses (PDF exec summary). However, this is a palliative at best. The core problem is that teachers aren’t paid enough (blame the Right), and that the teachers’ unions have a stranglehold on workplace rules (blame the Left). There is a gap in teachers’ salaries, and a lack of accountability.

    The American Federation of Teachers salary survey reports that the average job offer in 2004 to college graduates who were not education majors was $40,472; that’s $8,768 more than a starting teacher’s salary. A sign-on bonus will help, but only if salaries for mid-career teachers also increase. At this point, there’s no financial incentive for good scientists and engineers to stay in teaching.
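The AFT survey arithmetic above can be spelled out; the starting-teacher salary and the percentage shortfall below are derived from the two figures quoted, not reported directly by the survey:

```python
# The AFT salary-survey arithmetic, spelled out (2004 figures).
avg_offer_non_ed = 40_472    # average offer to non-education college graduates
gap = 8_768                  # reported gap vs. a starting teacher's salary

starting_teacher = avg_offer_non_ed - gap
print(starting_teacher)      # implied starting teacher salary

shortfall = gap / starting_teacher
print(f"{shortfall:.0%}")    # teachers start well below their peers
```

The implied starting salary is $31,704, meaning a new teacher starts nearly 28% below the average non-education graduate.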

    Here’s one reason why school science scores are better in emerging countries: in those places, teaching is still a relatively well-paying job. The US problem of affluence will catch up with them in time. For example, India is struggling to find university lecturers in computer science, since they can earn so much more in the commercial sector. American science education will only improve if the society decides that teachers are as important as design engineers, and pays them accordingly; sadly, that’s not on the cards.

    Wednesday, June 07, 2006

    Math as Metaphor vs. Multiple Intelligences

    Jonathan Aronson alerted me to the relevance of Howard Gardner’s work on Multiple Intelligences to my “hard intangibles” project.

    Gardner argues that intelligence isn’t a one-dimensional capacity that can be measured by (say) an IQ test. He defines it as “the ability to solve problems, or to fashion products, that are valued in one or more cultural or community settings.” He argues that there are seven distinct intelligences: linguistic; logical-mathematical; spatial; musical; bodily-kinesthetic; interpersonal; and intrapersonal. Each person has a different mix of skills. [1]

    He applies this theory to education. In ref. [1] he gives the example of a child that’s having trouble learning mathematics because the principle to be learned (the content) exists only in the logical-mathematical world and it ought to be communicated through mathematics (the medium); however, the child struggles with math. A good teacher finds a way around this problem by translating the principle into another domain, e.g. through a story or a spatial model. Gardner observes that this alternative route to understanding “is at best a metaphor or translation. It is not mathematics itself. And at some point, the learner must translate back into the domain of mathematics.”

    This raises a question about Lakoff’s work [2] on the underlying sensory-motor metaphors in mathematics. If Gardner is correct that mathematics is a domain with its own intelligence, and if there’s a distinct basis for each intelligence in brain physiology, then Lakoff’s claim that mathematics is based on spatial and bodily-kinesthetic metaphors may be nothing more than a way to make math intelligible to people with good spatial and bodily-kinesthetic intelligence. Those who are proficient at mathematics use their “mathematical faculty”, and don’t have to fall back on sensory-motor metaphors.

    From my reading of their work, Lakoff makes a more persuasive case than Gardner, and I’m therefore inclined to doubt that spatial models in mathematics are simply crutches.

    Still – it’s notable that two of Gardner’s intelligences (logical-mathematical, and musical) do seem more remote than the others from the sensory-social experiences of childhood, which Lakoff argues shape our cognitive abilities. It suggests that one might expect another collaboration from Lakoff, on “Where Music Comes From”.

    ----------

    [1] Howard Gardner (1993), “Multiple Intelligences: The Theory in Practice”

    [2] George Lakoff and Rafael Núñez (2000), “Where Mathematics Comes From: How the Embodied Mind Brings Mathematics into Being”

    Monday, May 29, 2006

    If you’re so smart, why do only politicians pay attention to you?

    Politicians can’t live without economists. Every policy needs an economic rationale, and there are platoons of theoretical economists in think tanks and universities ready to opine at a moment’s notice. National well-being is measured in money, and economists are a convenient proxy for business.

    Business people do fine without economists, thank you very much. Executives work hard to avoid the dismal science and its practitioners. Economists have directed some stellar business failures, like Long-Term Capital Management. LTCM’s board included two Nobel Prize winners in economics; it folded four years after its formation with losses of $4.6 billion.

    Why is economic advice so essential in politics, but considered pointless in business?

    First, politicians have learnt that wrapping themselves in science adds to their credibility. Conveniently, economists learned the same thing when they appropriated the mathematics of classical mechanics in the 19th Century. Economists and politicians happily conspire to stage a theater of scientific decision making about the economy.

    Day-to-day business is, in fact, far more scientific than government policy making. The scientific method of trial and error works, since business moves rapidly enough to perform many parallel experiments. One can try ideas out in the market, and/or watch competitors doing it. When government makes law, the experiment takes longer to play out, and there are usually no controlled trials. Theoretical economists are needed to fake the experiments using their models.

    Second, politics is about moral choices. Adam Smith, the founder of modern economics, was a professor of moral philosophy, and his heirs have stayed close to their roots. Economics is a curious science; when its predictions are contradicted by how people behave in practice, economists usually declare that the people are wrong, not their theory. If a theory doesn’t describe how people behave, they assert that it prescribes how people should behave, were they truly rational.

    Since politicians love making moral judgments, they find economists to be convivial bed-fellows.

    Third, politicians of every stripe can always find an economist to agree with them. Economics attempts to describe complex systems using simplified models in which the assumptions make a huge difference. One can always find a plausible justification for any choice of model and assumptions, since the space of possibilities is so large. That model and those assumptions can then provide “economic proof” for the wackiest of policies.

    There is really only one reason for businesses to take the advice of economists seriously: when they have to deal with artifacts constructed by economists, or with the government. The more esoteric instruments in financial markets are constructed on the basis of neoclassical economic models. Government spectrum auctions are a two-fer: one theoretical economist designs the auction for the government, and then all his or her buddies work as consultants to the bidders, exploiting any weaknesses and loopholes that their friend missed.

    Update 5/31/06: Today's Forbes.com has a story from Chris Knauter about bidding in the FCC's upcoming spectrum auction. It's mostly an interview with game theorist Darin Lee. Excerpt:
    "Anyone bidding in the FCC auction, which will sell off more than 1,100 licenses, will need deep pockets--and the help of game theorists, who specialize in a branch of math that studies how different players act and react to each other in complex situations: If X makes a certain move, how are Y and Z likely to react?"

    Saturday, May 27, 2006

    The real tragedy of the spectrum commons

    Advocates of licensed spectrum warn darkly that unlicensed spectrum suffers from the Tragedy of the Commons – that is, the over-exploitation of a shared resource because individuals get the benefit of anti-social over-consumption, while everyone suffers the cost.

    The true tragedy of the spectrum commons arises from the collective action dilemma. When all benefit from the existence of a good, and every individual’s contribution to creating it is small, everyone will wait for someone else to do the work of production. Less of the good – unlicensed spectrum, in this case – will be produced than would be optimal.

    Many companies (not to mention the citizenry at large) would benefit from generous unlicensed spectrum allocations. However, the impact that any single entity can have in making the case is relatively small. Further, since no-one would have exclusive access to this spectrum, any successful lobbying to get such spectrum would benefit the world at large, particularly those who sat on their hands and did nothing.

    Licensed operation, in contrast, is the preserve of relatively few players. Any lobbying they do to increase the amount of flexible-use spectrum is to the advantage of a relative small group of spectrum owners.

    One would predict, then, that there would be much less unlicensed than licensed spectrum, and that unlicensed would lose out to licensed in a lobbying fight. Jim Snider provides the data in a paper making the case for unlicensed allocations in the TV bands:

    • There is more than six times as much spectrum allocated to flexible licensed use as to unlicensed below 3 GHz (683.5 MHz vs. 109.5 MHz)
    • Reallocations of spectrum since 2002 have been biased against unlicensed: licensed gained 489.5 MHz, and unlicensed lost 20 MHz.
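Snider's figures imply the "more than six times" ratio quoted above; a quick check, using the numbers as cited:

```python
# Snider's allocation figures below 3 GHz, in MHz, as cited above.
licensed = 683.5       # flexible licensed use
unlicensed = 109.5     # unlicensed use

ratio = licensed / unlicensed
print(f"{ratio:.1f}x")          # roughly 6.2x as much licensed spectrum

# Reallocations since 2002 (MHz): licensed gained, unlicensed lost.
licensed_gain = 489.5
unlicensed_gain = -20.0
print(licensed_gain - unlicensed_gain)   # net swing toward licensed, in MHz
```

The ratio comes out at about 6.2, and the post-2002 reallocations represent a net swing of over 500 MHz toward licensed use.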

    A regulator like the FCC should therefore allocate more to unlicensed than the lobbying record might justify, to make up for the under-provision that’s inherent in such collective goods.

    Wednesday, May 24, 2006

    Free as in Kitten

    The Free Software community has long argued that free/open source is “free as in speech, not free as in beer.”

    The proponent, sorry, proponents, of proprietary software has, um, have tried to change the subject and talk about the Total Cost of Ownership.

    A salutary lesson in countering a catchy slogan with boilerplate PR-speak only a marketdroid could love.

    They should try this: Free Software is Free as in Kitten.

    I owe this insight to Bruce Sterling. He writes in his 2005 design book, Shaping Things (Ch. 9, p. 71):
    "A price as low as literally free can mean the economic equivalent of a free kitten -- I may get a free kitten, but then I have to deal with the consequences, with no exit strategy."
    (I'm not the first to use this phrase. Google turned up one earlier example, in an August 2002 CNet opinion piece by Sun's Simon Phipps.)

    Monday, May 22, 2006

    Prayer and mirror neurons

    Research into the impact of prayer on patients undergoing heart surgery has found no discernible benefit (Benson & Dusek et al. 2006, reported in Science & Technology News).

    This work looked at the effect on the ‘prayee’. While there were no benefits there, I think that there are likely to be demonstrable impacts on the ‘prayor’.

    Prayer that asks for something good for someone else (intercession, metta) reminds me of mirror neurons. It seems that doing something, and watching someone doing something, both activate the same brain region; at some level we don’t distinguish between doing, thinking, and watching.

    I suspect that intercessory prayer activates the “handing stuff over” center in the brain. The metaphors we use when talking about generous acts provide some support for this guess. For example: “Give her my best wishes,” “He extended his sympathy,” and “My heart went out to him.” One could test this by measuring the activation of mirror neurons in humans while praying, and comparing it with the activation when giving something, or when watching a gift being given.

    Doing good makes us feel good, and if the mirror neuron hypothesis is sound, thinking about doing good is almost as good as doing it. This suggests that intercessory prayer should activate both mirror neurons for motor activity, and brain centers for emotional well-being.