Saturday, August 30, 2008

Analog and digital religions

Are you saved?

In the Christian tradition, the answer is binary: either you are, or you aren’t. Even if you’re not sure, God has decided. You’re either going to heaven or hell, with a side trip through purgatory for some denominations.
When the Son of Man comes in his glory, and all the angels with him, he will sit on his throne in heavenly glory. All the nations will be gathered before him, and he will separate the people one from another as a shepherd separates the sheep from the goats. He will put the sheep on his right and the goats on his left. (Matthew 25:31-33)
There are other traditions where salvation is an “analog” process. The release from suffering comes about gradually through hard work.
Just as when a carpenter or carpenter's apprentice sees the marks of his fingers or thumb on the handle of his adze but does not know, “Today my adze handle wore down this much, or yesterday it wore down that much, or the day before yesterday it wore down this much,” still he knows it is worn through when it is worn through. (Samyutta Nikaya 22.101)
Our physical existence is analog: things wear down gradually, like the handle of an adze over years of use. (In case you also don’t remember what an adze is: it’s a tool used for smoothing rough-cut wood in hand woodworking.) On the other hand, technology is increasingly digital: something either works perfectly, or not at all. [*] The reception of analog TV will gradually get worse as one moves further and further away from a transmission tower, but digital TV quality falls off a cliff at a certain distance. It’s perfect, and then suddenly the screen is black.
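
To make the contrast concrete, here is a toy sketch (the numbers and functions are purely illustrative, not real RF modelling): analog reception degrades a little more with every kilometre, while digital reception is flawless right up to a threshold and then fails outright.

```python
# Toy illustration of "graceful degradation" vs. the "digital cliff".
# Distances, ranges and the linear fade are illustrative assumptions only.

def analog_quality(distance_km, max_range_km=100):
    """Analog picture quality fades gradually: ever snowier, never a hard cutoff."""
    return max(0.0, 1.0 - distance_km / max_range_km)

def digital_quality(distance_km, cliff_km=80):
    """Digital decoding is perfect until the error margin runs out, then nothing."""
    return 1.0 if distance_km <= cliff_km else 0.0

for d in (20, 60, 79, 81):
    print(d, round(analog_quality(d), 2), digital_quality(d))
# 20 km: 0.8 vs 1.0 | 60 km: 0.4 vs 1.0 | 79 km: 0.21 vs 1.0 | 81 km: 0.19 vs 0.0
```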

It’s curious that we’ve taken so easily to binary, digital technologies given that we evolved in a physical reality that is analog and continuous. I suspect it’s because our minds categorize: someone is either male or female, friend or foe, a sheep or a goat; the fruit is on the tree, in the basket, or on the ground. This classifying knack makes binary outcomes intelligible both in the spiritual life and in modern technology, though one may still have a gradualist religion, or a liking for vinyl records.

- - - -

* As always, yes, there are exceptions that prove the rule. PC performance can degrade gradually as a disk gets fragmented or an application accumulates memory leaks; conversely, analog artifacts like glass and ceramic fail all at once, fracturing catastrophically.

Monday, August 25, 2008

The Soreness of Losing – Clinton Edition

A dark cloud of cranky Clintonism is hanging over the Democratic convention in Denver. Dark muttering about not supporting Obama because Clinton (either one) was disrespected just won’t go away.

There are many plausible explanations, including egotism, frustrated feminism and the Boomer/GenX divide. I rather like an appeal to the psychological phenomenon of loss aversion: people feel a loss more keenly than a gain.

Technically, loss aversion is the tendency to prefer avoiding losses over acquiring equivalent gains. I like to think of it this way: imagine a store selling widgets. It can either sell them for $100 and offer a 5% discount for cash, or sell them at $95 but impose a $5 surcharge on anyone buying with a credit card. The discount feels like a gain to the cash customer, and the surcharge feels like a loss to the credit card buyer. The net effect is the same, but the loss is felt more keenly than the gain. Stores are therefore more likely to post the credit card price and offer a cash discount than to impose a surcharge.
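
A minimal sketch of the arithmetic, if it helps: both framings produce the same pair of net prices, but a loss-averse buyer feels the $5 surcharge more strongly than the $5 discount. (The factor-of-two loss weighting below is a common illustrative assumption, not a figure from this post.)

```python
# Both framings give the same net prices; only the reference point differs.

def net_prices(posted_price, cash_discount=0.0, card_surcharge=0.0):
    """Return (cash_price, credit_card_price) for a given framing."""
    return posted_price - posted_price * cash_discount, posted_price + card_surcharge

print(net_prices(100, cash_discount=0.05))  # framing A: (95.0, 100.0)
print(net_prices(95, card_surcharge=5))     # framing B: (95.0, 100)

# A toy loss-averse value function: losses loom about twice as large as
# gains (the factor of 2 is an illustrative assumption).
def felt_value(change, loss_aversion=2.0):
    return change if change >= 0 else loss_aversion * change

print(felt_value(+5))   #  5: the cash discount feels like a modest gain
print(felt_value(-5))   # -10.0: the card surcharge feels like a bigger loss
```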

Clinton supporters went into the primary campaign assuming that they were going to win. Obama’s win is a keen loss to them; something that these people felt they already “had” is being taken away. For the Obamans, on the other hand, the win was a bonus; they never really expected it. They’re happy about it, of course, but don’t feel it as profoundly as the Clintonistas feel their loss.

There is probably little that the Obama campaign can do to assuage their pain. The best hope for the Democratic party is the operation of another cognitive bias: the tendency to overestimate how long and how intensely a future emotion will be felt, known as impact bias. Even though the Clintonistas may not believe it now, by the time the November election comes around their current disaffection will have passed, and they will vote the Democratic ticket.

Sunday, August 24, 2008

Why is it hard to be good?

Getting into his stride, productivity guru David Allen asks his $595/head audience, “How many of you have fallen off the wagon?” That is: how many, after having already forked out money at least once for his Getting Things Done regimen, have relapsed into their old bad habits? Many of them sheepishly raise their hands [1].

Something similar happens every week in churches, temples, synagogues, mosques and meeting houses around the world, though it’s usually cheaper and less glitzy: people go to be reminded, again and again and again, to practice virtue, and quit their vices.

Why is it so hard to be good, and so easy to be bad? Nobody needs a motivational speaker to remind them to sin. The goal is not to be virtuous for the sake of it, of course. Virtue is necessary for salvation. But the question still stands: Why is the path to salvation the difficult one?

If the virtues were adaptive, one could be sure that evolution would have made them pleasurable. We don’t have to be reminded to eat and procreate; it’s stopping ourselves eating and coupling in “inappropriate” ways that takes effort [2]. Perhaps salvation is a goal of the mind, not the body.

Morality comes to the fore when evolution by culture starts to outstrip human evolution by nature – times when the selection of memes becomes more important than the selection of genes.

The Axial Age was such a tipping point. Around 500 BCE, the function of major religions shifted from cosmic maintenance to personal transformation [3]. This was a time when urbanization and mobility were increasing, and literacy and technology were moving into the cultural mainstream. There was a decisive change in people’s sense of individuality: a growing consciousness of humans as moral agents responsible for their own actions, an increasing awareness of the experience of death, and a preoccupation with what lay beyond.

Our struggle with virtue might be the clash between what it takes to be happy in an urbanized, technological society, vs. what’s required in a pre-literate life closer to unmediated nature.

The puzzle of the dark triad is instructive. The triad is a complex of anti-social behaviors that has serious social downsides: people who are narcissistic, sociopathic or Machiavellian risk being shunned by others, leaving them vulnerable to all the risks of being a loner outside the social circle. And yet, those behaviors persist; they must be adaptive. It seems that the dark triad helps you get laid (if you’re male). Such people are also useful as wartime leaders.

The dark triad, and other immoral behavior, is sometimes adaptive. Morality could be the way that complex societies compensate for its downsides. The struggle for virtue is the price our minds pay for the benefits our genes get from immoral behavior.

----- Notes -----

[1] This is a paraphrase of reporting in “Getting Serious About Getting Things Done,” Business Week, August 14, 2008

[2] Big sins, unlike the venial ones, usually do require persuasion, as in the pep talks that sellers of shady financial products get before they hit the phone banks. And we do occasionally commit acts of kindness without having to force ourselves – though that’s rare enough to be remarked upon. Those good behaviors that do “come naturally”, like caring for our own children, scarcely count as virtues.

[3] John Hick, An Interpretation of Religion, 1989, pp. 22-29

Friday, August 15, 2008

Use it or lose it

I’ve been thinking about how a trademark-inspired approach might change how radios are regulated. (See my February post on De-situating Spectrum; a conference paper will be available soon).

One of the interesting aspects of trademark is that you own it only as long as you use it. Wireless rights in a trademark-inspired regime would work the same way.

Under the current regime, many radio licenses are subject to build-out conditions that can lead to the retraction of a license if a network isn’t constructed to use it. (In practice, these conditions don’t really have teeth.) However, unlicensed operations don’t have build-out conditions. They should – though they will differ from licensed ones, since there isn’t a license holder to go after.

The idea is that the regulator (the FCC in the US, for example) would make clear that an unlicensed allocation is “use it or lose it”: if the band is not used in the way promised by manufacturers and contemplated by the regulatory decision, the rules will lapse. The unlicensed allocation may be replaced by something else.

Here’s a straw man.

“Use it” tests: At least three devices should have obtained certification by Year Three after allocation; at least 500,000 devices/year should be sold in Year Five. If not, the allocation is deemed not to be used. If device sales fall below 50,000/year for three successive years at any time after that, the allocation will also be deemed to be unused.

Define the “trigger date” for losing the allocation as the year in which the allocation fails the “use it” test.

“Lose it” consequences: At the trigger date, the FCC will cease certifying devices; this means that no new devices will be sold. However, the (few) intrepid souls who bought devices should be able to continue using them, at least for a while. Let’s assume devices have a useful life of five years; unlicensed operation will continue to be legal for that period. At that point (five years after the trigger date) the allocation will lapse, and the FCC may choose to issue new unlicensed rules, auction the spectrum, or do nothing.
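
Since the straw man is really just a small decision procedure, here is a sketch of it in code. The thresholds are the ones proposed above; the function names, data shapes and example figures are hypothetical.

```python
# Sketch of the straw-man "use it" tests and "lose it" timeline above.

CERTS_BY_YEAR_3 = 3          # devices certified by Year Three after allocation
SALES_IN_YEAR_5 = 500_000    # devices/year sold in Year Five
SALES_FLOOR = 50_000         # ongoing floor after Year Five
FLOOR_YEARS = 3              # successive years below the floor
DEVICE_LIFETIME = 5          # assumed useful life of a device, in years

def trigger_year(certs_by_year, sales_by_year):
    """Return the year (counted from allocation) in which the allocation
    fails the "use it" test, or None if it never fails.
    certs_by_year maps year -> devices certified so far;
    sales_by_year maps year -> devices sold that year."""
    if certs_by_year.get(3, 0) < CERTS_BY_YEAR_3:
        return 3
    if sales_by_year.get(5, 0) < SALES_IN_YEAR_5:
        return 5
    below = 0
    for year in sorted(y for y in sales_by_year if y > 5):
        below = below + 1 if sales_by_year[year] < SALES_FLOOR else 0
        if below == FLOOR_YEARS:
            return year
    return None

def lose_it_schedule(trigger):
    """Certification stops at the trigger date; legacy operation stays
    legal for one device lifetime; then the allocation lapses."""
    return {"certification_stops": trigger,
            "allocation_lapses": trigger + DEVICE_LIFETIME}

# Hypothetical example: devices were certified, but the band never caught on.
certs = {3: 4}
sales = {5: 600_000, 6: 40_000, 7: 30_000, 8: 20_000}
t = trigger_year(certs, sales)   # -> 8 (third successive year below 50,000)
print(t, lose_it_schedule(t))    # certification stops in year 8, allocation lapses in year 13
```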

Notes

The parameters proposed above are by way of example. One would have to be sure that the “use it” test doesn’t discourage innovation and investment, and that the “lose it” consequences don’t drain the bathwater too soon; both should also be as resistant as possible to manipulation by opponents.

Rather than an all-or-nothing loss, the FCC might reduce the size of the allocation as an interim step, as it did with UPCS. This is particularly easy if the unlicensed devices are reprogrammable over the air.

A regular review of operating parameters should be built into all unlicensed rules, even if the allocation is being used by the above criteria. If the rules change, there will be a two-step phase-in that parallels the “lose it” consequences. In the first five years, legacy devices will be grandfathered in, but no new certifications will be issued for legacy operation. This should allow people to extract economic value from their investment. Five years after the rule change, though, operation of legacy devices will no longer be legal.

Motivation

One of the most compelling arguments against unlicensed white spaces is that allocating them would preclude ever cleaning up the “UHF mess”: valuable spectrum is under-used because cheap receivers require very large frequency gaps between transmissions to keep one TV broadcast from interfering with another. For only a few cents in additional receiver hardware, transmitters could be packed more closely, allowing additional systems to operate. The other part of the “mess” is that fewer than 15% of Americans get their TV over the air; most of it is piped into homes via cable. High power transmitters are running night and day – precluding other uses – for the benefit of a small number of people.

Unlicensed advocates contend that Wi-Fi like devices should be allowed to operate in the frequency gaps. These devices will be smart enough not to transmit in the same channel as TV broadcasts, and their own transmissions will be so weak that they don’t cause interference to TV stations in adjacent channels.

Opponents say that this will set the mess in concrete. If a way is ever found to persuade broadcasters to cease transmission – a big IF, given that broadcasters have immense political power, and their imminent demise has been predicted for decades – then the new owners of the ex-broadcasting channels will have to contend with unlicensed “neighbors”. Once the technical parameters are set, the secondary unlicensed users become squatters and can’t easily be evicted. (Military transmissions from time to time interfere with garage door openers, e.g. in Denver and Quantico; the military has priority, but manufacturers simply ignored this, and the military seems to have backed off.) Rather than have a pristine post-broadcast band to work with, new licensees will have to co-exist with unlicensed devices.

I don’t have much of a problem with this, provided the unlicensed devices are actually widely deployed. The problem arises when there are just a few scattered users who are blocking a new rights allocation (as a handful of over-the-air TV viewers are doing right now). Given the way the FCC’s administrative proceedings work, it is exceedingly hard to get allocations changed. The 20 MHz unlicensed PCS (UPCS) band was allocated by the FCC in May 1995 but never really used. It took almost ten years for the FCC to re-allocate the lower half of the band to licensed use, and tweak the rules in the upper half to allow more technologies to operate.

There is also a more general objection to unlicensed: it’s very hard to back out of a bad decision, as the UPCS experience shows. Licensees that have flexible-use rights can change their minds about technology and rebuild their systems, if necessary by negotiating variances in spectrum rights with those in adjacent parcels; Tom Hazlett gives the example of the conversion from analog to digital in the cellular bands. In contrast, unlicensed allocations, once made, are slow to change. Built-in sunsets such as those I propose above could fix that problem.

Saturday, August 09, 2008

Regulation tops the list of global business challenges

Ernst & Young has made a Top Ten list of challenges facing global businesses in 2008. (Thanks to Peter Haynes for bringing this to my attention.) I was rather surprised to find that “regulatory and compliance risks” came in first.

It’s telling that reg/compliance came out ahead of such sexy topics as aging consumers, radical greening, and energy shocks.

E&Y divide the threats into three categories: macro (geopolitics, macroeconomics), sectoral (industry-specific), and operational. I prefer to divide them into perennials and fads. Using E&Y’s numbering:

Perennials
4. The inability to capitalize on emerging markets
5. Industry consolidation/transition
7. Execution of strategic transactions
8. Cost inflation
10. Consumer demand shifts

Fads
1. Regulatory and compliance risk
2. Global financial shocks
3. Aging consumers and workforce
6. Energy shocks
9. Radical greening

Reg/compliance might look like a perennial at first sight, but the transformation being wrought by ICT (“complexification” – see my paper Internet Governance as Forestry, PDF, 600kB) probably makes it a “fad” since it is posing significant new challenges. However, as Peter points out, reg/compliance is neither a perennial nor a fad if there are successive waves of transformation. It’s perhaps better categorized as “cyclical” – just like global financial shocks. (I wouldn’t put energy shocks in the cyclical bucket. Even though there have been oil crises before, the exhaustion of reserves takes us to a qualitatively new stage every time.)

------

Report: “Strategic business risk: 2008 — the top 10 risks for global business” (PDF, 3.1 MB). The list is the result of interviews with more than 70 analysts around the world; it’s therefore firmly in the “echo chamber” category. Unfortunately the report focuses only on the Top Ten, so one doesn’t know which risks did NOT make the list. That would’ve been useful as context, since categorizations like this are so squishy that one can fit anything in somewhere.