Wednesday, December 29, 2010

Law without Categories?

A recent New Scientist story about the descent of birds from dinosaurs (James O'Donoghue, Living dinosaurs: How birds took over the world, Section 2, Was archaeopteryx really a bird?, 08 December 2010; subscription required) contained this passage:
The real question is, where do you draw the line between dinosaurs and birds? Ask different palaeontologists and you will get subtly different answers. That is because the distinction is basically arbitrary, says Xing Xu of the Institute of Vertebrate Paleontology and Paleoanthropology in Beijing, China, who discovered many of the Chinese fossils [of feathered dinosaurs].
This is a common theme in biology: the boundaries between species are arbitrary. And yet we continue to think in terms of species, since categorization is such a strong human reflex.

Jurisprudence, and regulation in particular, is built on categorization: defining the categories that determine the response to a particular situation. At the heart of the current network neutrality argument is the question of whether a company falls under "Title II", in which case a whole raft of telecommunications regulation regarding common carriage applies, or under "Title I", in which case it is much more lightly regulated.

However, as the analogy to biology illustrates, most interesting categories have fuzzy boundaries, making for a delightful amount of work for lawyers and lobbyists, but not necessarily helpful outcomes.

Taxonomies are backward-looking; they attempt to fossilize a reality but are constantly open to revision. (This need for revision undermines the certainty that category-based rules purport to offer: categories are less robust than they appear, forcing exactly the case-by-case interpretation that proponents of rules cite as the weakness of the alternative approach, principles-based regulation.) They evidently work well enough, though; they're pervasive. A paper by David Bach & Jonathan Sallet about VOIP regulation (The challenges of classification: Emerging VOIP regulation in Europe and the United States, First Monday, Volume 10, Number 7, 4 July 2005) explains the situation very well:
From a practical point of view, classification stands out because classifying different services is what regulators principally do. In an ideal world, one could just draw up rules for VOIP that address the aforementioned critical issues, keeping in mind the technology’s novelty and the substantial differences that exist between conventional circuit–switched telephony and innovative packet–switched VOIP. In the real world, however, a first step in the regulation of new technologies is usually to try to fit them into existing service categories, in part because those are the tools that regulators work with and in part because classification can provide shortcuts through complex regulatory problems. Alternatively, regulators may be inclined to ask whether VOIP service is "like" or "substitutable" for current services — an approach that may obscure technological achievement. Either way, much is at stake in these decisions.
Fitting VOIP into existing regulatory categories is not simply an administrative or technical act. Since categories are associated with distinct sets of rights and responsibilities that have distributional and market strategic implications, a large number of stakeholders have mobilized to affect the outcome. . . .
Unpacking the political economic dynamics of evolving VOIP regulation highlights a second, more analytic reason to focus on classification. The debate over how to classify VOIP represents the leading edge of the question whether regulatory classification is useful in a world of converging technologies. . . .
In the eyes of most regulators and industry observers, correctly categorizing VOIP provides a shortcut through regulatory uncertainty. Yet precisely this is the problem with classification. As policymakers almost reflexively ask how a new technology fits into existing categories, the underlying political and social objectives of regulation can get lost.

A behavioral alternative comes to mind: the regulations that should apply do not derive from the category into which an action falls, but from its consequences; in Bach & Sallet's terms, one needs to look to the political and social outcomes, not the inputs.

Thursday, December 09, 2010

Not even a metaphor

Said Industry Minister Eric Besson, describing an upcoming auction of radio licenses in France, "These frequencies are of very, very high quality." What? How can a frequency, merely an attribute of electromagnetic radiation, be of high quality?

I’ve been inveighing against the misuse of spectrum metaphors for some time, but it took this quote to make me realize that the figure of speech at issue is really metonymy, not metaphor.

Metonymy is referring to something not by its name, but by something that is intimately associated with it (Wikipedia). Some examples:

The designers come up with the ideas, but the suits (worn by executives) make the big bonuses.

The pen (associated with thoughts written down) is mightier than the sword (associated with military action).

Freedom of the press (associated with the journalists and what they write) is an important value.

The White House (associated with the President and his staff) stood above the fray.

He bought the best acres (associated with the land measured in acres).

Both metaphor and metonymy substitute one term for another: metaphor by some specific similarity, and metonymy by some association. In spectrum language both are at work, for example in “Guard bands leave too many frequencies (or spectrum) lying fallow.”

Metonymy: Frequencies are associated with radio licenses

Metaphor: Radio licenses are like title to property

Metonymy: Property title is associated with the land to which it relates

Metaphor: Fallow land stands for any underused asset

Saturday, December 04, 2010

Heresy as Diagnostic

Heresies, or more exactly, the arguments that lead to one perspective being labeled as orthodoxy and the other as heresy, are pulsing pointers to a religion’s sore spots, those questions of doctrine or practice that have multiple plausible but incompatible answers. Heresy seems to be a useful tool for analyzing a set of beliefs. (Any book recommendations gratefully received.)

I was drawn to the question of heresy by reading Augustine’s Confessions, and Peter Brown’s masterful biography, Augustine of Hippo (1967, 2000). For instance, comparing Augustine and Pelagius, he writes

“The two men disagreed radically on an issue that is still relevant, and where the basic lines of division have remained the same: on the nature and sources of a fully good, creative action. How could this rare thing happen? For one person, a good action could mean one that fulfilled successfully certain conditions of behavior, for another, one that marked the culmination of an inner evolution. The first view was roughly that of Pelagius; the second, that of Augustine.”

My guess is that the choice between solutions that leads to a perspective being labeled heresy is necessary for a consistent set of beliefs, but that something is lost when the choice is made. I’m reminded of Isaiah Berlin’s approach to conflicts of values, summed up thus by John Gray in an interview with Alan Saunders on the Philosopher’s Zone (Australian Radio National, 6 June 2009)

“ . . . the idea that some fundamental concepts of human values are intractable, rationally intractable, in the sense that first of all they can't be resolved without some important loss, and secondly reason is very important in thinking about these conflicts, and then being clear about what they are, what they're between and what's at stake in them. [E]qually reasonable people can come to different judgments as to what ought to be done, so certain types of conflict of value are intractable. . . . So this idea of a kind of fundamental and intractable moral scarcity if you like in human life, such that there have been and there will always be intractable, the conflicts of values, and we can resolve them more or less intelligently in particular contexts that can be more or less skillful and intelligent and reasonable settlements of these conflicts, but they can never be overcome or left behind.”

Such differences may point to a conflict between incommensurable world views. For example, in an article about “relativity deniers” (Einstein's sceptics: Who were the relativity deniers?, New Scientist, 18 November 2010; subscription required), Milena Wazeck explains:

"Einstein's opponents were seriously concerned about the future of science. They did not simply disagree with the theory of general relativity; they opposed the new foundations of physics altogether. The increasingly mathematical approach of theoretical physics collided with the then widely held view that science is essentially simple mechanics, comprehensible to every educated layperson."

I would not be at all surprised if there is at least something like this at play in the argument over climate change; opponents have been all but branded as heretics, and there is religious fervor on both sides.

Tuesday, November 30, 2010

Better Radio Rights

Demand for wireless services is growing relentlessly, but ambiguous rights definitions and unpredictable enforcement have led to prolonged inter-service interference disputes that impede innovation and investment.

Silicon Flatirons organized a conference on this topic in DC a couple of weeks ago. The goal was to explore how radio operating rights could best be defined, assigned and enforced in order to obtain the maximum benefit from wireless operations. The event web site has links to a fascinating set of position papers prepared by the panelists. There’s also a compendium that collects them all in one place (PDF).

Kaleb Sieh and I proposed (position paper, full paper on SSRN) an approach to radio operating rights based on three principles: (1) aim regulation at maximizing concurrent operation, not minimizing harmful interference; (2) delegate management of interference to operators; (3) define, assign and enforce entitlements in a way that facilitates transactions.

We argue that radio rights should be articulated using transmission permissions and reception protections, defined probabilistically (the Three Ps): transmission permissions should be based on resulting field strength over space and frequency, rather than radiated power at a transmitter; reception protections should state the maximum electromagnetic energy an operator can expect from other operations; both are specified probabilistically. This formulation of operating rights does not require a definition of harmful interference, and does not require receiver standards.
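
To make the idea concrete, here is a minimal sketch of what such an entitlement might look like as a data record. The field names and numbers are inventions of mine for illustration, not the definitions in our paper:

    from dataclasses import dataclass

    # Illustrative only: field names and values are hypothetical.
    @dataclass
    class TransmissionPermission:
        band_mhz: tuple               # (low, high) frequency range
        area: str                     # geographic area the permission covers
        field_ceiling_dbuv_m: float   # resulting field strength, not transmit power
        exceed_probability: float     # fraction of time/locations the ceiling may be exceeded

    @dataclass
    class ReceptionProtection:
        band_mhz: tuple
        area: str
        max_expected_field_dbuv_m: float  # energy a receiver must be able to tolerate from others
        exceed_probability: float

    # A licence is then just a pair of probabilistic permissions and protections.
    licence = (
        TransmissionPermission((2100.0, 2110.0), "service area A", 54.0, 0.1),
        ReceptionProtection((2100.0, 2110.0), "service area A", 30.0, 0.1),
    )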

Since any initial entitlement point is unlikely to be optimal, the regulator should facilitate the adjustment of rights by:
  • limiting the number of parties to a negotiation by minimizing the number of rights recipients, and enabling direct bargaining through effective delegation;
  • recording a complete and current description of every entitlement in a public registry;
  • stipulating the remedy (injunctions or damages) that attaches to an operating right when it is issued;
  • refraining from rulemaking during adjudication;
  • leaving parameter values unchanged once an entitlement has been defined, although values may be adjusted through bilateral negotiation between operators, and the regulator may add new parameters at license renewal.

Saturday, October 16, 2010

Who gets the apple? Part II: A salty problem

Here's another analogy; one that includes a nod to dispute resolution. For those who know and/or love Coasian economics, it's our old friend the pollution example, though tweaked to be radio interference in light disguise. It's also, incidentally, based on a true story I heard from someone who works for a large county's water district.

Imagine a city along a river, and a downstream farming community. Urban development results in more salt being added to the river; increased salinity can reduce crop yield. Salty water is therefore analogous to radio interference between transmitters (cities) and receivers (farms).

The harm to crops is a shared responsibility, though. For example, the city can reduce the amount of downstream salt by building a water treatment plant, and the farmers can accommodate saltier water by changing crops - spinach will be fine on water that's too salty for celery.

Let's imagine that a Federal Crops Commission (call it the FCC2) is responsible for managing this problem. It might instruct the city and farms to "coordinate" to find a solution to the problem, with a guideline that water may not be "too salty". As in the apple example, this is difficult to do without defining what counts as too salty, and who bears the responsibility for salinity.

If the FCC2 limits the salt the city can dump in the river the way the FCC controls radio emissions, it would specify a ceiling of, say, 5 tons of salt per day - with a rider that the resulting water can't be "too salty". This is not very helpful to the farmers, however, since they care about the resulting salinity; seasonal variations in water volume, and in the salinity of the water entering the city limits from upstream, affect the result. It doesn't help the city either, since it can't be sure how much water treatment capacity to build; 4 tons/day of salt might still turn out to be too much if the farmers downstream choose salt-intolerant crops and/or the river level is too low.
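
To see why a tonnage cap alone doesn't settle the matter, consider the arithmetic (all numbers invented for illustration): the salinity the farmers experience is just the mass of salt divided by the volume of water carrying it, so the same 5 tons/day produces very different concentrations in a wet month and a dry one.

    # Resulting salinity = salt mass / water volume; numbers are purely illustrative.
    def salinity_ppm(salt_tons_per_day, river_flow_megaliters_per_day):
        salt_kg = salt_tons_per_day * 1000.0
        water_kg = river_flow_megaliters_per_day * 1e6  # 1 liter of water is about 1 kg
        return salt_kg / water_kg * 1e6                 # parts per million by mass

    print(salinity_ppm(5, 2000))  # wet season, 2,000 ML/day of flow: 2.5 ppm
    print(salinity_ppm(5, 500))   # dry season, 500 ML/day of flow: 10.0 ppm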

Matters are compounded when the city and the farming community fail to reach agreement, and go to the FCC2 to resolve a conflict. (They have nowhere else to go, since the courts defer to the FCC2 as an expert agency to decide what "too salty" means in a particular case.)

Neither side can predict what the outcome of the FCC2's deliberations will be, since it doesn't always decide the merits of individual cases in isolation. It has many proceedings before it at any given time; for example, the FCC2 might be pushing the farmers to get organic certification, and negotiating with the city about the rezoning of agricultural land for urban development. The solution the FCC2 negotiates between the city and the farmers might encompass all these other matters, not only making the result of the salinity dispute unpredictable, but failing to establish a precedent that others might use later.

A better approach would be for the FCC2 to regulate the resulting salinity of water leaving the city (to, say, 5 ppm), remove any mention of "too salty" from its regulations, and provide a way for contending parties to get a specific case resolved efficiently. It might give the farmers the right to stop the city water plant from releasing water into the river if the salinity exceeds 5 ppm (leading to a negotiated solution, where the city might pay the farmers' coop $300,000 to raise the limit to 10 ppm in dry months), or if there are too many farmers to negotiate with individually it might choose a liability regime (leading to a court-imposed payment of, say, $30/acre if salinity exceeds 5 ppm and some farmers sue the city).

Tuesday, October 12, 2010

Who gets the apple?

I’ve been looking for a metaphor to illustrate the weaknesses I see in the FCC’s “you two just go off and coordinate” approach to solving wireless interference problems among operators.

Let's think of the responsibility to bear the cost of harmful interference as an apple.*  It’s as if the FCC says to Alice and Bob, “I've got an apple, and it belongs to one of you. I’m not going to decide which of you should have the apple; you decide among yourselves.”

Now, if Alice were the owner of the apple and valued it at 80 cents, then the answer would simply depend on how much Bob valued having the apple (and rational negotiation, of course). If having an apple was worth 90 cents to him, he’d get it for some price between 80 and 90 cents; if it was worth only 60 cents to him, Alice would keep it. Problem solved.
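
The logic is simple enough to write down; here is a toy sketch (values in cents, with "ownership" standing in for whatever allocation the regulator makes):

    # Toy sketch of the bargain once ownership of the apple is clear (values in cents).
    def outcome(alice_value, bob_value, owner="Alice"):
        if owner == "Alice":
            if bob_value > alice_value:
                # any price between the two valuations leaves both better off
                return f"Bob buys the apple at a price between {alice_value} and {bob_value}"
            return "Alice keeps the apple"
        if alice_value > bob_value:
            return f"Alice buys the apple at a price between {bob_value} and {alice_value}"
        return "Bob keeps the apple"

    print(outcome(80, 90))  # Bob buys it for something between 80 and 90 cents
    print(outcome(80, 60))  # Alice keeps it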

Trouble is, the FCC doesn’t tell them who actually owns the apple, and even if it did, it doesn’t tell them whether it’s a Granny Smith or a Gala. The odds of Alice and Bob coming to an agreement without going back to the FCC are slim.

The analogy: The FCC’s rules often don’t make clear who’s responsible, in the end, for solving a mutual interference problem (i.e. who owns the apple); and it’s impossible to know short of a rule making by the FCC what amounts to harm (i.e. what kind of apple it is).

-----------------------
* There's always interference between two nearby radio operators (near in geography or frequency).  While the blame is usually laid on the transmitter operator, it can just as reasonably be placed on the receiver operator for not buying better equipment that could reject the interference.

Friday, July 02, 2010

Social network visualizations - an online symposium

My work on the evolution of FCC lobbying coalitions has been accepted in the JoSS (Journal of Social Structure) Visualization Symposium 2010 (link to my entry). Jim Moody of Duke has done a wonderful job collecting a dozen visualizations of social networks. Each is worth exploring; in particular, see the thoughtful comments that the JoSS staff provided to each entry in order to stimulate debate.

Monday, June 07, 2010

How I Learned to Stop Worrying and Love Interference


(With apologies to Stanley Kubrick.)

Radio policy is fixated on reducing or preventing harmful interference. Interference is seen as A Bad Thing, a sign of failure. This is a glass-half-empty view. While it is certainly a warning sign when a service that used to work suddenly fails, rules that try to prevent interference at all costs lead to over-conservative allocations that under-estimate the amount of coexistence that is possible between radio systems.

The primary goal should not be to minimize interference, but to maximize concurrent operation of multiple radio systems.

Minimizing interference and maximizing coexistence (i.e. concurrent operation) are two ends of the same rope. Imagine metering vehicles at a freeway on-ramp: if you allow just one vehicle at a time onto a section of freeway, people won’t have to worry about looking out for other drivers, but very few cars would be able to move around at one time. Conversely, allowing everybody to enter at will during rush hour will lead to gridlock. Fixating on the prevention of interference is like preventing all possible traffic problems by only allowing a few cars onto the freeway during rush hour.

Interference is nature’s way of saying that you’re not being wasteful. When there is no interference, even though there is a lot of demand, it’s time to start worrying. Rather than minimizing interference with the second-order requirement of maximizing concurrent operation, regulation should strive to maximize coexistence while providing ways for operators to allocate the burden of minimizing interference when it is harmful.

I am developing a proposal that outlines a way of doing this. Here are some of the salient points that are emerging as I draft my ISART paper:

The first principle is delegation. The political process is designed to respond carefully and deliberatively to change, and is necessarily slower than markets and technologies. Therefore, regulators should define radio operating rights in such a way that management of coexistence (or equivalently, interference) is delegated to operators. Disputes about interference are unavoidable and, in fact, a sign of productively pushing the envelope. Resolving them shouldn’t be the regulator’s function, though; parties should be given the means to resolve disputes among themselves by a clear allocation of operating rights. This works today for conflicts between operators running similar systems; most conflicts between cellular operators, say, are resolved bilaterally. It’s much harder when dissimilar operations come into conflict (see e.g. my report (PDF) on the Silicon Flatirons September 2009 summit on defining inter-channel operating rules); to solve that, we need better rights definitions.

The second principle is to think holistically in terms of transmission, reception and propagation; this is a shift away from today’s rules which simply define transmitter parameters. I think of this as the “Three P's”: probabilistic permissions and protections.

Since the radio propagation environment changes constantly, regulators and operators have to accept that operating parameters will be probabilistic; there is no certainty. The determinism of today’s rules that specify absolute transmit power is illusory; coexistence and interference only occur once the signal has propagated away from the transmitter, and most propagation mechanisms vary with time. Even though US radio regulators seem resistant to statistical approaches, some of the oldest radio rules are built on probability: the “protection contours” around television stations are defined in terms of (say) a signal level sufficiently strong to provide such a good picture at least 50% of the time, at the best 50% of receiving locations. [1]
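
As an illustration of how such a statistical criterion can be checked (the threshold and the samples below are invented), here is a sketch that tests whether a signal meets a level at least 50% of the time at at least 50% of locations:

    # Illustrative check of a "50% of locations, 50% of the time" criterion.
    # signal_samples maps each receiving location to field-strength samples over time (dBuV/m).
    def meets_contour(signal_samples, threshold_dbuv_m, time_frac=0.5, location_frac=0.5):
        good_locations = 0
        for samples in signal_samples.values():
            above = sum(1 for s in samples if s >= threshold_dbuv_m)
            if above / len(samples) >= time_frac:
                good_locations += 1
        return good_locations / len(signal_samples) >= location_frac

    samples = {"loc1": [62, 60, 58, 64], "loc2": [55, 57, 61, 59], "loc3": [48, 50, 52, 47]}
    print(meets_contour(samples, threshold_dbuv_m=56))  # True: 2 of 3 locations qualify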

Transmission permissions of licensee A should be defined in such a way that licensee B who wants to operate concurrently (e.g. on nearby frequencies, or close physical proximity) can determine the environment in which its receivers will have to operate. There are various ways to do this, e.g. the Australian “space-centric” approach [2] and Ofcom’s Spectrum Usage Rights [3]. These approaches implicitly or explicitly define the field strength resulting from A’s operation at all locations where receivers might be found, giving operator B the information it needs to design its system.
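
A rough sketch of the kind of calculation operator B could then do, using free-space propagation only (real rules would rest on statistical propagation models; the numbers here are illustrative):

    import math

    # Free-space field strength at distance d from a transmitter with a given EIRP:
    # E (V/m) = sqrt(30 * P_eirp) / d, with P in watts and d in meters (far field).
    def field_strength_dbuv_per_m(eirp_watts, distance_m):
        e_v_per_m = math.sqrt(30.0 * eirp_watts) / distance_m
        return 20.0 * math.log10(e_v_per_m * 1e6)

    # If A's permission caps the resulting field strength at B's receiver sites,
    # B can estimate what a candidate deployment of A's would actually produce there.
    print(field_strength_dbuv_per_m(eirp_watts=100.0, distance_m=5000.0))  # ~80.8 dBuV/m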

Receiver protections are declared explicitly during rule making, but defined indirectly in the assigned rights. When a new allocation is made, the regulator explicitly declares the field strength ceilings at receivers that it intends to result from transmissions. In aggregate, these amount to indirectly defined receiver protections. Operators of receivers are given some assurance that no future transmission permissions should exceed these limits. (Such an approach could have prevented the AWS-3 argument.) However, receivers are not directly protected, as might be the case if they are given a guaranteed “interference temperature”, nor is there a need to regulate receiver standards.

While this approach has been outlined in terms of licensed operation, it also applies to unlicensed use. Individual devices are given permissions to transmit that are designed by the regulator to achieve the desired aggregate permissions that would otherwise be imposed on a licensee. Comparisons of results in the field with these aggregate permissions can be used as a tripwire for changing the device rules. If it turns out that the transmission permissions are more conservative than required to achieve the needed receiver protections, they can be relaxed. Conversely, if the aggregate transmitted energy exceeds the probabilistic limits, e.g. because more devices are shipped than expected or they’re used more intensively, device permissions can be restricted going forward. This is an incentive for collective action by manufacturers to implement “politeness protocols” without the regulator having to specify them.
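
The tripwire logic itself is straightforward; a cartoon version (thresholds and measurements invented for illustration):

    # Cartoon of the aggregate-energy tripwire for unlicensed devices.
    # measured: aggregate field-strength samples observed in the field (dBuV/m);
    # limit_dbuv_m: level that may be exceeded at most exceed_frac of the time.
    def adjust_device_rules(measured, limit_dbuv_m, exceed_frac=0.1, headroom_db=6.0):
        over = sum(1 for m in measured if m > limit_dbuv_m) / len(measured)
        if over > exceed_frac:
            return "tighten device permissions going forward"
        if all(m < limit_dbuv_m - headroom_db for m in measured):
            return "room to relax device permissions"
        return "leave device rules unchanged"

    print(adjust_device_rules([30, 28, 32, 31, 29], limit_dbuv_m=40))  # room to relax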

Notes

[1] O’Connor, Robert A (1968) Understanding Television’s Grade A and Grade B Service Contours, IEEE Transactions on Broadcasting, Vol. 47, No. 3, September 2001, p. 309, http://dx.doi.org/10.1109/11.969381

[2] Whittaker, Michael (2002) Shortcut to harmonization with Australian spectrum licensing, IEEE Communications Magazine, Vol. 40, No. 1. (Jan 2002), pp. 148-155, http://dx.doi.org/10.1109/35.978062

[3] Ofcom (2007) Spectrum Usage Rights: A statement on controlling interference using Spectrum Usage Rights, 14 December 2007, http://www.ofcom.org.uk/consult/condocs/surfurtherinfo/statement/statement.pdf

Monday, May 31, 2010

Why I give service

I have just returned from working in the kitchen during a course at the Northwest Vipassana Center. During one of the breaks I had a fascinating conversation with one of the center managers, who it turns out experiences service very differently from me. She asked that I record my thoughts, and this is what I came up with.

Serving is no fun – for me, at least. Serving a course is about stress, anxiety and fatigue, with a few happy moments when I wish the meditators well as I pass them by. There’s no joy in doing the work, as there is for some, and no joyful release at the end; only relief that it’s over. It’s pretty much like sitting a course, with the difference that I’m just banging my head against a wooden wall, not a brick one.

So why do I do it?

I do it because I think it’s good for me. Working in the kitchen amplifies my weaknesses, and makes it easier to see when and where I’m being unskillful. I come face-to-face with my frailties and failings, and hopefully end the course with another sliver of wisdom.

I do it because serving is a middle ground between sitting practice and living in the real world. Like developing any skill - think about playing a musical instrument - meditation requires hours of solitary practice every day, over decades. However, that practice is only the means to an end, which is to live better with, and for, others. Serving on a course helps me try out the skills I’m learning in a realistically stressful but safe environment. Things can’t spin too far out of control; I’m back on the cushion every few hours, with an opportunity to reboot and start again. And I’m surrounded by people of good will, with direct access to teachers if I need it.

And I do it to repay, in small part, the debt I owe to all those people whose service has made it possible for me to learn this technique and sit courses. I was able to sit because someone else was in the kitchen; now it’s my turn.

Wednesday, May 12, 2010

Improving FCC filing metadata

On 10 May 2010 I filed a comment on two FCC proceedings (10-43 and 10-44, if you must know) concerning ways to improve the way the Commission does business. I argued that transparency and rule-making efficiency could both be increased by improving the metadata on documents submitted to the Electronic Comments Filing System (ECFS).

I recommended that the FCC:
  • Associate a unique identifier with each filer
  • Require that the names of all petitioners are provided when submitting ECFS metadata
  • Improve RSS feed and search functionality
  • Require the posting of digital audio recordings of ex parte meetings
  • Provide a machine interface for both ECFS search and submission

Opt-in for Memory

The Boucher-Stearns privacy measure tries to do many things (press release; May 3 staff discussion draft); too many, according to Daniel Castro at ITIF.
 
One of the issues it doesn’t tackle – and legislation may or may not be the solution – is the persistence of digital information once it has been collected.

In a NY Times context piece called Tell-All Generation Learns to Keep Things Offline, Laura Holson writes that members of the “tell-all generation” are becoming more picky about what they disclose. There’s growing mistrust of social networking sites, and young people keep a closer eye on their privacy settings than oldsters. Holson reports on a Yale junior who says he has learned not to trust any social network to keep his information private, since “If I go back and look, there are things four years ago I would not say today.”

I expect that this concern will grow beyond information collection to encompass retention. (That's already a big concern of law enforcement, of course.) Explicit posts (photos, status updates) will live forever, if for no other reason than sites like the Internet Archive. However, the linkages that people make between themselves and their friends, or themselves and items on the web, are less explicit – and probably more telling. These links are held by the social network services, and I expect that there will be growing pressure on them to forget these links after some time. Finally, there are the inferences that companies make from these links and other user behavior; their ownership is more ambiguous, since they’re the result of a third party’s observations, not the subject’s actions.

My bet is that norms will emerge (by market pressure and/or regulation) that force companies to forget what they know about us. For the three categories I noted above, it might work something like this (see the sketch after the list):
  1. Posts: Retained permanently by default. Explicit user action (i.e. an opt-out) required for them to be deleted
  2. Linkages: Deleted automatically after a period, say five years. User has to elect to have information be retained (opt-in).
  3. Inferences: Deleted after a period, say five years, if user opts out; otherwise kept. This one is tricky; I can also see good reasons to make deletion automatic with an opt-in for retention.
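
A toy sketch of how these defaults might look as a deletion rule (the categories and the five-year period are just the proposal above, not any real policy):

    # Toy sketch of the retention defaults proposed above.
    RETENTION_YEARS = 5

    def should_delete(category, age_years, user_opted_in=False, user_opted_out=False):
        if category == "post":       # kept by default; deleted only on explicit request
            return user_opted_out
        if category == "linkage":    # forgotten automatically unless the user opts in
            return age_years > RETENTION_YEARS and not user_opted_in
        if category == "inference":  # kept, unless the user opts out after the period
            return age_years > RETENTION_YEARS and user_opted_out
        raise ValueError("unknown category")

    print(should_delete("linkage", age_years=6))  # True: linkages are forgotten by default
    print(should_delete("post", age_years=6))     # False: posts persist unless deleted explicitly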

However these practices evolve, it’s become clear to me that neither the traditional “notice and choice” regime nor the emerging “approve use” approach are sufficient without a mechanism for forgetting.

Tuesday, May 11, 2010

New Ethics as a Second Language

In lecture 27 of the Teaching Company course on Understanding the Brain, Jeanette Norden observes that we seem to learn morality using the same mechanisms we use for learning language.

Newborns can form all the sounds used in all the languages on the planet, but with exposure to their mother tongue they become fluent in a subset. It eventually becomes almost impossible to form some of the unused sounds, and the idiosyncrasies of their language seem natural and universal.

This makes me wonder about the difficulties an immigrant might have in learning the peculiarities of a new culture. I’ve definitely been confounded from time to time by unexpected variations in “the right thing to do” – and there’s really very little difference between the culture I grew up in and the ones I moved to as an adult. “Culture shock” may not just be language and customs; it probably involves morality, too, since every system of ethics is a mixture of universals and particulars.

Of course, that’s not to say that one cannot become fluent in an alternative morality. It might just be harder than a native “moralizer”, particularly one who has never had to learn "ethics as a second language”, might assume.

And traditionalists around the world who claim that wall-to-wall American media “corrupt the morals of our youth” are probably right: I'd guess young people pick up the ethical biases of American culture by watching movies and TV even more easily than they pick up English.

Monday, May 10, 2010

Negotiating the Price of Privacy

Kurt Opsahl of the EFF has put together a time line of changes to Facebook’s privacy policies over the last five years; it tells me a story of a shifting power balance. (Thanks to Peter Cullen for the link.)

It’s a quick read, but in a nutshell: in 2005, the user controlled where information went. By December 2009, Facebook considered some information to be publicly available to everyone, and exempt from privacy settings.

I vaguely remember Esther Dyson describing privacy more than two decades ago as a good that users would trade. That’s how I read the time line. It’s an implicit negotiation between Facebook and its users over the value of personal information (let’s call it Privacy, for short) vs. the value of the service Facebook provides (call it Service).

In the early days, the service had few users, and the network effect hadn’t kicked in. Facebook needed users more than they needed Facebook, and so Facebook had to respect privacy – it was worth more to users than the Facebook service was:
Service << Privacy
Since the value of a social network grows much faster than linearly as the number of members increases (roughly as the square of membership, if you believe Metcalfe’s law), the value of Service grew rapidly as membership increased. A user’s perception of the value of privacy didn’t change much; it probably grew a little, but nowhere near as fast. Probably sometime around 2008, the value of the service started overtaking the value of privacy:
Service ≈ Privacy
Facebook’s hard-nosed approach to privacy (or lack of it) makes clear that it now has the upper hand in the negotiation. An individual user needs Facebook more than vice versa:
Service > Privacy
One take-away from this story is that the privacy settings users will accept are not a general social norm, but the result of an implicit negotiation between customer and supplier. When a supplier becomes indispensable, it can raise its prices, either explicitly ($$) or implicitly (e.g. worse privacy conditions). Other services therefore should not assume that they can get away with Facebook’s approach. They can make a virtue of necessity by offering better privacy protection – at least until the day when their service is so valuable that they, too, can change the terms.

Thursday, April 29, 2010

Non-privacy goes non-linear

I’ve never been able to “get” Privacy as a policy issue. Sure, I can see that there are plausible nightmare scenarios, but most people just don’t seem to care. What a company, or a government, knows about one just doesn’t rate as something to worry about. Perhaps the only angle that might get the pulse racing is identity theft; losing money matters. But no identity theft stories have inflamed the public’s imagination, or mine.

The recent spate of stories about privacy on social networking sites has led me to reconsider – a little. I still don’t think Joe Public cares, but the technical and policy questions of networked privacy intrigue me more than the flow of personal information from a citizen to an organization and its friends.

The trigger for the current round of privacy worries was the launch of Google Buzz. Good Morning Silicon Valley puts it in context with Google, Buzz and the Silicon Tower, and danah boyd’s keynote at SXSW 2010 reviews the lessons and implications.

Mathew Ingram’s post Your Mom’s Guide to Those Facebook Changes, and How to Block Them alerted me to the implications of Facebook’s “Instant Personalization” features.

Woody Leonhard’s article Hotmail's social networking busts your privacy showed that Google and Facebook aren’t the only ones who can scare users about what personal information is being broadcast about them.

I think there may be a profound mismatch between the technical architectures of social networking sites, and the mental model of users.

This mismatch is an example of the “hard intangibles” problem that I wrestled with inconclusively a few years ago: our minds can’t effectively process the complexity of the systems we’re confronted with.

Two examples: attenuation and scale.

We assume that information about us flows more sluggishly the further it goes. My friends know me quite well, their friends might know me a little, and the friends-of-friends-of-friends are effectively ignorant. In a data network, though, perfect fidelity is maintained no matter how many times information is copied. We therefore have poor intuition about the fidelity with which information can flow away from us across social networks.

It’s a truism that the mind cannot grasp non-linear growth; we’re always surprised by the explosion of compound interest, for example. On a social network, the number of people who are friends-of-friends-of-…-of-friends grows exponentially with each remove; but I would bet that most people think it grows only linearly, or perhaps even stays constant. Thus, we grossly underestimate the number of people to whom our activities are visible.
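
A quick back-of-the-envelope calculation shows how fast the audience grows with each remove (assuming, say, 100 friends per person and ignoring overlap between friend circles):

    # Rough upper bound on reach at k removes, assuming ~100 friends per person
    # and ignoring the overlap between friend circles.
    avg_friends = 100
    for k in range(1, 4):
        print(f"{k} remove(s): up to {avg_friends ** k:,} people")
    # 1 remove(s): up to 100 people
    # 2 remove(s): up to 10,000 people
    # 3 remove(s): up to 1,000,000 people
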
My most recent personal experience was when I noticed a new (?) feature on Facebook two days ago. One of my friends had commented on the status update of one of their friends, who is not in my friend network. Not only did I see the friend-of-my-friend’s update; I saw all the comments that their friends (all strangers to me) had made. I’m pretty sure the friends-of-my-friend’s-friend had no idea that some stranger at three removes would be reading their comments.

If you find the “friends-of-my-friend’s-friend” construct hard to parse, then good: I made it on purpose. I suspect that such relationships are related to the “relational complexity” metrics defined by Graeme Halford and colleagues; Halford suggests that our brains max out at around four concurrent relationships.

I’m pretty confident that the Big Name Players all just want to do right by their users; the trouble is that the social networks they’re building for us are (of necessity?) more complicated than we can handle. It hit home when I tried to grok the short blog post Managing your contacts with Windows Live People. I think I figured it out, but (a) I’m not sure I did, and (b) I'd rather not have had to.

Saturday, April 24, 2010

Knowing with the Body

The neurological patient known as Emily cannot recognize the faces of her loved ones, or even herself in a mirror. [1] She doesn’t have conscious awareness that she knows these people, but her body does. When she is shown a series of photos of known and unknown people, she cannot tell them apart; however, the electrical conductance of her skin increases measurably when she’s looking at the face of someone she knows. [2] It’s not that she’s lost the ability to recognize people in general; she can still recognize her family, and herself, by their voices.

Damasio notes that skin-conductance responses are not noticed by the patient. [3] However, perhaps someone who is well-practiced in observing body sensations – for example, a very experienced meditator in the Burmese vipassana tradition [4] – would be able to discern such changes. I suspect so; in which case, a patient like Emily could work around her recognition problem by noting when her skin sensations change. It’s known that such patients use workarounds; Emily, for example, “sits for hours observing people’s gaits and tries to guess who they are, often successfully.” [5]

Both Damasio’s theory of consciousness and Burmese vipassana place great importance on the interactions between body sensations and the mind. As I understand Damasio, he proposes that consciousness works like this:

1. The brain creates representations (he calls them maps or images) of things in the world (e.g. a face), and of the body itself (e.g. skin conductance, position of limbs, state of the viscera, activity of the muscles, hormone levels).

2. In response to these representations, the brain changes the state of the body. For example, when it sees a certain face, it might change skin conductance; when it discerns a snake it might secrete adrenaline to prepare for flight, tighten muscle tone, etc. Damasio calls these responses “emotions”, which he defines as “a patterned collection of chemical and neural responses that is produced by the brain when it detects the presence of an emotionally competent stimulus — an object or situation, for example.” [6]

3. In a sufficiently capable brain (which is probably most of them) there is a higher order representation that correlates changes in the body with the object that triggered these changes. This second-order map is a feeling, in Damasio’s terminology: “Feelings are the mental representation of the physiological changes that characterize emotions.” [6], [7] Feelings generate (or constitute – I’m not sure which…) what he calls the “core self” or “core consciousness”.
Since I find pictures to be helpful, I’ve created a short slide animation on SlideShare that shows my understanding of this process.

In Emily’s case, steps 1 and 2 function perfectly well, but the correlation between a face and changes in the body fails in step 3. The higher-order correlation still works for voices and body changes, however, since she can recognize people by their speech.

More acute awareness of body sensation might not just help clinical patients like Emily. In the famous Iowa Gambling Task, Damasio and Antoine Bechara showed that test subjects were responding physiologically (again, changes in skin conductance) to risky situations long before they were consciously aware of them. A 2009 blog post by Jonah Lehrer includes a good summary of the Iowa Gambling Task and its results. Lehrer reports new research indicating that people who are more sensitive to “fleshy emotions” are better at learning from positive and negative experiences.

NOTES

[1] This post is based on material in Antonio Damasio’s book The Feeling of What Happens: Body and Emotion in the Making of Consciousness (Harcourt 1999). Emily’s case is described on p. 162 ff. I also blogged about this topic in 2005 after reading “Feeling” for the first time.

[2] Damasio op. cit. [1], p. 300

[3] Ibid.

[4] For example, vipassana as taught by S N Goenka. Other mindfulness meditation traditions also attend to body sensations (see e.g. Phillip Moffitt, “Awakening in the Body”, Shambhala Sun, September 2007) but the Burmese tradition places particular emphasis on it.

[5] Damasio op. cit. [1], p. 163

[6] Damasio, A. (2001) "Fundamental feelings", Nature 413 (6858), 781. doi:10.1038/35101669. Note that Damasio’s definition of “emotion” is narrower than usual usage, which refers to affective states of consciousness like joy, sorrow, fear, or hate. Damasio limits himself to the physiological changes which are more typically considered to be an accompaniment to, or component of, these mental agitations.

[7] Feelings so defined seem to correspond to what S. N. Goenka, a well-known teacher in the Burmese vipassana tradition, calls sensations, his preferred translation of the Pāli term vedanā, also often translated as “feelings”. There is some debate about whether vedanā refers just to sensations-in-the body, as Goenka contends, or to any and all pleasant, painful or neutral feelings such as joy, sorrow, etc.

Friday, April 16, 2010

Bill delegates caller ID regulations to FCC

SiliconValley.com reports that the US House has approved a measure that would outlaw deceptive Caller ID spoofing.

Since I'm currently enamored of a principles-based approach to regulating rapidly changing technology businesses -- that is, policy makers should specify the goals to be achieved, and delegate the means to agents nearer the action -- I'm on the look-out for working examples.

This seems to be one: the bill leaves it up to the FCC to figure out the details of regulation and enforcement.

The FCC itself could delegate further if it so chose, for example by waiting to see whether telephone companies come up with effective ways of policing this problem themselves before devising and imposing its own detailed rules.

Friday, March 26, 2010

Trying to explain the Resilience Principles

I was honored to participate in a panel in DC on "An FCC for the Internet Age: Reform and Standard-Setting" organized by Silicon Flatirons, ITIF and Public Knowledge on March 5th, 2010. My introductory comments tried to summarize the "resilience principles" in five minutes: the video is available on the Public Knowledge event page, starting at time code 02:04:45. The panel starts at around 01:57:00.

The earlier, fifteen minute pitch I gave on a panel on "The Governance Challenges of Cooperation in the Internet Ecosystem" at the Silicon Flatirons annual conference in Boulder on February 1st, 2010 can be found here at time code 01:36:00. My slides are up on Slideshare.net, and a paper is in preparation for JTHTL.

This work is an outgrowth of my TPRC 2008 paper “Internet Governance as Forestry” (SSRN).

Saturday, March 06, 2010

Obviating mandatory receiver standards

Two remarks I heard at a meeting of a DC spectrum advisory committee helped me understand that endless debates about radio receiver standards are the result of old fashioned wireless rights definitions. The new generation of rights definitions could render the entire receiver standards topic moot.

First, a mobile phone executive explained to me that his company was forced to develop and install filters in the receiver cabinets used by broadcasters for electronic newsgathering because it had a “statutory obligation to protect” these services, even though they operated in different frequency ranges.

Second, during the meeting the hoary topic of receiver standards was raised again; it’s a long-rehearsed problem that shows no sign of being solved. It’s a perennial topic because wireless interference depends as much on the quality of the receiver as on the characteristics of the transmitted signal. A transmission that would be ignored by a well-designed receiver could cause severe degradation in a poor (read: cheap) receiver. Transmitters are thus at the mercy of the worst receiver they need to protect.

A statutory obligation to protect effectively gives the protectee a blank check; for example, the protectee can change to a lousy receiver, and force the transmitting licensee to pay for changes (in either their transmitters or the protectee’s receivers) to prevent interference. This is an open-ended transfer of costs from the protectee to the protector.

The protectors thus dream of limiting their downside by having the regulator impose receiver standards on the protectee. If the receiver’s performance can be no worse than some lower limit, there is a limit on the degree of protection the transmitter has to provide.

The problem with mandatory receiver standards is that they draw the regulator into the game of specifying equipment. This is a bad idea, since any choice of parameters (let alone parameter values) enshrines a set of assumptions about receiver design, locks in specific solutions, and forecloses innovation that might solve the problem in new ways. Manufacturers have always successfully blocked the introduction of mandatory standards on the grounds that they constrain innovation and commercial choice.

An open-ended statutory obligation to protect therefore necessarily leads to futile calls for receiver standards.

One could moot receiver standards by changing how wireless rights are defined. Rather than bearing an open-ended obligation to protect, the transmitting licensee should have an obligation to operate within specific limits on the energy it delivers into frequencies other than its own. These transmission limits could be chosen to ensure that adjacent receivers are no worse off than they were under an “open-ended obligation to protect” regime. (The “victim” licensee will, though, lose the option value of being able to change their system specification at will.)

The main benefit is certainty: the recipient of a license will know at the time of issue what kind of protection they’ll have to provide. The cellular company mentioned above didn’t find out until after the auction how much work they would have to do to protect broadcasters since nobody (including the FCC) understood how lousy the broadcasters’ receivers were.

The regulatory mechanisms for doing this are well known, and have been implemented; they include the “space-centric” licensing approach used in Australia (PDF), and Spectrum Usage Rights (SURs) in the UK.

Moving to new rights regimes is challenging; Ofcom’s progress has been slow. One of the main difficulties is that licensees for new allocations prefer to do things the old, known way. One of the supposed drawbacks of SURs is that the benefits of certainty seem to accrue to a licensee’s neighbor, rather than to the new licensee. However, removing the unlimited downside of an open-ended obligation to protect adjacent operations should prove attractive. The whining will now come from the neighbors who lose their blank check; careful definition of the licensee’s cross-channel interference limits to maintain the status quo should take the sting out of the transition.

Friday, February 26, 2010

Engineers, Commissars and Regulators: Layered self-regulation of network neutrality

My post Ostrom and Network Neutrality suggested that a nested set of self- or co- regulatory enterprises (Ostrom 1990:90) could be useful when designing regulatory approaches to network neutrality, but I didn’t give any concrete suggestions. Here’s a first step: create separate arenas for discussing engineering vs. business.

One’s immediate instinct when devising a shared regulatory regime (see the list of examples at the end) might be to involve all the key players; at least, that’s what I pointed to in When Gorillas Make Nice. However, I suspect that successful self-regulatory initiatives have to start with a relatively narrow membership and scope: typically a single industry, rather than a whole value chain. That’s the only way to have a decent shot at creating and enforcing basic norms. Legitimacy will require broadening the list of stakeholders, but too many cooks at the beginning will lead to kitchen gridlock.

Let’s stipulate for now that the key problem is defining what “acceptable network management practices” amount to. Most participants in the network neutrality debate agree that ISPs should be able to manage their networks for security and efficiency, even if there is disagreement about whether specific practices are just good housekeeping or evil rent-seeking.

The engineering culture and operating constraints of different networks are quite distinct: phone companies vs. cable guys; more or less symmetrical last mile pipes; terminating fiber in the home vs. at cabinet; and not least, available capacity in wireline vs. wireless networks. Reconciling these differences and creating common best practices within the network access industry will be hard; that’s the lowest layer of self-regulation. The “Engineers” should be tasked with determining the basic mechanisms of service provision, monitoring compliance with norms, and enforcing penalties against members who break the rules.

The core participants are the telcos (e.g. Verizon, AT&T) and cable companies (e.g. Comcast, Time Warner Cable), in both their wireline and wireless incarnations. Only within a circumscribed group like this is there any hope of detailed agreement about best practices, let alone the monitoring and enforcement that is essential for a well-functioning self-regulatory organization. Many important network parameters are considered secret sauce; while engineers inside the industry circle can probably devise ways to monitor each other’s compliance without giving the MBAs fits, there’s no chance that they’ll be allowed to let Google or Disney look inside their network operating centers.

The next layer of the onion adds the companies who use these networks to deliver their products: web service providers like Google, and content creators like Disney. Let’s call this group the “Commissars”. This is where questions of political economy are addressed. The Commissars shape the framework within which the network engineers decide technical best practices. It’s the business negotiation group, the place where everybody fights over dividing up the rents; it needs to find political solutions that reconcile the very different interests at stake:

  1. The ISPs want to prevent regulation, and be able to monetize their infrastructure by putting their hand in Google’s wallet, and squeezing content creators.
  2. Google wants to keep their wallet firmly shut, and funnel small content creators’ surplus to Mountain View, not the ISPs.
  3. Large content creators want to get everybody else to protect their IPR for them.
  4. New content aggregators (e.g. Miro) want a shot at competing in the video business with the network facility owners.
This is not an engineering argument, and a Technical Advisory Group (TAG) along the lines described by Verizon and Google (FCC filing) would not be a suitable vehicle for addressing such questions. The Commissars are responsible for answering questions of collective choice regarding the trade-offs in network management rules, and adjudicating disputes that cannot be resolved by the Engineers among themselves.

The Engineers can work in parallel to the Commissars, and don’t need to wait for the political economists to fight out questions about rents; in any case, it will be helpful for the Commissars to have concrete network management proposals to argue about. There will be a loop, with the conclusions of one group influencing the other. The Commissars inform the Engineers about the constraints on what would constitute acceptable network management, and the Engineers inform the Commissars about what is practical.

Finally, government actors – call them the “Regulators” – set the rules of the game and provide a backstop if the Engineers and Commissars fail to come up with a socially acceptable solution, or fail to discipline bad behavior. Since the internet and the web are critical infrastructure, governments speaking for citizens are entitled to frame the overall goals that these industries should serve, even though they are not well qualified to define the means for achieving them. Final adjudication of unresolved disputes rests with the Regulators.

References

Ofcom, Initial assessments of when to adopt self- or co-regulation, December 10, 2008,
http://www.ofcom.org.uk/consult/condocs/coregulation/condoc.pdf

Elinor Ostrom, Governing the Commons: The Evolution of Institutions for Collective Action, Cambridge University Press, 1990

Philip J. Weiser, Exploring Self Regulatory Strategies for Network Management: A Flatirons Summit on Information Policy, August 25, 2008,
http://www.silicon-flatirons.org/documents/publications/summits/WeiserNetworkManagement.pdf

Examples of self- and co-regulatory bodies

The Internet Watch Foundation (IWF) in the UK works to standardize procedures for the reporting and taking-down of abusive images of children. It was established in 1996 by the internet industry to allow the public and IT professionals to report criminal online content in a secure and confidential way. (Ofcom 2008:9, and IWF)

The UK “Classification Framework” for content on mobile phones is provided by the Independent Mobile Classification Body (IMCB) with the aim of restricting young people’s access to inappropriate content. It is the responsibility of content providers to self-classify their own content as “18” where appropriate; access to such content will be restricted by the mobile operators until customers have verified their age as 18 or over with their operator. (Ofcom 2008:9, and IMCB)

The Dutch organization NICAM (Nederlands Instituut voor de Classificatie van Audiovisuele Media) administers a scheme for audiovisual media classification. It includes representatives of public and commercial broadcasters, film distributors and cinema operators, distributors, videotheques and retailers. (Ofcom 2008:9, and NICAM)

Amateur radio service and frequency coordinators provide examples of self-regulation in spectrum policy. The American Radio Relay League (ARRL) has an understanding with the FCC that it manages the relevant enforcement activities related to the use of ham radio. Only in the most egregious cases will ARRL report misbehavior to the FCC Enforcement Bureau. (Weiser 2008:23)

The Better Business Bureau’s National Advertising Division (NAD) enforces US rules governing false advertising, using threats of referrals to the FTC to encourage compliance with its rules. (Weiser 2008:24, and NAD)

US movie ratings are provided by a voluntary system operated by the MPAA and the National Association of Theater Owners.

Friday, February 12, 2010

Ostrom and Network Neutrality

My previous post scratched the surface of a self-regulatory solution to network neutrality concerns. While this isn’t exactly a common pool resource (CPR) problem, I find Elinor Ostrom’s eight principles for managing CPRs are helpful here (Governing the Commons: The evolution of institutions for collective action, 1990).

Jonathan Sallet boils them down to norms, monitoring and enforcement, and that’s a good aide memoire. It’s useful, though, to look at all of them (Ostrom 1990:90, Table 3.1):
1. Clearly defined boundaries: Individuals or households who have rights to withdraw resource units from the CPR must be clearly defined, as must the boundaries of the CPR itself.

2. Congruence between appropriation and provision rules and local conditions: Appropriation rules restricting time, place, technology, and/or quantity of resource units are related to local conditions and to provision rules requiring labor, material, and/or money.

3. Collective-choice arrangements: Most individuals affected by the operational rules can participate in modifying the operational rules.

4. Monitoring: Monitors, who actively audit CPR conditions and appropriator behavior, are accountable to the appropriators or are the appropriators.

5. Graduated sanctions: Appropriators who violate operational rules are likely to be assessed graduated sanctions (depending on the seriousness and context of the offense) by other appropriators, by officials accountable to these appropriators, or by both.

6. Conflict-resolution mechanisms: Appropriators and their officials have rapid access to low-cost local arenas to resolve conflicts among appropriators or between appropriators and officials.

7. Minimal recognition of rights to organize: The rights of appropriators to devise their own institutions are not challenged by external governmental authorities.

8. (For CPRs that are parts of larger systems) Nested enterprises: Appropriation, provision, monitoring, enforcement, conflict resolution, and governance activities are organized in multiple layers of nested enterprises.
Many but not all of these considerations are addressed in the filing and my comments: The headline of section B that “self-governance has been the hallmark of the growth and success of the Internet” reflects #2. My point about involving consumers speaks to #3. The TAGs mooted in the letter address #4 and #6, but not #5. The purpose of the letter is to achieve #7.

In addition to the lack of sanctions, two other key issues are not addressed. Principle #1 addresses a key requisite for a successful co-regulatory approach: that industry is able to establish clear objectives. Given the vagueness of the principles in the filing, it’s still an open question whether the parties can draw a bright line around the problem.

I believe #8 can help: create a nested set of (self- or co-) regulatory enterprises. While I don’t yet have concrete suggestions, I’m emboldened by the fact that nested hierarchy is also a hallmark of complex adaptive systems, which I contend are a usable model for the internet governance problem. Ostrom’s three levels of analysis and processes offer a framework for nesting (1990:53):
  • Constitutional choice: Formulation, Governance, Adjudication, Modification
  • Collective choice: Policy-making, Management, Adjudication
  • Operational choice: Appropriation, Provision, Monitoring, Enforcement
I think the TAGs are at the collective choice level. It would be productive to investigate the institutions one might construct at the other two levels. The FCC could usefully be involved at the constitutional level; even if one doesn't dive into a full-scale negotiated rule-making or "Reg-Neg", government involvement would improve legitimacy (cf. Principle #7). At the other end of the scale, operational choices include mechanisms not just for monitoring (and some tricky questions about disclosure of "commercially confidential" information) but also enforcement. The latter could be as simple as the threat of reporting bad behavior to the appropriate agency, as the Better Business Bureau’s National Advertising Division does (see Weiser 2008:21 PDF).
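
To make the mapping above easier to see at a glance, here is a toy sketch in Python. It is purely illustrative and not part of the filing or of Ostrom's framework: the coverage judgments simply restate the ones argued in this post, and the institution names at each nested level are hypothetical placeholders of my own.

# Toy sketch (not a real tool): Ostrom's eight design principles as a
# checklist, scored against the Google/Verizon filing as argued above.
# The coverage judgments restate the post; the nested "levels" and the
# institutions listed at each level are hypothetical placeholders.

OSTROM_PRINCIPLES = {
    1: "Clearly defined boundaries",
    2: "Congruence between rules and local conditions",
    3: "Collective-choice arrangements",
    4: "Monitoring",
    5: "Graduated sanctions",
    6: "Conflict-resolution mechanisms",
    7: "Minimal recognition of rights to organize",
    8: "Nested enterprises",
}

# True = addressed by the filing (on the reading above); False = a gap.
FILING_COVERAGE = {1: False, 2: True, 3: True, 4: True,
                   5: False, 6: True, 7: True, 8: False}

# Hypothetical nesting of institutions, following Ostrom's three levels.
NESTED_LEVELS = {
    "constitutional": ["FCC (backstop, source of legitimacy)"],
    "collective":     ["Technical Advisory Groups (TAGs)"],
    "operational":    ["monitoring and disclosure processes",
                       "referral-based enforcement (NAD-style)"],
}

def gaps(coverage):
    """Return the principles the filing leaves unaddressed."""
    return [OSTROM_PRINCIPLES[n] for n, ok in sorted(coverage.items()) if not ok]

if __name__ == "__main__":
    print("Unaddressed principles:", gaps(FILING_COVERAGE))
    for level, institutions in NESTED_LEVELS.items():
        print(f"{level:>14}: {', '.join(institutions)}")

Running it simply prints the three gaps (boundaries, graduated sanctions, nesting) and the hypothetical institutions at each level; the point is only that the argument in this post can be expressed as a checklist plus a nesting.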

When Gorillas Make Nice

Verizon and Google’s recent joint FCC filing about the values and governance of the internet largely echoes the conclusions of a Silicon Flatirons summit in August 2008 (PDF): that self-governing institutions are the best way to manage day-to-day questions of network neutrality, with the government acting as a backstop when market forces and self-regulation fail.

The filing seems to come in two parts: a statement of principles, and a sketch of how self-governance might work. I’ll largely ignore the first part, since Google and Verizon clearly found little of substance to agree on. The three key principles are, respectively, motherhood (consumer transparency and control), Google’s non-negotiable (openness) and Verizon’s (encouraging investment); it’s hard to argue with any of this, except to observe that the hard work lies in achieving them simultaneously.

The most useful resource on self-regulation in communications I’ve seen is Ofcom’s 2008 statement on “Identifying appropriate regulatory solutions: principles for analysing self- and co-regulation” (PDF). It concluded that self-regulation is most likely to work when “industry collectively has an interest in solving the issue; industry is able to establish clear objectives for a potential scheme; and the likely industry solution matches the legitimate needs of citizens and consumers.”

If their effort is to succeed, the companies will have to build an institution that represents all interests. Let's stipulate that the three main stakeholder groups are content players, network operators and consumers; Google and Verizon fall in the first two groups. On the network side, they’ll need to add the cable industry (always much more leery of network neutrality than the long-regulated telcos), and on the content side, the studios. The trickiest part will be finding a “consumer voice” with some legitimacy; everybody, not least these companies, claims to have the consumer’s best interest at heart.

The filing is predictably vague about the basis on which government would become involved, and is silent about how its proposed institution would enforce its own norms. That’s a mistake. It’s in the companies’ best interest to declare which sword they want hanging over their heads. If they don’t, there won’t be sufficient incentive to Do the Right Thing in the short term (the CEO will ask, “If I’m not breaking a law, why should I go the extra mile?”), which means that eventually a mountain of punctilious rules will be imposed on them. (It’s my understanding that this is what happened over the last decade with accessibility to the internet for those with disabilities: tech companies promised a decade ago they’d solve the problem, didn’t do all that much, and now Rep. Markey is writing detailed rules.)

It’s not clear to me whether the filing is proposing self- or co-regulation, defined by Ofcom (2008) as follows:

Self-regulation: Industry collectively administers a solution to address citizen or consumer issues, or other regulatory objectives, without formal oversight from government or regulator. There are no explicit ex ante legal backstops in relation to rules agreed by the scheme (although general obligations may still apply to providers in this area).

Co-regulation: Schemes that involve elements of self- and statutory regulation, with public authorities and industry collectively administering a solution to an identified issue. The split of responsibilities may vary, but typically government or regulators have legal backstop powers to secure desired objectives.
I think co-regulation is indicated here. Without a backstop there will not be sufficient incentive for good behavior. Politically, too, the term “self-regulation” has become anathema in Washington DC because the financial meltdown is deemed to have been a failure of self-regulation. (Not that it matters, but I think this assessment is incorrect on two counts: self-regulation was only one part of a much larger set of problems in the financial crisis; and even if it weren’t, the lessons learned are not easily transposable to communications policy. Still, it’s probably best to use another term, like shared regulation, supervised delegation or bounded autonomy.)

Wednesday, February 10, 2010

The internet is not an ecosystem, but…

The “internet ecosystem” metaphor is ubiquitous; I’ve used it myself, though with some trepidation. I think I can now reconcile why it’s both wrong and useful.

It’s wrong, strictly speaking, since many aspects of the ecosystem-internet mapping are questionable. As I blogged in 2007 about the “business ecosystem” terminology, the validity of the metaphor is undermined by quite a large number of mapping mismatches:
Number: A food web consists of billions of interactions among animals and plants; a business web comprises a relatively small number of companies

Metrics: Biomass is a typical rough measure of an ecosystem; does that map to total revenue, profitability, return on investment, or something else?

Topology: Energy flow through an ecosystem is lossy and one-way; as each organism is eaten by the next, energy is lost. Business relationships are reciprocal, and generate value.

Time scales: Species change slowly, but companies can change their role in a system overnight through merger, acquisition or divestiture.

Choice: Interactions between firms can be changed by contract, whereas those between species are not negotiable, except perhaps over very long time scales through the evolution of defensive strategies.

Foresight: Humans are pre-eminent among animals in their ability to anticipate the behavior of other actors, explore counter-factuals, think through What If scenarios, and so on. The response of a system containing humans to some change is therefore much more complex than that of a human-free ecosystem. “Dumb” agents in an adaptive system respond to the change; humans respond to how they think other humans will respond to their response to those people’s responses, and so on.

Goals: Biological systems don’t have goals, but human ones do. There are no regulatory systems external to ecosystems in a state of nature (if such things still exist on this planet), but there are many, such as rule of law and anti-trust, in human markets. Natural processes don’t care about equity or justice, but societies do, and impose them on business systems. If ecosystems were a good model for business networks, there would be no need for anti-trust regulation.
The connotations of the metaphor are also misleading. The term “ecosystem” is often used to connote stability and vibrant self-regulation; in fact, ecosystems often suffer catastrophic collapses. Companies are exhorted to invest in their ecosystem with the goal of becoming a keystone species. It’s not clear why they should do so, from the ecosystem perspective: keystone species don’t typically represent a lot of biomass. Their “bottleneck position”, however, is attractive from the perspective of a company that wants to extract rents through market power.

However, the ecosystem concept has gained traction because there is a deeper truth: the internet and ecosystems are both examples of complex adaptive systems. (A complex adaptive system may be defined as a collection of interacting, adaptive agents; other examples include the immune system, the human body, stock markets, and economies. Note that adaptive systems are often nested.)

Thus, the internet is to an ecosystem as a whale is to an elephant. It could be useful to think in terms of elephants if one has to manage oceans but doesn’t know much about whales, since both are large, social mammals. However, one can only explain so much of whales in terms of elephants, and the differences (living in water rather than on land, for example) can be decisive in some cases.

With this realization, the utility and limitations of using an ecosystem metaphor when thinking about the internet, as I did in my Internet Governance as Forestry paper, have become much clearer to me. Lessons from managed ecosystems can illuminate the dynamics and pitfalls of managing the internet, and principles (such as the Resilience Principles I outlined in my recent talk at Silicon Flatirons; my presentation starts around time code 01:36:00 of the video) derived from one can be applied to the other.
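
As a footnote to the “collection of interacting, adaptive agents” definition above, here is a minimal toy simulation in Python. It is invented purely for illustration and models neither the internet nor any real ecosystem: a ring of agents, each occasionally copying a better-performing neighbor, from which clusters of agreement tend to emerge without any central controller.

# A minimal, purely illustrative sketch of a complex adaptive system:
# many simple agents, each adapting its behavior in response to its
# neighbors, with no central controller. Nothing here models the
# internet or an ecosystem specifically; it only illustrates the
# "interacting, adaptive agents" definition.

import random

N_AGENTS, ROUNDS = 50, 20

# Each agent starts with a random "strategy" (0 or 1).
strategies = [random.randint(0, 1) for _ in range(N_AGENTS)]

def payoff(i, strategies):
    """An agent does well when it matches its two ring neighbors."""
    left, right = strategies[i - 1], strategies[(i + 1) % N_AGENTS]
    return (strategies[i] == left) + (strategies[i] == right)

for _ in range(ROUNDS):
    # Adaptation: each agent imitates a randomly chosen neighbor that is
    # doing better than it is. Clusters of agreement tend to emerge from
    # purely local decisions.
    new = strategies[:]
    for i in range(N_AGENTS):
        j = (i + random.choice([-1, 1])) % N_AGENTS
        if payoff(j, strategies) > payoff(i, strategies):
            new[i] = strategies[j]
    strategies = new

print("Final strategies:", "".join(map(str, strategies)))

The output is just a string of 0s and 1s that typically coarsens into runs of identical values; the emergent pattern is not designed into any single agent, which is the property the ecosystem and internet analogies both trade on.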

Monday, February 08, 2010

Resilience and Realpolitik

Resilience is a fashionable meme - rightly so, since it offers an alternative to the "find the efficient optimum" approach to solving problems in political economy. (I would say so, of course; see e.g. my presentation at Silicon Flatirons recently, and my paper on forestry as a metaphor for internet governance.)

As reported by The Economist (A needier era: The politics of global disruption, and how they may change, Jan 28th 2010), a report for the Brookings Institution on international politics in an age of want suggests that governments should think more in terms of reducing risk and increasing resilience to shocks than about boosting sovereign power. This is analogous to advocating risk reduction and resilience, rather than boosting wealth creation, as the goal of economic policy. The reason given is that the new threats are networks (of states and non-state actors) and unintended consequences (of global flows of finance, technology and so on).

I've seen (and propagated) the same memes in the context of technology policy: the determining factors are interlocking networks of agents, and unintended consequences that shift more quickly than legislation.

It's ironic, given my claim that the resilience approach is a counter to neoclassical economics, that the article closes with a Milton Friedman quote...