Tuesday, October 12, 2010

Who gets the apple?

I’ve been looking for a metaphor to illustrate the weaknesses I see in the FCC’s “you two just go off and coordinate” approach to solving wireless interference problems among operators.

Let's think of the responsibility to bear the cost of harmful interference as an apple.*  It’s as if the FCC says to Alice and Bob, “I've got an apple, and it belongs to one of you. I’m not going to decide which of you should have the apple; you decide among yourselves.”

Now, if Alice were the owner of the apple and valued it at 80 cents, then the answer would simply depend on how much Bob valued having the apple (and rational negotiation, of course). If having an apple was worth 90 cents to him, he’d get it for some price between 80 and 90 cents; if it was worth only 60 cents to him, Alice would keep it. Problem solved.
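
In code, the bargaining logic of that paragraph might look something like the sketch below (the valuations are the cents from the story; the assumption that the two split the surplus down the middle is mine, just to make the example concrete):

```python
# Toy sketch of the apple bargain. Alice owns the apple; Bob may buy it.
# The 50/50 split of the surplus is an illustrative assumption.
def who_gets_the_apple(alice_value, bob_value):
    """Return (who ends up with the apple, price paid) under frictionless bargaining."""
    if bob_value > alice_value:
        price = (alice_value + bob_value) / 2   # any price in between makes both better off
        return "Bob", price
    return "Alice", None

print(who_gets_the_apple(80, 90))   # ('Bob', 85.0): Bob buys it for ~85 cents
print(who_gets_the_apple(80, 60))   # ('Alice', None): Alice keeps it
```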

Trouble is, the FCC doesn’t tell them who actually owns the apple, and even if it did, it wouldn’t tell them whether it’s a Granny Smith or a Gala. The odds of Alice and Bob coming to an agreement without going back to the FCC are slim.

The analogy: The FCC’s rules often don’t make clear who’s responsible, in the end, for solving a mutual interference problem (i.e. who owns the apple); and it’s impossible to know short of a rule making by the FCC what amounts to harm (i.e. what kind of apple it is).

-----------------------
* There's always interference between two nearby radio operators (near in geography or frequency).  While the blame is usually laid on the transmitter operator, it can just as reasonably be placed on the receiver operator for not buying better equipment that could reject the interference.

Friday, July 02, 2010

Social network visualizations - an online symposium

My work on the evolution of FCC lobbying coalitions has been accepted in the JoSS (Journal of Social Structure) Visualization Symposium 2010 (link to my entry). Jim Moody of Duke has done a wonderful job collecting a dozen visualizations of social networks. Each is worth exploring; in particular, see the thoughtful comments that the JoSS staff provided to each entry in order to stimulate debate.

Monday, June 07, 2010

How I Learned to Stop Worrying and Love Interference


(With apologies to Stanley Kubrick.)

Radio policy is fixated on reducing or preventing harmful interference. Interference is seen as A Bad Thing, a sign of failure. This is a glass-half-empty view. While it is certainly a warning sign when a service that used to work suddenly fails, rules that try to prevent interference at all costs lead to over-conservative allocations that under-estimate the amount of coexistence that is possible between radio systems.

The primary goal should not be to minimize interference, but to maximize concurrent operation of multiple radio systems.

Minimizing interference and maximizing coexistence (i.e. concurrent operation) are two ends of the same rope. Imagine metering vehicles at a freeway on-ramp: if you allow just one vehicle at a time onto a section of freeway, people won’t have to worry about looking out for other drivers, but very few cars will be able to move at any one time. Conversely, allowing everybody to enter at will during rush hour will lead to gridlock. Fixating on the prevention of interference is like preventing all possible traffic problems by only allowing a few cars onto the freeway during rush hour.

Interference is nature’s way of saying that you’re not being wasteful. When there is no interference, even though there is a lot of demand, it’s time to start worrying. Rather than minimizing interference with the second-order requirement of maximizing concurrent operation, regulation should strive to maximize coexistence while providing ways for operators to allocate the burden of minimizing interference when it is harmful.

I am developing a proposal that outlines a way of doing this. Here are some of the salient points that are emerging as I draft my ISART paper:

The first principle is delegation. The political process is designed to respond carefully and deliberatively to change, and is necessarily slower than markets and technologies. Therefore, regulators should define radio operating rights in such a way that management of coexistence (or equivalently, interference) is delegated to operators. Disputes about interference are unavoidable and, in fact, a sign of productively pushing the envelope. Resolving them shouldn’t be the regulator’s function, though; parties should be given the means to resolve disputes among themselves by a clear allocation of operating rights. This works today for conflicts between operators running similar systems; most conflicts between cellular operators, say, are resolved bilaterally. It’s much harder when dissimilar operations come into conflict (see e.g. my report (PDF) on the Silicon Flatirons September 2009 summit on defining inter-channel operating rules); to solve that, we need better rights definitions.

The second principle is to think holistically in terms of transmission, reception and propagation; this is a shift away from today’s rules which simply define transmitter parameters. I think of this as the “Three P's”: probabilistic permissions and protections.

Since the radio propagation environment changes constantly, regulators and operators have to accept that operating parameters will be probabilistic; there is no certainty. The determinism of today’s rules that specify absolute transmit power is illusory; coexistence and interference only occur once the signal has propagated away from the transmitter, and most propagation mechanisms vary with time. Even though US radio regulators seem resistant to statistical approaches, some of the oldest radio rules are built on probability: the “protection contours” around television stations are defined in terms of (say) a signal level sufficiently strong to provide a good picture at least 50% of the time, at the best 50% of receiving locations. [1]
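
As a toy illustration of how such a probabilistic criterion can be checked, here is a sketch that tests synthetic field-strength data against an F(50,50)-style “at least 50% of the time, at the best 50% of locations” rule. The threshold and the statistical model of the data are illustrative assumptions, not real broadcast parameters:

```python
import numpy as np

# Minimal sketch of a time/location probabilistic service test in the spirit of
# the F(50,50) TV contours mentioned above. Threshold, targets and the synthetic
# fading data are all illustrative assumptions.
rng = np.random.default_rng(0)

THRESHOLD_DBU = 41.0        # assumed minimum usable field strength (dBuV/m)
TIME_FRACTION = 0.5         # "at least 50% of the time"
LOCATION_FRACTION = 0.5     # "at the best 50% of locations"

# Synthetic samples: rows = receiving locations, columns = points in time
location_medians = rng.normal(45.0, 6.0, size=200)                            # location variability
samples = location_medians[:, None] + rng.normal(0.0, 5.0, size=(200, 500))   # time variability

# A location counts as served if the threshold is met for the required fraction of time
time_availability = (samples >= THRESHOLD_DBU).mean(axis=1)
fraction_of_locations_served = (time_availability >= TIME_FRACTION).mean()

print(f"Fraction of locations served: {fraction_of_locations_served:.2f}")
print("Meets the F(50,50)-style target" if fraction_of_locations_served >= LOCATION_FRACTION
      else "Falls short of the target")
```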

Transmission permissions of licensee A should be defined in such a way that licensee B who wants to operate concurrently (e.g. on nearby frequencies, or in close physical proximity) can determine the environment in which its receivers will have to operate. There are various ways to do this, e.g. the Australian “space-centric” approach [2] and Ofcom’s Spectrum Usage Rights [3]. These approaches implicitly or explicitly define the field strength resulting from A’s operation at all locations where receivers might be found, giving operator B the information it needs to design its system.
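
To make that concrete, here is a small sketch of how operator B might read A’s transmission permission as a field-strength environment for its receivers. The EIRP limit and the free-space propagation model are illustrative assumptions; real rights definitions such as the space-centric and SUR approaches use richer models and statistics:

```python
import numpy as np

# Sketch: translate a neighbour's permitted EIRP into the field strength B's
# receivers would face at various distances, assuming free-space propagation.
EIRP_DBM = 50.0   # assumed permitted EIRP for operator A

def field_strength_dbuv_per_m(eirp_dbm, distance_km):
    """Free-space field strength (dBuV/m) at a given distance from the emitter.

    From E = sqrt(30 * P_eirp) / d (P in watts, d in metres), which works out to
    E[dBuV/m] = EIRP[dBm] + 44.8 - 20*log10(d[km]).
    """
    return eirp_dbm + 44.8 - 20 * np.log10(distance_km)

for d in (0.5, 1, 2, 5, 10, 20):
    print(f"{d:5.1f} km : {field_strength_dbuv_per_m(EIRP_DBM, d):5.1f} dBuV/m")
```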

Receiver protections are declared explicitly during rule making, but defined indirectly in the assigned rights. When a new allocation is made, the regulator explicitly declares the field strength ceilings at receivers that it intends to result from transmissions. In aggregate, these amount to indirectly defined receiver protections. Operators of receivers are given some assurance that no future transmission permissions should exceed these limits. (Such an approach could have prevented the AWS-3 argument.) However, receivers are not directly protected, as might be the case if they are given a guaranteed “interference temperature”, nor is there a need to regulate receiver standards.

While this approach has been outlined in terms of licensed operation, it also applies to unlicensed operation. Individual devices are given permissions to transmit that are designed by the regulator to achieve the desired aggregate permissions that would otherwise be imposed on a licensee. Comparisons of results in the field with these aggregate permissions will be used as a tripwire for changing the device rules. If it turns out that the transmission permissions are more conservative than required to achieve the needed receiver protections, they can be relaxed. Conversely, if the aggregate transmitted energy exceeds the probabilistic limits, e.g. because more devices are shipped than expected or they’re used more intensively, device permissions can be restricted going forward. This is an incentive for collective action by manufacturers to implement “politeness protocols” without the regulator having to specify them.
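
A minimal sketch of the tripwire idea, with all numbers and the measurement model as illustrative assumptions: compare a measured aggregate field-strength distribution against the aggregate limit the regulator intended, and flag whether the per-device rules should be tightened or could be relaxed.

```python
import numpy as np

# Sketch of the aggregate-emission tripwire for unlicensed devices.
# The limit, percentile and synthetic measurements are illustrative assumptions.
rng = np.random.default_rng(1)

AGGREGATE_LIMIT_DBUV_M = 60.0   # assumed aggregate ceiling at protected receivers
PERCENTILE = 90                 # assumed "not to be exceeded more than 10% of the time"

measured = rng.normal(54.0, 4.0, size=10_000)     # field-strength measurement campaign
observed = np.percentile(measured, PERCENTILE)

headroom_db = AGGREGATE_LIMIT_DBUV_M - observed
if headroom_db < 0:
    print(f"Tripwire hit: {-headroom_db:.1f} dB over the limit; tighten device permissions")
else:
    print(f"{headroom_db:.1f} dB of headroom; device permissions could be relaxed")
```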

Notes

[1] O’Connor, Robert A. (1968) Understanding Television’s Grade A and Grade B Service Contours, reprinted in IEEE Transactions on Broadcasting, Vol. 47, No. 3, September 2001, p. 309, http://dx.doi.org/10.1109/11.969381

[2] Whittaker, Michael (2002) Shortcut to harmonization with Australian spectrum licensing, IEEE Communications Magazine, Vol. 40, No. 1. (Jan 2002), pp. 148-155, http://dx.doi.org/10.1109/35.978062

[3] Ofcom (2007) Spectrum Usage Rights: A statement on controlling interference using Spectrum Usage Rights, 14 December 2007, http://www.ofcom.org.uk/consult/condocs/surfurtherinfo/statement/statement.pdf

Monday, May 31, 2010

Why I give service

I have just returned from working in the kitchen during a course at the Northwest Vipassana Center. During one of the breaks I had a fascinating conversation with one of the center managers, who it turns out experiences service very differently from me. She asked that I record my thoughts, and this is what I came up with.

Serving is no fun – for me, at least. Serving a course is about stress, anxiety and fatigue, with a few happy moments when I wish the meditators well as I pass them by. There’s no joy in doing the work, as there is for some, and no joyful release at the end; only relief that it’s over. It’s pretty much like sitting a course, with the difference that I’m just banging my head against a wooden wall, not a brick one.

So why do I do it?

I do it because I think it’s good for me. Working in the kitchen amplifies my weaknesses, and makes it easier to see when and where I’m being unskillful. I come face-to-face with my frailties and failings, and hopefully end the course with another sliver of wisdom.

I do it because serving is a middle ground between sitting practice and living in the real world. Like developing any skill - think about playing a musical instrument - meditation requires hours of solitary practice every day, over decades. However, that practice is only the means to an end, which is to live better with, and for, others. Serving on a course helps me try out the skills I’m learning in a realistically stressful but safe environment. Things can’t spin too far out of control; I’m back on the cushion every few hours, with an opportunity to reboot and start again. And I’m surrounded by people of good will, with direct access to teachers if I need it.

And I do it to repay, in small part, the debt I owe to all those people whose service has made it possible for me to learn this technique and sit courses. I was able to sit because someone else was in the kitchen; now it’s my turn.

Wednesday, May 12, 2010

Improving FCC filing metadata

On 10 May 2010 I filed a comment on two FCC proceedings (10-43 and 10-44, if you must know) concerning ways to improve the way it does business. I argued that transparency and rule-making efficiency could be improved with better metadata on documents submitted to the Electronic Comments Filing System (ECFS).

I recommended that the FCC:
  • Associate a unique identifier with each filer
  • Require that the names of all petitioners be provided when submitting ECFS metadata
  • Improve RSS feed and search functionality
  • Require the posting of digital audio recordings of ex parte meetings
  • Provide a machine interface for both ECFS search and submission (a rough sketch follows)
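
To show what that last recommendation might look like in practice, here is a purely hypothetical sketch; ECFS had no public machine interface when this was written, so the endpoint, parameter names and response format below are invented for illustration only:

```python
import requests

# Hypothetical sketch of a machine-readable ECFS search interface.
# The URL, parameters and response format are assumptions, not a real API.
ECFS_SEARCH_URL = "https://ecfs.fcc.gov/api/filings"   # hypothetical endpoint

def search_filings(proceeding, filer_id=None, limit=25):
    """Fetch filing metadata for a proceeding (assumes the API returns a JSON list)."""
    params = {"proceeding": proceeding, "limit": limit}
    if filer_id:
        params["filer_id"] = filer_id   # only meaningful if filers get unique identifiers
    response = requests.get(ECFS_SEARCH_URL, params=params)
    response.raise_for_status()
    return response.json()

# Example: list recent filings in the two FCC-reform proceedings mentioned above
for filing in search_filings("10-43") + search_filings("10-44"):
    print(filing)
```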

Opt-in for Memory

The Boucher-Stearns privacy measure tries to do many things (press release; May 3 staff discussion draft); too many, according to Daniel Castro at ITIF.
 
One of the issues it doesn’t tackle – and legislation may or may not be the solution – is the persistence of digital information once it has been collected.

In a NY Times context piece called Tell-All Generation Learns to Keep Things Offline, Laura Holson writes that members of the “tell-all generation” are becoming more picky about what they disclose. There’s growing mistrust of social networking sites, and young people keep a closer eye on their privacy settings than oldsters. Holson reports on a Yale junior who says he has learned not to trust any social network to keep his information private, since “If I go back and look, there are things four years ago I would not say today.”

I expect that this concern will grow beyond information collection to encompass retention. (That's already a big concern of law enforcement, of course.) Explicit posts (photos, status updates) will live forever, if for no other reason than sites like the Internet Archive. However, the linkages that people make between themselves and their friends, or themselves and items on the web, are less explicit – and probably more telling. These links are held by the social network services, and I expect that there will be growing pressure on them to forget these links after some time. Finally, there are the inferences that companies make from these links and other user behavior; their ownership is more ambiguous, since they’re the result of a third party’s observations, not the subject’s actions.

My bet is that norms will emerge (by market pressure and/or regulation) that force companies to forget what they know about us. For the three categories I noted above, it might work something like this:
  1. Posts: Retained permanently by default. Explicit user action (i.e. an opt-out) required for them to be deleted
  2. Linkages: Deleted automatically after a period, say five years. User has to elect to have information be retained (opt-in).
  3. Inferences: Deleted after a period, say five years, if user opts out; otherwise kept. This one is tricky; I can also see good reasons to make deletion automatic with an opt-in for retention.
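
A rough sketch of those three defaults in code, with the five-year period taken from the list above and the data model as my own illustrative assumption:

```python
from datetime import datetime, timedelta

# Sketch of the three retention defaults: posts, linkages, inferences.
RETENTION_PERIOD = timedelta(days=5 * 365)   # the "say, five years" from the list

def should_delete(category, created, opted_out=False, opted_in=False, now=None):
    """Return True if an item should be forgotten under the sketched norms."""
    now = now or datetime.utcnow()
    expired = (now - created) > RETENTION_PERIOD

    if category == "post":
        # Kept permanently by default; deleted only on explicit user action (opt-out).
        return opted_out
    if category == "linkage":
        # Deleted automatically after the period unless the user opted in to retention.
        return expired and not opted_in
    if category == "inference":
        # Kept by default; deleted after the period if the user opted out.
        return expired and opted_out
    raise ValueError(f"unknown category: {category}")

# Example: a four-year-old friendship link with no opt-in is still retained
print(should_delete("linkage", datetime(2006, 6, 1), now=datetime(2010, 5, 1)))  # False
```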

However these practices evolve, it’s become clear to me that neither the traditional “notice and choice” regime nor the emerging “approve use” approach is sufficient without a mechanism for forgetting.

Tuesday, May 11, 2010

New Ethics as a Second Language

In lecture 27 of the Teaching Company course on Understanding the Brain, Jeanette Norden observes that we seem to learn morality using the same mechanisms we use for learning language.

Newborns can form all the sounds used in all the languages on the planet, but with exposure to their mother tongue they become fluent in a subset. It eventually becomes almost impossible to form some of the unused sounds, and the idiosyncrasies of their language seem natural and universal.

This makes me wonder about the difficulties an immigrant might have in learning the peculiarities of a new culture. I’ve definitely been confounded from time to time by unexpected variations in “the right thing to do” – and there’s really very little difference between the culture I grew up in and the ones I moved to as an adult. “Culture shock” may not just be language and customs; it probably involves morality, too, since every system of ethics is a mixture of universals and particulars.

Of course, that’s not to say that one cannot become fluent in an alternative morality. It might just be harder than a native “moralizer”, particularly one who has never had to learn “ethics as a second language”, might assume.

And traditionalists around the world who claim that wall-to-wall American media “corrupt the morals of our youth” are probably right: I'd guess young people pick up the ethical biases of American culture by watching movies and TV even more easily than they pick up English.

Monday, May 10, 2010

Negotiating the Price of Privacy

Kurt Opsahl of the EFF has compiled a time line of changes to Facebook’s privacy policies over the last five years; it tells me a story of a shifting power balance. (Thanks to Peter Cullen for the link.)

It’s a quick read, but in a nutshell: in 2005, the user controlled where information went. By December 2009, Facebook considered some information to be publicly available to everyone, and exempt from privacy settings.

I vaguely remember Esther Dyson describing privacy more than two decades ago as a good that users would trade. That’s how I read the time line. It’s an implicit negotiation between Facebook and its users over the value of personal information (let’s call it Privacy, for short) vs. the value of the service Facebook provides (call it Service).

In the early days, the service had few users, and the network effect hadn’t kicked in. Facebook needed users more than they needed Facebook, and so Facebook had to respect privacy – it was worth more to users than the Facebook service was:
Service << Privacy
Since the value of a social network grows super-linearly as the number of members increases, the value of the service grew rapidly as membership increased. A user’s perception of the value of privacy didn’t change much; it probably grew a little, but nowhere near as fast. Probably sometime around 2008, the value of the service started overtaking the value of privacy:
Service ≈ Privacy
Facebook’s hard-nosed approach to privacy (or lack of it) makes clear that it now has the upper hand in the negotiation. An individual user needs Facebook more than vice versa:
Service > Privacy
One take-away from this story is that the privacy settings users will accept are not a general social norm, but the result of an implicit negotiation between the customer and supplier. When a supplier becomes indispensable, it can raise its prices, either explicitly ($$) or implicitly (e.g. in privacy conditions). Other services therefore should not assume that they can get away with Facebook’s approach. They can make a virtue of necessity by offering better privacy protection – at least until the day when their service is so valuable that they, too, can change the terms.

Thursday, April 29, 2010

Non-privacy goes non-linear

I’ve never been able to “get” Privacy as a policy issue. Sure, I can see that there are plausible nightmare scenarios, but most people just don’t seem to care. What a company, or a government, knows about one just doesn’t rate as something to worry about. Perhaps the only angle that might get the pulse racing is identity theft; losing money matters. But no identity theft stories have inflamed the public’s imagination, or mine.

The recent spate of stories about privacy on social networking sites has led me to reconsider – a little. I still don’t think Joe Public cares, but the technical and policy questions of networked privacy intrigue me more than the flow of personal information from a citizen to an organization and its friends.

The trigger for the current round of privacy worries was the launch of Google Buzz. Good Morning Silicon Valley puts it in context with Google, Buzz and the Silicon Tower, and danah boyd’s keynote at SXSW 2010 reviews the lessons and implications.

Mathew Ingram’s post Your Mom’s Guide to Those Facebook Changes, and How to Block Them alerted me to the implications of Facebook’s “Instant Personalization” features.

Woody Leonhard’s article Hotmail's social networking busts your privacy showed that Google and Facebook aren’t the only ones who can scare users about what personal information is being broadcast about them.

I think there may be a profound mismatch between the technical architectures of social networking sites and the mental models of their users.

This mismatch is an example of the “hard intangibles” problem that I wrestled with inconclusively a few years ago: our minds can’t effectively process the complexity of the systems we’re confronted with.

Two examples: attenuation and scale.

We assume that information about us flows more sluggishly the further it goes. My friends know me quite well, their friends might know me a little, and the friends-of-friends-of-friends are effectively ignorant. In a data network, though, perfect fidelity is maintained no matter how many times information is copied. We therefore have poor intuition about the fidelity with which information can flow away from us across social networks.

It’s a truism that the mind cannot grasp non-linear growth; we’re always surprised by the explosion of compound interest, for example. On a social network, the number of people who are friends-of-friends-of-…-of-friends grows exponentially; but I would bet that most people think it grows only linearly, or perhaps even stays constant. Thus, we grossly underestimate the number of people to whom our activities are visible.
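
A back-of-the-envelope sketch of that point: assume each person has F friends and ignore overlap between friend circles (both assumptions mine, chosen only to make the arithmetic visible).

```python
# How many people sit within k "friend hops", versus the linear guess our intuition makes.
F = 100   # assumed average number of friends

for k in range(1, 5):
    actual_reach = F ** k        # grows exponentially with the number of hops
    linear_intuition = F * k     # what we tend to assume
    print(f"{k} hops: ~{actual_reach:,} people (linear intuition: {linear_intuition:,})")
```
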
My most recent personal experience was when I noticed a new (?) feature on Facebook two days ago. One of my friends had commented on the status update of one of their friends, who is not in my friend network. Not only did I see the friend-of-my-friend’s update; I saw all the comments that their friends (all strangers to me) had made. I’m pretty sure the friends-of-my-friend’s-friend had no idea that some stranger at three removes would be reading their comments.

If you find the “friends-of-my-friend’s-friend” construct hard to parse, then good: I made it on purpose. I suspect that such relationships are related to the “relational complexity” metrics defined by Graeme Halford and colleagues; Halford suggests that our brains max out at around four concurrent relationships.

I’m pretty confident that the Big Name Players all just want to do right by their users; the trouble is that the social networks they’re building for us are (of necessity?) more complicated than we can handle. It hit home when I tried to grok the short blog post Managing your contacts with Windows Live People. I think I figured it out, but (a) I’m not sure I did, and (b) I'd rather not have had to.

Saturday, April 24, 2010

Knowing with the Body

The neurological patient known as Emily cannot recognize the faces of her loved ones, or even herself in a mirror. [1] She doesn’t have conscious awareness that she knows these people, but her body does. When she is shown a series of photos of known and unknown people, she cannot tell them apart; however, the electrical conductance of her skin increases measurably when she’s looking at the face of someone she knows. [2] It’s not that she’s lost the ability to recognize people in general; she can still recognize her family, and herself, by their voices.

Damasio notes that skin-conductance responses are not noticed by the patient. [3] However, perhaps someone who is well practiced in observing body sensations – for example, a very experienced meditator in the Burmese vipassana tradition [4] – would be able to discern such changes. I suspect so; in which case, a patient like Emily would be able to work around her recognition problem by noting when her skin sensations change. It’s known that such patients use workarounds; Emily, for example, “sits for hours observing people’s gaits and tries to guess who they are, often successfully.” [5]

Both Damasio’s theory of consciousness and Burmese vipassana place great importance on the interactions between body sensations and the mind. As I understand Damasio, he proposes that consciousness works like this:

1. The brain creates representations (he calls them maps or images) of things in the world (e.g. a face), and of the body itself (e.g. skin conductance, position of limbs, state of the viscera, activity of the muscles, hormone levels).

2. In response to these representations, the brain changes the state of the body. For example, when it sees a certain face, it might change skin conductance; when it discerns a snake it might secrete adrenaline to prepare for flight, tighten muscle tone, etc. Damasio calls these responses “emotions”, which he defines as “a patterned collection of chemical and neural responses that is produced by the brain when it detects the presence of an emotionally competent stimulus — an object or situation, for example.” [6]

3. In a sufficiently capable brain (which is probably most of them) there is a higher order representation that correlates changes in the body with the object that triggered these changes. This second-order map is a feeling, in Damasio’s terminology: “Feelings are the mental representation of the physiological changes that characterize emotions.” [6], [7] Feelings generate (or constitute – I’m not sure which…) what he calls the “core self” or “core consciousness”.

Since I find pictures to be helpful, I’ve created a short slide animation on SlideShare that shows my understanding of this process.

In Emily’s case, steps 1 and 2 function perfectly well, but the correlation between a face and changes in the body fails in step 3. The higher-order correlation still works for voices and body changes, however, since she can recognize people by their speech.

More acute awareness of body sensation might not just help clinical patients like Emily. In the famous Iowa Gambling Task, Damasio and Antoine Bechara showed that test subjects were responding physiologically (again, changes in skin conductance) to risky situations long before they were consciously aware of them. A 2009 blog post by Jonah Lehrer includes a good summary of the Iowa Gambling Task and its results. Lehrer reports new research indicating that people who are more sensitive to “fleshy emotions” are better at learning from positive and negative experiences.

NOTES

[1] This post is based on material in Antonio Damasio’s book The Feeling of What Happens: Body and Emotion in the Making of Consciousness (Harcourt 1999). Emily’s case is described on p. 162 ff. I also blogged about this topic in 2005 after reading “Feeling” for the first time.

[2] Damasio op. cit. [1], p. 300

[3] Ibid.

[4] For example, vipassana as taught by S N Goenka. Other mindfulness meditation traditions also attend to body sensations (see e.g. Phillip Moffitt, “Awakening in the Body”, Shambhala Sun, September 2007) but the Burmese tradition places particular emphasis on it.

[5] Damasio op. cit. [1], p. 163

[6] Damasio, A. (2001) "Fundamental feelings", Nature 413 (6858), 781. doi:10.1038/35101669. Note that Damasio’s definition of “emotion” is narrower than usual usage, which refers to affective states of consciousness like joy, sorrow, fear, or hate. Damasio limits himself to the physiological changes which are more typically considered to be an accompaniment to, or component of, these mental agitations.

[7] Feelings so defined seem to correspond to what S. N. Goenka, a well-known teacher in the Burmese vipassana tradition, calls sensations, his preferred translation of the Pāli term vedanā, also often translated as “feelings”. There is some debate about whether vedanā refers just to sensations-in-the body, as Goenka contends, or to any and all pleasant, painful or neutral feelings such as joy, sorrow, etc.

Friday, April 16, 2010

Bill delegates caller ID regulations to FCC

SiliconValley.com reports that the US House has approved a measure that would outlaw deceptive Caller ID spoofing.

Since I'm currently enamored of a principles-based approach to regulating rapidly changing technology businesses -- that is, policy makers should specify the goals to be achieved, and delegate the means to agents nearer the action -- I'm on the look-out for working examples.

This seems to be one: the bill leaves it up to the FCC to figure out the details of regulation and enforcement.

The FCC itself could delegate further if it so chose, for example by waiting to see whether telephone companies come up with effective ways of regulating this problem themselves before devising and imposing its own detailed rules.

Friday, March 26, 2010

Trying to explain the Resilience Principles

I was honored to participate in a panel in DC on "An FCC for the Internet Age: Reform and Standard-Setting" organized by Silicon Flatirons, ITIF and Public Knowledge on March 5th, 2010. My introductory comments tried to summarize the "resilience principles" in five minutes: the video is available on the Public Knowledge event page, starting at time code 02:04:45. The panel starts at around 01:57:00.

The earlier, fifteen minute pitch I gave on a panel on "The Governance Challenges of Cooperation in the Internet Ecosystem" at the Silicon Flatirons annual conference in Boulder on February 1st, 2010 can be found here at time code 01:36:00. My slides are up on Slideshare.net, and a paper is in preparation for JTHTL.

This work is an outgrowth of my TPRC 2008 paper “Internet Governance as Forestry” (SSRN).

Saturday, March 06, 2010

Obviating mandatory receiver standards

Two remarks I heard at a meeting of a DC spectrum advisory committee helped me understand that endless debates about radio receiver standards are the result of old fashioned wireless rights definitions. The new generation of rights definitions could render the entire receiver standards topic moot.

First, a mobile phone executive explained to me that his company was forced to develop and install filters in the receiver cabinets used by broadcasters for electronic newsgathering because it had a “statutory obligation to protect” these services, even though they operated in different frequency ranges.

Second, during the meeting the hoary topic of receiver standards was raised again; it’s a long-rehearsed problem that shows no sign of being solved. It’s a perennial topic because wireless interference depends as much on the quality of the receiver as on the characteristics of the transmitted signal. A transmission that would be ignored by a well-designed receiver could cause severe degradation in a poor (read: cheap) receiver. Transmitters are thus at the mercy of the worst receiver they need to protect.

A statutory obligation to protect effectively gives the protectee a blank check; for example, the protectee can change to a lousy receiver, and force the transmitting licensee to pay for changes (in either their transmitters or the protectee’s receivers) to prevent interference. This is an open-ended transfer of costs from the protectee to the protector.

The protectors thus dream of limiting their downside by having the regulator impose receiver standards on the protectee. If the receiver’s performance can be no worse than some lower limit, there is a limit on the degree of protection the transmitter has to provide.

The problem with mandatory receiver standards is that they get the regulator into the game of specifying equipment. This is a bad idea, since any choice of parameters (let alone parameter values) enshrines a set of assumptions about receiver design, locks in specific solutions, and forecloses innovation that might solve the problem in new ways. Manufacturers have always successfully blocked the introduction of mandatory standards on the basis that they constrain innovation and commercial choice.

An open-ended statutory obligation to protect therefore necessarily leads to futile calls for receiver standards.

One could moot receiver standards by changing how wireless rights are defined. Rather than bearing an open-ended obligation to protect, the transmitting licensee should have an obligation to operate within specific limits on the energy delivered into frequencies other than its own. These transmission limits could be chosen to ensure that adjacent receivers are no worse off than they were under an “open-ended obligation to protect” regime. (The “victim” licensee will, though, lose the option value of being able to change their system specification at will.)
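
A minimal sketch of the kind of calculation behind such a limit: work backwards from the protection the neighboring receiver used to enjoy to a cap on the energy delivered into its frequencies. All the numbers are illustrative assumptions, not values from any actual license.

```python
# Back-of-the-envelope cross-channel emission budget (all values assumed).
PROTECTED_SIGNAL_DBM = -70.0   # wanted-signal level at the neighboring ("victim") receiver
REQUIRED_C_TO_I_DB = 20.0      # carrier-to-interference ratio needed for acceptable service
COUPLING_LOSS_DB = 80.0        # path plus antenna loss from transmitter to that receiver

max_interference_at_receiver_dbm = PROTECTED_SIGNAL_DBM - REQUIRED_C_TO_I_DB
emission_limit_dbm = max_interference_at_receiver_dbm + COUPLING_LOSS_DB

print(f"Max tolerable interference at the receiver: {max_interference_at_receiver_dbm:.1f} dBm")
print(f"Implied cross-channel emission limit at the transmitter: {emission_limit_dbm:.1f} dBm")
```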

The main benefit is certainty: the recipient of a license will know at the time of issue what kind of protection they’ll have to provide. The cellular company mentioned above didn’t find out until after the auction how much work they would have to do to protect broadcasters since nobody (including the FCC) understood how lousy the broadcasters’ receivers were.

The regulatory mechanisms for doing this are well known, and have been implemented; they include the “space-centric” licensing approach used in Australia (PDF), and Spectrum Usage Rights (SURs) in the UK.

Moving to new rights regimes is challenging; Ofcom’s progress has been slow. One of the main difficulties is that licensees for new allocations prefer to do things the old, known way. One of the supposed drawbacks of SURs is that the benefits of certainty seem to accrue to a licensee’s neighbor, rather than to the new licensee themselves. However, removing the unlimited downside in an open-ended obligation to protect adjacent operations should prove attractive. The whining will now come from the neighbors who will lose their blank check; careful definition of the licensee’s cross-channel interference limits to maintain the status quo should take the sting out of the transition.

Friday, February 26, 2010

Engineers, Commissars and Regulators: Layered self-regulation of network neutrality

My post Ostrom and Network Neutrality suggested that a nested set of self- or co- regulatory enterprises (Ostrom 1990:90) could be useful when designing regulatory approaches to network neutrality, but I didn’t give any concrete suggestions. Here’s a first step: create separate arenas for discussing engineering vs. business.

One’s immediate instinct when devising a shared regulatory regime (see the list of examples at the end) might be to involve all the key players; at least, that’s what I pointed to in When Gorillas Make Nice. However, I suspect that successful self-regulatory initiatives have to start with a relatively narrow membership and scope: typically, a single industry, rather than a whole value chain. That’s the only way to have a decent shot at creating and enforcing basic norms. Legitimacy will require broadening the list of stakeholders, but too many cooks at the beginning will lead to kitchen gridlock.

Let’s stipulate for now that the key problem is defining what “acceptable network management practices” amount to. Most participants in the network neutrality debate agree that ISPs should be able to manage their networks for security and efficiency, even if there is disagreement about whether specific practices are just good housekeeping or evil rent-seeking.

The engineering culture and operating constraints of different networks are quite distinct: phone companies vs. cable guys; more or less symmetrical last mile pipes; terminating fiber in the home vs. at cabinet; and not least, available capacity in wireline vs. wireless networks. Reconciling these differences and creating common best practices within the network access industry will be hard; that’s the lowest layer of self-regulation. The “Engineers” should be tasked with determining the basic mechanisms of service provision, monitoring compliance with norms, and enforcing penalties against members who break the rules.

The core participants are the telcos (e.g. Verizon, AT&T) and cable companies (e.g. Comcast, Time Warner Cable), in both their wireline and wireless incarnations. Only within a circumscribed group like this is there any hope of detailed agreement about best practices, let alone the monitoring and enforcement that is essential for a well-functioning self-regulatory organization. Many important network parameters are considered secret sauce; while engineers inside the industry circle can probably devise ways to monitor each other’s compliance without giving the MBAs fits, there’s no chance that they’ll be allowed to let Google or Disney look inside their network operating centers.

The next layer of the onion adds the companies who use these networks to deliver their products: web service providers like Google, and content creators like Disney. Let’s call this group the “Commissars”. This is where questions of political economy are addressed. The Commissars shape the framework within which the network engineers decide technical best practices. It’s the business negotiation group, the place where everybody fights over dividing up the rents; it needs to find political solutions that reconcile the very different interests at stake:

  1. The ISPs want to prevent regulation, and be able to monetize their infrastructure by putting their hand in Google’s wallet, and squeezing content creators.
  2. Google wants to keep their wallet firmly shut, and funnel small content creators’ surplus to Mountain View, not the ISPs.
  3. Large content creators want to get everybody else to protect their IPR for them.
  4. New content aggregators (e.g. Miro) want a shot at competing in the video business with the network facility owners.
This is not an engineering argument, and a Technical Advisory Group (TAG) along the lines described by Verizon and Google (FCC filing) would not be a suitable vehicle for addressing such questions. The Commissars are responsible for answering questions of collective choice regarding the trade-offs in network management rules, and adjudicating disputes that cannot be resolved by the Engineers among themselves.

The Engineers can work in parallel to the Commissars, and don’t need to wait for the political economists to fight out questions about rents; in any case, it will be helpful for the Commissars to have concrete network management proposals to argue about. There will be a loop, with the conclusions of one group influencing the other. The Commissars inform the Engineers about the constraints on what would constitute acceptable network management, and the Engineers inform the Commissars about what is practical.

Finally, government actors – call them the “Regulators” – set the rules of the game and provide a backstop if the Engineers and Commissars fail to come up with a socially acceptable solution, or fail to discipline bad behavior. Since the internet and the web are critical infrastructure, governments speaking for citizens are entitled to frame the overall goals that these industries should serve, even though they are not well qualified to define the means for achieving them. Final adjudication of unresolved disputes rests with the Regulators.

References

Ofcom, Initial assessments of when to adopt self- or co-regulation, December 10, 2008,
http://www.ofcom.org.uk/consult/condocs/coregulation/condoc.pdf

Elinor Ostrom, Governing the Commons: The Evolution of Institutions for Collective Action, Cambridge University Press, 1990

Philip J. Weiser, Exploring Self Regulatory Strategies for Network Management: A Flatirons Summit on Information Policy, August 25, 2008,
http://www.silicon-flatirons.org/documents/publications/summits/WeiserNetworkManagement.pdf

Examples of self- and co-regulatory bodies

The Internet Watch Foundation (IWF) in the UK works to standardize procedures for the reporting and taking-down of abusive images of children. It was established in 1996 by the internet industry to allow the public and IT professionals to report criminal online content in a secure and confidential way. (Ofcom 2008:9, and IWF)

The UK “Classification Framework” for content on mobile phones is provided by the Independent Mobile Classification Body (IMCB) with the aim of restricting young people’s access to inappropriate content. It is the responsibility of content providers to self-classify their own content as “18” where appropriate; access to such content will be restricted by the mobile operators until customers have verified their age as 18 or over with their operator. (Ofcom 2008:9, and IMCB)

The Dutch organization NICAM (Nederlands Instituut voor de Classificatie van Audiovisuele Media) administers a scheme for audiovisual media classification. It includes representatives of public and commercial broadcasters, film distributors and cinema operators, distributors, videotheques and retailers. (Ofcom 2008:9, and NICAM)

Amateur radio service and frequency coordinators provide examples of self-regulation in spectrum policy. The American Radio Relay League (ARRL) has an understanding with the FCC that it manages the relevant enforcement activities related to the use of ham radio. Only in the most egregious cases will ARRL report misbehavior to the FCC Enforcement Bureau. (Weiser 2008:23)

The Better Business Bureau’s National Advertising Division (NAD) enforces US rules governing false advertising, using threats of referrals to the FTC to encourage compliance with its rules. (Weiser 2008:24, and NAD)

US movie ratings are provided by a voluntary system operated by the MPAA and the National Association of Theater Owners.

Friday, February 12, 2010

Ostrom and Network Neutrality

My previous post scratched the surface of a self-regulatory solution to network neutrality concerns. While this isn’t exactly a common pool resource (CPR) problem, I find Elinor Ostrom’s eight principles for managing CPRs helpful here (Governing the Commons: The evolution of institutions for collective action, 1990).

Jonathan Sallet boils them down to norms, monitoring and enforcement, and that’s a good aide memoire. It’s useful, though, to look at all of them (Ostrom 1990:90, Table 3.1):
1. Clearly defined boundaries: Individuals or households who have rights to withdraw resource units from the CPR must be clearly defined, as must the boundaries of the CPR itself.

2. Congruence between appropriation and provision rules and local conditions: Appropriation rules restricting time, place, technology, and/or quantity of resource units are related to local conditions and to provision rules requiring labor, material, and/or money.

3. Collective-choice arrangements: Most individuals affected by the operational rules can participate in modifying the operational rules.

4. Monitoring: Monitors, who actively audit CPR conditions and appropriator behavior, are accountable to the appropriators or are the appropriators.

5. Graduated sanctions: Appropriators who violate operational rules are likely to be assessed graduated sanctions (depending on the seriousness and context of the offense) by other appropriators, by officials accountable to these appropriators, or by both.

6. Conflict-resolution mechanisms: Appropriators and their officials have rapid access to low-cost local arenas to resolve conflicts among appropriators or between appropriators and officials.

7. Minimal recognition of rights to organize: The rights of appropriators to devise their own institutions are not challenged by external governmental authorities.

8. (For CPRs that are parts of larger systems) Nested enterprises: Appropriation, provision, monitoring, enforcement, conflict resolution, and governance activities are organized in multiple layers of nested enterprises.
Many but not all of these considerations are addressed in the filing and my comments: The headline of section B that “self-governance has been the hallmark of the growth and success of the Internet” reflects #2. My point about involving consumers speaks to #3. The TAGs mooted in the letter address #4 and #6, but not #5. The purpose of the letter is to achieve #7.

In addition to the lack of sanctions, two other key issues are not addressed. Principle #1 addresses a key requisite for a successful co-regulatory approach: that industry is able to establish clear objectives. Given the vagueness of the principles in the filing, it’s still an open question whether the parties can draw a bright line around the problem.

I believe #8 can help: create a nested set of (self- or co-) regulatory enterprises. While I don’t yet have concrete suggestions, I’m emboldened by the fact that nested hierarchy is also a hallmark of complex adaptive systems, which I contend are a usable model for the internet governance problem. Ostrom’s three levels of analysis and processes offer a framework for nesting (1990:53):
  • Constitutional choice: Formulation, Governance, Adjudication, Modification
  • Collective choice: Policy-making, Management, Adjudication
  • Operational choice: Appropriation, Provision, Monitoring, Enforcement
I think the TAGs are at the collective choice level. It would be productive to investigate the institutions one might construct at the other two levels. The FCC could usefully be involved at the constitutional level; even if one doesn't dive into a full-scale negotiated rule-making or "Reg-Neg", government involvement would improve legitimacy (cf. Principle #7). At the other end of the scale, operational choices include mechanisms not just for monitoring (and some tricky questions about disclosure of "commercially confidential" information) but also enforcement. The latter could be as simple as the threat of reporting bad behavior to the appropriate agency, as the Better Business Bureau’s National Advertising Division does (see Weiser 2008:21 PDF).