Thursday, October 01, 2009

Spectrum Databases (Not)

The idea of using a database to regulate radio operation – or, to “control access to spectrum”, to use the S-word nomenclature – has been gaining ground.

For example, both Michael Calabrese (The End of Spectrum ‘Scarcity’, New America Foundation Wireless Future Program Working Paper No. 25, June 2009) and Kevin Werbach (Castle in the Air: A Domain Name System for Spectrum, TPRC September 2009) have argued that the database(s) contemplated to manage device operation in the TV white spaces could be the foundation for a method to increase the amount of radio operation.

Thinking through how such a database might be used shows the advantage of approaching radio regulation as coordinating operations, rather than the conventional approach of “dividing up spectrum”.

The regulatory challenge is therefore not "spectrum databases" but "radio operation databases".

To a first approximation – and perhaps even as the ultimate solution, if one uses the “spectrum” approach – a database would be a listing of “vacant” channels; a device would query the database for “available” channels, and operate in one. When one starts from the premise that spectrum is an asset like land, to be divided up and distributed, vacancy is a self-evident concept; it derives from the attributes of the underlying asset, not from the intended use.

However, context is everything in radio operation. Whether harmful interference will result from the operation of an added radio system depends not only on its transmissions, but also on the transmit and receive characteristics of the incumbent system.

Consider, for example, three channels: A, B, and C. Let’s say incumbent system #1 operates using channel A. Channels B and C are nominally vacant. Can an incoming system #2 operate in those channels? If both systems #1 and #2 use traditional cellular technology (i.e. FDD, e.g. 3G), the answer is yes. But if #1 uses 3G and #2 uses a TDD technology like WiMAX, then the answer is no: there needs to be a guard band between them, and system #2 can only use channel C. Channel B needs to be left “vacant”. (This is a live issue: see e.g. Ars Technica on the argument between T-Mobile and M2Z over the rules for the AWS-3 band.)
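To make the context-dependence concrete, here is a minimal sketch of such a compatibility check. The guard-band rule table and all names are my own illustrative assumptions, not anything in the FCC’s rules:

```python
# A minimal sketch: "vacancy" depends on the technology pairing, not just
# on the channel plan. The rule table below is an illustrative assumption.

# Guard band required, in channels, between incumbent and entrant technologies.
GUARD_CHANNELS = {
    ("FDD", "FDD"): 0,  # paired cellular systems can abut
    ("FDD", "TDD"): 1,  # mixing duplexing schemes needs a channel of separation
    ("TDD", "FDD"): 1,
    ("TDD", "TDD"): 1,  # assumption: unsynchronized TDD systems also need a guard
}

def usable_channels(channels, occupied, incumbent_tech, entrant_tech):
    """Which channels can the entrant use, given the guard-band rules?"""
    guard = GUARD_CHANNELS[(incumbent_tech, entrant_tech)]
    return [
        ch for i, ch in enumerate(channels)
        if ch not in occupied
        and all(abs(i - channels.index(o)) > guard for o in occupied)
    ]

channels = ["A", "B", "C"]
print(usable_channels(channels, {"A"}, "FDD", "FDD"))  # ['B', 'C']
print(usable_channels(channels, {"A"}, "FDD", "TDD"))  # ['C'] -- B stays "vacant"
```

The same channel plan yields different vacancies depending on the technology pairing – which is the point.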

A mental model informed by spectrum-as-land is therefore not an ideal guide to understanding what needs to be in the database. (More generally, one needs to refine the metaphor to better guide regulation, as Weiser and Hatfield did last year by introducing the concept of "zoning the spectrum" in Spectrum Policy Reform and the Next Frontier of Property Rights, 15 Geo. Mason L. Rev. 549.)

An approach grounded in coordinating operations, on the other hand, leads to the understanding that what needs to be in the database is not just a frequency range and geographic region, but all the relevant parameters of an incumbent operation. The short list would add receiver performance (ability to reject interference) and duty cycle (near-constant transmission like cellular systems, vs. very intermittent but intense uses like firefighting) to the usual suspects of transmitter location, emitted power, and transmit mask.

The task is not to find a “vacant channel”, but to determine if an incoming operator will cause harmful interference. This requires, in addition to the operating parameters of the incumbent and incoming systems, information about the spatial distribution of incumbent and incoming radios, and a propagation model to connect the two.
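As a sketch of what such a record might contain, and how a coordinator might use it – with field names, the protection margin, and a free-space propagation model chosen purely for illustration – consider:

```python
import math
from dataclasses import dataclass

@dataclass
class RadioOperation:
    """One record in an operations database: the usual suspects plus
    receiver performance and duty cycle. Field names are assumptions."""
    freq_mhz: float            # center frequency
    bandwidth_mhz: float
    lat: float                 # transmitter location
    lon: float
    eirp_dbm: float            # emitted power
    duty_cycle: float          # 0..1: near-constant vs. intermittent use
    rx_rejection_db: float     # receiver's ability to reject unwanted emissions
    rx_sensitivity_dbm: float  # weakest signal the receiver must still decode

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss: a crude stand-in for a real propagation model."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

def harmful(entrant: RadioOperation, incumbent: RadioOperation,
            distance_km: float, margin_db: float = 6.0) -> bool:
    """Does the entrant's signal, after path loss and the incumbent receiver's
    rejection, intrude on the incumbent's protection margin?"""
    received = entrant.eirp_dbm - fspl_db(distance_km, entrant.freq_mhz)
    effective = received - incumbent.rx_rejection_db
    return effective > incumbent.rx_sensitivity_dbm - margin_db
```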

Ofcom is the regulator that has thought most deeply about ways to better characterize the interference characteristics of radio systems; see e.g. Ofcom’s Guide to Spectrum Usage Rights (SURs) and William Webb’s recent paper Licensing Spectrum: A discussion of the different approaches to setting spectrum licensing terms.

A well-founded framework for generalizing the white space database – where interference management between incumbents and new entrants is hard-coded into the FCC rules for white space device operation – could benefit from new radio operating metaphors (grind axe: see my De-situating spectrum: Rethinking radio policy using non-spatial metaphors, DySPAN 2008) and the application of a SUR-like approach.

One will also have to think carefully about the minimal set of parameters needed to facilitate interference avoidance, since it's easy but economically inefficient to come up with a very long list of attributes that describe radio operations. The Silicon Flatirons Center recently examined this issue in a summit on defining out-of-band operating rules.

Tuesday, September 29, 2009

No more S-word

I’ve undertaken a language challenge: can one talk about radio policy without using the word “spectrum” (except in quotes)?

My guess is that I don’t need to use the S-word at all; that it can be replaced with simple terms rather than clumsy paraphrases; and that the clarity of a text, and the understanding of the reader, will be greatly improved if one avoids it.

I first raised this possibility in the DySPAN 2008 paper De-Situating Spectrum: Rethinking Radio Policy Using Non-Spatial Metaphors where I recommended a “restatement of wireless policy in terms of system operation rather than spectrum.”

The word “spectrum” has many meanings, depending on context. In policy documents it’s usually short-hand for the topic dealt with by radio regulators, aka their “object of governance”. The intended meanings include a radio license; a range of frequencies; or all the parameters (frequency, geography, transmit power mask, allowed use, single or paired bands, etc.) associated with a radio license.

Engineers use “spectrum” to refer to a range of frequencies, or sometimes to electromagnetic phenomena. Less frequently – curiously, since this is closest to the dictionary definition – they use it to refer to the distribution of electromagnetic energy that results from radio operation.

In short, the following substitution covers most cases:
FOR spectrum SAY radio license OR radio operation OR frequencies
The S-word is also used in various combinations; here are some translations:
FOR acquire spectrum SAY acquire permissions

FOR sharing spectrum SAY coordinating operation

FOR use of spectrum SAY operation of radios

FOR spectrum rights SAY rights to operate

FOR spectrum allocation (noun) SAY license type

FOR spectrum allocation (verb) SAY deciding use

FOR spectrum assignment (verb) SAY authorizing a radio operator

FOR Dynamic Spectrum Access ("DSA") SAY dynamic radio operation

FOR stockpiling spectrum SAY stockpiling licenses

FOR demand for spectrum SAY demand for licenses

FOR manage spectrum SAY manage radio operation

FOR improve the efficiency of spectrum use SAY increase concurrent radio operation

FOR a chunk of spectrum SAY operations concentrated in a band
The value of more precise terminology becomes obvious when one looks through this list. One can distinguish two distinct referents of “spectrum”: the parameters of operation of radios, and the rights to operate. It’s a distinction between assets and operations. One can easily put radio licenses (“spectrum”) on a balance sheet, but not the institutional and technological ways of coordinating radio operation (“spectrum”).
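Mechanically, most of these translations could even be scripted; a toy sketch (the bare S-word is too ambiguous to auto-replace, so it gets flagged for human judgment):

```python
import re

# The translation table above as a toy search-and-replace script.
TRANSLATIONS = {
    "acquire spectrum": "acquire permissions",
    "sharing spectrum": "coordinating operation",
    "use of spectrum": "operation of radios",
    "spectrum rights": "rights to operate",
    "stockpiling spectrum": "stockpiling licenses",
    "demand for spectrum": "demand for licenses",
    "manage spectrum": "manage radio operation",
}

def de_spectrumize(text):
    for phrase, plain in TRANSLATIONS.items():
        text = re.sub(phrase, plain, text, flags=re.IGNORECASE)
    # The bare S-word needs context: license? frequencies? operation?
    return re.sub(r"\bspectrum\b", "[S-WORD?]", text, flags=re.IGNORECASE)

print(de_spectrumize("Carriers stockpiling spectrum distort demand for spectrum."))
# Carriers stockpiling licenses distort demand for licenses.
```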

But why bother? An obvious retort is that this is just nitpicking: “Everybody knows what they mean by the word in a given context.” I argue that it’s important because the connotations of words matter, and change how we see the world.

Constant, thoughtless use of the S-word without teasing apart its meanings creates a thing: “the spectrum”. We come to accept as real the illusion that we’re dealing with a concrete thing (like bushels of corn) rather than the behavior of devices and their owners. If one takes away the radios, “spectrum” as an object of governance ceases to exist, although “spectrum” in the sense of “electromagnetic phenomena” persists. This illusion leads to the fallacy that “spectrum” can be counted like bushels of corn, and that it is permanently divisible and inalienable, whereas it is in fact a regulated socio-technical arrangement.

I’m not denying that interference can occur between radio systems, nor that property rights can facilitate the coordination of radio operation. Rather, I’m suggesting that “spectrum” leads too easily to far-reaching conclusions that deserve deeper scrutiny, such as that wireless licenses are necessarily exclusive rights to operate in fixed frequency ranges. A focus on the behavior of radio systems, which changes constantly as technology and institutions evolve, rather than on some spectrum-as-thing, can produce a more robust and efficient way to coordinate radio operation.

Thursday, September 24, 2009

A cybersecurity taxonomy

I recently chaired a panel on “Cybersecurity and Digital Identity” at a USC roundtable preparing for the APEC Ministerial in 2011.

The links between cybersecurity, digital identity and trade were not immediately obvious to me, and since security isn’t an area where I can even pretend to have expertise, it forced me to think through the topic from the ground up.

I ended up reframing the topic as “the protection of assets in the digital age.” Not “digital assets”, although some assets are undoubtedly digital. Some concrete assets have digital dimensions: for example, a compromised SCADA system can deprive a city of its water supply. This is a new risk because the use of standardized/open solutions and the growing internet connections between SCADA systems and office networks have made them more vulnerable to attack. And while a person’s reputation isn’t digital as such, information technology has changed how reputations are constructed, disseminated, and need to be protected.

The next step is to categorize the assets that need to be protected, and for that one can consider various attributes. One useful categorization is the motive for threatening assets; I submit that Sex, Money, and Power are the three important motivations (in all things!).

Sex is about status – high status improves reproductive success; into this category would fall hackers who build exploits to show their prowess, and people who want to build a digital persona.

Money refers to economic motivations, whether protecting intellectual property rights in content through encryption, or building botnets for fraud or blackmail.

Power is perhaps the motivation least talked about until recently: it’s the pursuit of national interest through IT, e.g. “cyberwar”. The assets in question include critical national infrastructure, and sensitive intelligence.
As an alternative nomenclature to sex, money, and power, one might think of Fame, Fortune, and Foreign Affairs.

The motivations of sex, money, and power can be mapped against another categorization, that of the asset context. In increasing order of scale, the contexts are the personal, the corporate, and the national (aka social, commercial, and political). (However, note that global corporations actually operate at both a national and transnational scale.)

With these two categorizations, one can then plot topics on a handy grid (apologies about formatting; I haven't grokked how to import tables into blogger):


Assets, by context

Personal: Reputation; Money, goods; Personal safety
Corporate: Reputation, brand; Intellectual property; Tangible assets; Employee & customer safety; Business continuity
National: Critical infrastructure; Intelligence; State assets, incl. military; Political power structures; National wealth

Threats (by motive for attack), by context

Sex (aka fame)
Personal: Privacy; Harassment; Defamation
Corporate: Brand hijacking, web defacement
National: Embarrassment

Money (aka fortune)
Personal: Fraud; Identity theft; Botnet recruiting
Corporate: Theft of goods; Appropriation of know-how; Diverted compute capacity; Extortion
National: Advantage national champions; Create non-tariff barriers

Power (aka foreign affairs)
Personal: Suppression of speech, access
Corporate: Appropriation of IPR; Reduce ability to compete
National: Intelligence gathering; Degrading infrastructure & assets; Demoralizing populations

A few notes:

Threats to assets come in various flavors, notably appropriation, destruction, and constraint of use

Trade occurs both within and between columns, that is, between individual people and between individual companies, as well as between people and companies. Likewise, at a different resolution scale, between nations.

It helps to distinguish the What from the How. Security doesn’t appear explicitly in the table, and neither does digital identity; both are means to an end (the “how”) of protecting assets (the “what”). Other means (they do overlap) include encryption, digital rights management, and norms, rules and treaties.

Friday, September 18, 2009

Resource management, spectrum, and cyborg fish

One runs out of spectrum the way one runs out of patience. While there is a supply of it, metaphorically speaking, in practice it’s a matter of the behavior of agents (radios) in a given context – just as patience is a matter of how one person, with certain proclivities, reacts to the actions of another. The outcome depends as much on the attributes of the agents as on the details of the context, and both are constantly changing.

“Spectrum” is a useful fiction that is used to talk about the management of radio operation. As I argued in De-situating Spectrum: Rethinking Radio Policy Using Non-Spatial Metaphors, this usage is grounded in the spectrum-as-space metaphor, and in particular the spectrum-as-land variant. However, there are other viable metaphors for radio operation, such as thinking about radio regulation as similar to trademark: an infringing trademark that confuses a customer is like one radio that causes interference in another. A radio license is like a trademark; it enables one to prevent harmful interference by others.

However, the spectrum-as-land metaphor is pervasive. It appears in debates about how best to “manage spectrum”. Opponents of unlicensed allocations, for example, claim that they lead to a tragedy of the commons. However, there is no tragedy of the commons with Wi-Fi, because there is no commons; that is, there is no underlying, essential resource that bears any resemblance to land in Hardin’s classic analysis, or as far as I can see, to what either lawyers or economists mean by “a resource” (assuming you can pin them down long enough to extract a definition). One can measure land in acres, and sheep in heads. A moment’s thought will confirm that there is no agreed way to measure “spectrum”: it’s not MHz, not MHz/m^2, not MHz.m^2, not watt/MHz or watt.MHz or watt.MHz/m^2 or watt.MHz/bps.m^2 or … And this doesn’t even begin to address the role of receivers.

I suspect that the rise of the spectrum metaphor is related to the entry of economics into radio regulation. In order for (neoclassical) economics to get purchase on a topic, it must have (or create) a thing to be traded; and since economics is the allocation of resources under scarcity, this thing must be a scarce resource. Just introducing radio licenses doesn’t seem to have been sufficient to operationalize radio operation in economic terms; there needs to be something which is licensed, or to which access is licensed. “Spectrum” fits the bill; it’s reasonably stable, and given the spectrum-as-land metaphor, sufficiently thinglike.

An interesting analogy is the creation of individual transferable quotas (ITQs) detailed in Petter Holm’s “Which Way is Up on Callon?” in Do Economists Make Markets? (2007), edited by Donald MacKenzie, Fabian Muniesa, and Lucia Siu. According to Holm, the ITQ model is an invention of neoclassical economics whose adoption was facilitated by the rise of fisheries resource management. As he tells it, it’s a complicated story “about the construction and stabilization of a heterogeneous network, tying the fish in with fishermen, echo integrators, log books, legislation, computers, bureaucracies, mathematical formulas, and surveillance procedures. It is within such a network that the fish-as-fit-for-management springs to life, as a true cyborg: part nature, part text, part computer, part symbol, part human, part political machine.” [1] [2]

The analogy to “spectrum” is pretty clear – the development of spectrum analyzers, the measurement program exemplified by the “spectrum occupancy maps” produced by SSC, NAF and others, and the stock analyst’s $/MHz.POP metric for analyzing auction results serve to construct “spectrum” as a manageable object of governance. [3]

I haven’t nailed down when spectrum entered the regulatory lexicon. It’s not used at all in the 1912 Radio Act, which mentions only “wave lengths”. The 1934 Act only refers to frequencies. However, Coase’s “Federal Communications Commission” paper (1959) cites Mr. Justice Frankfurter’s 1943 opinion in National Broadcasting Co. v. United States (319 U.S. 190), where spectrum-as-space seems well established: “the radio spectrum simply is not large enough to accommodate everybody.” Frankfurter also articulates the resource notion underpinning subsequent development of the spectrum concept: “The facilities of radio are limited and therefore precious; they cannot be left to wasteful use without detriment to the public interest.” Spectrum-as-a-spatial-resource was therefore well established long before Coase wrote; it evidently didn’t flow from the Coasian economization of radio, but I believe they reinforced each other. [4]

One can rightly ask why, given that the spectrum-as-land notion was well established by the early Forties, and Coase advanced the property rights case in 1960, it took another 30 years for the “propertization of spectrum” to be realized. I suspect that RF spectrum analyzers and broadband radio technology converged to bring about the “auction moment” in the 90s. It took time for both the analytical (game theory, law & economics) and engineering (real-time RF spectrum analyzers [5], commodity spread spectrum technologies) concepts to mature. They’re now well entrenched – and my quest to undermine the primacy of spectrum as an object of governance may be quixotic.

Notes

[1] Holm’s analysis is grounded in Callon and Latour’s Actor-Network Theory, which has been used to make the argument that economists “perform” the economy, that is, that economic models not only describe economic reality, but in part constitute it.

[2] Amusingly, it took a couple of trenchant critics of Callon to show how internecine politics in academic economics shaped the FCC’s spectrum auctions (Mirowski, P. and Nik-Kah, E. (2007) “Markets Made Flesh: Callon, Performativity, and a Crisis in Science Studies, Augmented with Consideration of the FCC Auctions,” in D. MacKenzie, F. Muniesa and L. Siu (eds.) Do Economists Make Markets? On the Performativity of Economics.)

[3] William Boyd introduced me to the notion of “objects of governance”.

[4] There seems to be tension in the law & economics community about this. Coase is a “property is a bundle of rights” man, a position decried in Thomas W. Merrill and Henry E. Smith, “What Happened to Property in Law and Economics?”, 111 Yale L.J. 357 (2001). Merrill & Smith advocate in rem rights that are referred back to things, as opposed to in personam rights that are arbitrary contractual arrangements. Thomas Hazlett has picked up on Merrill & Smith, and very much would like to treat spectrum as a thing which can be licensed unproblematically – but Coase got his rebuttal in at the start: “... what is being allocated by the Federal Communications Commission ... is the right to use a piece of equipment to transmit signals in a particular way. Once the question is looked at in this way, it is unnecessary to think in terms of ownership of frequencies or the ether.”

[5] Real-time spectrum analyzers for acoustic studies were available by 1957. Sri Welaratna’s "30 years of FFT Analyzers", in Sound and Vibration (January 1997, 30th anniversary issue) gives a historical review of hardware spectrum-analyzer devices; again, these are focused on vibration studies.

Tuesday, August 25, 2009

Spectrum nominalism

James Franklin’s discussion of nominalism vs. realism on the Philosopher’s Zone struck me as relevant to my obsession with “spectrum” as a concept.

“To be realist about some concept is to say that there is such a thing, and it's not just made up by us, whereas to be nominalist, is to say it's just a way of speaking of ours, from the Latin 'nomen' word, just an empty sign. So for example, in the case of forces I was arguing for realism about forces. When you felt them by pressing the fingers together, you would naturally conclude from that that there is such a thing as forces. On the other hand you'd never be tempted to do that with something like the average Londoner. Scientists tell you that the average Londoner has 2.3 children; you're not tempted to think that that's anything except a way of speaking, that there's some individual entity called the average Londoner, that has 2.3 children. So it would be natural to take a realist view of forces, but a nominalist view of the average Londoner. There's this question about all the entities talked about in science. A classic case is numbers, so that's a very difficult one. Are there such things out there as numbers or are they just a way of speaking about the divisibility of things into parts or something?”
I’m a nominalist about spectrum: I believe “spectrum” is just a way of speaking and does not have a referent in the world. Most people, on the other hand, seem to be knee-jerk realists – “Of course there's such a thing as spectrum!” – though often when you start digging they become nominalists: “Of course, I don’t just mean frequency, there are lots of other factors…”

This fine philosophical distinction matters: if one takes the realist position, one behaves as if there is a resource (spectrum) to be divided up and allocated, which leads ineluctably to radio licenses defined by hard frequency boundaries.

The nominalist perspective offers another way of thinking about the situation – for example, as coordinating the operation of radio systems. From a nominalist perspective the “coordination” view is just as (in)valid as the “resource” view, and radio licenses don’t have to be defined primarily in terms of frequency and geography.

(See also my earlier post “Newton, Leibnitz and the (non?)existence of spectrum”. There the distinction was between the “absolutist” position that time and space are real objects in themselves, and the “relationalist” position that they are merely orderings upon actual objects, and do not exist independently of the mind that is making the ordering.)

Saturday, August 15, 2009

How many poor people are there?

Scott Forbes linked me to a thought-provoking 2005 article titled "How not to count the poor" by Sanjay Reddy and Thomas Pogge at Columbia University (PDF). The bottom line is that the simple question of how many poor people there are in the world is surprisingly hard to answer.

Reddy & Pogge argue that "[t]he World Bank’s approach to estimating the extent, distribution and trend of global income poverty is neither meaningful nor reliable. The Bank uses an arbitrary international poverty line that is not adequately anchored in any specification of the real requirements of human beings. Moreover, it employs a concept of purchasing power "equivalence" that is neither well defined nor appropriate for poverty assessment. . . In addition, the Bank extrapolates incorrectly from limited data and thereby creates an appearance of precision that masks the high probable error of its estimates." Furthermore: "There is some reason to think that the distortion is in the direction of understating the extent of income poverty."

(A rebuttal by Mark Ravallion at the Bank can be found here.)

Their alternative: construct poverty lines in each country using a "common achievement interpretation". Such poverty lines would use the local costs of achieving universal, commonly specified ends like being adequately nourished. (Ravallion argues this is pretty much what countries already do to create national poverty lines.)

Reddy & Pogge argue that such poverty lines "would have a common meaning across space and time, offering a consistent framework for identifying the poor. As a result, they would permit of meaningful and consistent inter-country comparison and aggregation."

The catch seems to be that such an approach requires one "to carry out on a world scale an equivalent of the poverty measurement exercises conducted regularly by national governments, in which poverty lines that possess an explicit achievement interpretation are developed." This is difficult politically, since a common core conception of poverty will have to be agreed, and financially, since local poverty commissions in each country would have to be funded to construct and update poverty lines over time.

The authors don't claim that their metric would lead to substantially different, or better, policies. Better then, perhaps, to spend money on poverty-focused development assistance rather than improving the metrics. However, the Bank should be more honest about the flakiness of its numbers by at least not reporting them "with six-digit precision".

Saturday, August 01, 2009

Is it, or isn’t it?

Humans are inveterate classifiers. We can’t help ourselves, it seems: we just have to put things in hard-edged categories. Computing might help to blur the edges in a useful way.

An update on the Pluto controversy in New Scientist is a case in point. Discoveries of exoplanets and the anticipation of Earth-size objects in the Kuiper belt make the argument increasingly irrelevant, and yet even professional astronomers seem caught up in arguing for one definition of planethood or another.

Sensory systems like ours are complicated webs of classifiers: whether objects are moving or still, whether movements are animal-like or not, whether something is a face, whether a sound is speech or music, whether someone is a member of our group or not, and endlessly on. Categorization is innate and unavoidable.

But once embedded in culture, it can quickly spiral out into fraught territory. Problems arise because classification has consequences, often monetary, often political. Is that bond AAA or AA? Is that car a clunker or not? Is so-and-so in a special group, or not? Is that judge biased?

The difficulty arises because there are so many parameters that could be used for any classification; people argue about which parameters should count. Does roundness a planet make, or size, or not orbiting around another one, or having swept its orbit clear of other rocks? Cognitive limitations (the four-or-less rule, see e.g. Halford et al. (2005), “How many variables can humans process?” Psychological Science, 16, 70-76) mean that we end up picking a few criteria from the many – too few. And then we require that each criterion must yield a yes/no result, which even for hard science classifications can be contentious: what does it mean for a planet candidate to have “a nearly round shape”?

Computing can help by allowing many more criteria into the mix, and allowing them to vary continuously. This is an application of Edward Tufte’s design strategy “to clarify, add detail,” which he introduces in Envisioning Information (1990, p. 37) with the example of The Isometric Map of Midtown Manhattan. Human nature means we may be a little uneasy with the result, but perhaps we can learn to live with it; most people are comfortable nowadays with weather forecasts that say there’s a 50% chance of rain tomorrow (although many may not actually understand what it means ...).

Hiding the criteria has its own dangers. As Bowker and Star argue in Sorting things out: classification and its consequences (1999), any classification encodes a world view, and even “simple” classification systems succeed in making themselves invisible.

Still, with a little more computing we could, in response to the question “Is it, or isn’t it?” answer in a rigorous way, “Ish.” Computers can handle composing dozens or hundreds of continuous criteria in ways our (conscious) brains cannot.
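As a toy sketch of what answering “Ish” might look like – the criteria, weights, and sample values below are illustrative assumptions, not IAU definitions or data – one can map each criterion onto a continuous score and compose them:

```python
# A toy sketch of answering "Ish": compose continuous criteria into a graded
# score instead of forcing each to a yes/no. Criteria, weights, and the
# sample values are illustrative assumptions, not IAU definitions or data.

def smooth_step(x, low, high):
    """Map x onto 0..1: 0 below `low`, 1 above `high`, linear in between."""
    if x <= low:
        return 0.0
    if x >= high:
        return 1.0
    return (x - low) / (high - low)

def planet_ish(roundness, mass_ratio_to_neighborhood, orbits_star):
    """A graded 'planet-ness' score in 0..1 from three continuous criteria."""
    weighted = [
        (0.3, smooth_step(roundness, 0.7, 0.95)),                # how round?
        (0.5, smooth_step(mass_ratio_to_neighborhood, 1, 1e4)),  # orbit swept clear?
        (0.2, 1.0 if orbits_star else 0.0),                      # orbits a star?
    ]
    return sum(w * s for w, s in weighted)

print(f"Pluto-ish body: {planet_ish(0.99, 0.08, True):.2f}")    # 0.50 -- "ish"
print(f"Earth-ish body: {planet_ish(0.999, 1.7e6, True):.2f}")  # 1.00 -- a planet
```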

Wednesday, July 29, 2009

Boggle: measuring how sunshine alters the moon's orbit

The July 11, 2009 issue of New Scientist ran a series of stories on why the moon still matters to astronomy, in celebration of the fortieth anniversary of the Apollo 11 landing. One of them, by Stuart Clark, described how reflectors left on the lunar surface are helping test Einstein's theory of gravity. Laser pulses bounced off these reflectors allow researchers to measure the distance between the earth and the moon to a precision of a few millimetres.

At such accuracy one has to factor in the effect of solar radiation pressure "which pushes the moon's entire orbit from its calculated path by about 4 millimetres."

Monday, July 27, 2009

No opponents, but few advocates: Refugee resettlement is like ending hunger

In another installment of her fascinating series on a refugee family resettling in America, Mary Wiltenburg analyzes the big picture in What it’s like to be a refugee in America (Christian Science Monitor, July 19, 2009).

The policy challenge is strikingly similar to the one of ending poverty and hunger in the world. America is remarkably generous (the US, Canada and Australia last year took in 92% of the world's resettled refugees), but the scope of the problem is tremendous: the US, for example, will resettle about 75,000 people, but 13.6 million others worldwide are living under or seeking UN protection. The American system is creaking: new arrivals received assistance for 24 months when the current system was installed thirty years ago in the Carter Administration, but that's down to a maximum of eight months today.

Wiltenburg's political analysis applies to hunger and poverty, too:

"Refugee resettlement is a tiny program in the grand scheme of Washington. It has no real opponents, but advocates all have higher priorities and the refugees themselves have no political clout. It’s widely agreed that the program’s funding is due for a radical increase [but] how any politician will weigh the moral and political costs against the financial one is still a question."

National security rationales are often used to lobby for international relief programs of all kinds, but the logic is usually tenuous. The true motivation is compassion and generosity, which is unfortunately antithetical to the competitive tussle over resources that is the essence of politics.

Wednesday, July 15, 2009

Factoid: America has < 5% of the world’s people but almost 25% of its prisoners

Source:"A nation of jailbirds," Lexington opinion column in the The Economist, 4th April 2009.
"It imprisons 756 people per 100,000 residents, a rate nearly five times the world average. About one in every 31 adults is either in prison or on parole. Black men have a one-in-three chance of being imprisoned at some point in their lives."
The first half of the story is a searing list of statistics on the brutality and ineffectiveness of the US prison system. But the point is that there's a politician who seems to have taken up this most unpopular of all issues in a democracy (after the rights of sex offenders): Sen. Jim Webb of Virginia.

Monday, June 15, 2009

The Radio Rain Gauge

It’s hard to manage what you don’t measure, and regulators have precious little data on radio operation. Information gathering by citizen enthusiasts could make a huge difference. Riffing on amateur weather stations, it’s easy to envisage a network of “radio rain gauges”.

The need

Information on radio operation is an important part of managing wireless systems. Individual licensees can do this reasonably well, but national regulators are flying blind. The case for better monitoring is clear. The 2002 FCC Spectrum Policy Task Force Report called for better information to more accurately characterize radio operations; Ofcom reviewed automatic monitoring systems in 2006 with a view to increasing concurrent operation where it is lacking, and policing unlawful operation. [1] There have been a number of influential measurement campaigns in recent years aimed at measuring “spectrum utilization” (I use quotes because neither term is well defined), notably by Shared Spectrum and CRFS. [2] The case for measurement has received a fillip in the US recently via a bill in the Senate, the “Radio Spectrum Inventory Act”, which requires the NTIA and FCC to inventory each radio band they manage, from 300 MHz to 3.5 GHz, every two years.

Precedents and models

Centralized measurement is limited by scope, budgets and politics. The obvious complement is a citizen network dedicated to continuous measurement, and there is a strong precedent: amateur weather stations. For example, someone in my neighborhood has put up a very impressive weather site. The Northwest Weather Network is an Internet-based group of private weather stations in Washington, Oregon, Idaho, and Montana that aggregates weather information on its web site. The Citizen Weather Observer Program (CWOP) is a private-public partnership with over 8,000 members world-wide that collects weather information contributed by citizens, and makes it available to weather services and homeland security.

There are also public/private collaborations in weather monitoring that could have analogs in the radio monitoring space. AWS Convergence Technologies, Inc. claims to have deployed 8,000 WeatherBug Tracking Stations and more than 1,000 cameras, primarily based at neighborhood schools and public safety facilities across the U.S. It says WeatherBug started in the education market by pioneering a program which installed professional-grade weather stations at schools and then networked them together; since 2002, WeatherBug’s application has come pre-installed on HP and Compaq computers, and Logitech peripherals.

Weather enthusiasts know about building and managing monitoring stations, and sharing data. Most of them probably don’t know a lot about radio – but the hams do. The American Radio Relay League, for example, is a nonprofit with 156,000 members that promotes interest in amateur radio communications and experimentation.

Building out the radio rain gauge network could provide new inspiration and motivation for both weather enthusiasts and radio amateurs.

This will only work if the equipment is cheap. Fortunately, the relentless improvement in computer technology means that mainstream PCs will soon be able to do sophisticated radio monitoring. The free GNU Radio software toolkit [3] has been available since 1998, and one can buy the add-on hardware required (one still needs a radio tuner) from Ettus starting at $700 for a basic kit. Most hobbyist software radio projects focus on building receivers rather than transmitters, and a radio rain gauge is an RF spectrum analyzer by another name – a straightforward “Hello World” application that’s included in most software-defined radio packages.
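The DSP heart of such a gauge really is simple. A sketch of the core computation – an averaged power spectrum – using simulated samples where a real gauge would read from an SDR front end:

```python
import numpy as np

def power_spectrum_db(samples, fft_size=1024):
    """Welch-style estimate: average the windowed, squared FFT magnitude
    over successive blocks of samples."""
    n_blocks = len(samples) // fft_size
    blocks = samples[: n_blocks * fft_size].reshape(n_blocks, fft_size)
    window = np.hanning(fft_size)
    psd = np.mean(np.abs(np.fft.fft(blocks * window, axis=1)) ** 2, axis=0)
    return 10 * np.log10(psd + 1e-12)

# Simulate a carrier in noise; a real gauge would get samples from hardware.
fs = 1e6                                  # pretend 1 MHz of digitized bandwidth
t = np.arange(2**16) / fs
samples = np.cos(2 * np.pi * 200e3 * t) + np.random.normal(0, 0.1, t.size)
spectrum = power_spectrum_db(samples)
print(f"Peak at bin {spectrum[:512].argmax()}")   # near bin 205, i.e. 200 kHz
```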

The Radio Rain Gauge Network

This leads to radio band observation running as a background task on thousands if not millions of personal computers. Screensavers have been using spare compute cycles for years to tackle tough academic and public-interest problems; for example, http://distributed.net/ started in April 1997, and SETI@Home was released to the public in May 1999. The infrastructure for collecting data from thousands of PCs is therefore already in place. [4]

A data aggregator will be an important component of such a network. It will also help to have standard formats and repositories so that anyone can get access to the data. This might be done by an academic institution (cf. BOINC’s role in the cycle-scavenging screen saver endeavor), or it could be end-user driven like the Citizen Weather Observer Program.
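By way of illustration only – no such standard exists, and every field name below is made up – a gauge’s report to an aggregator might be as simple as:

```python
import json
import time

# An entirely hypothetical report format: the radio analog of a CWOP
# weather observation. Every field name here is an invented assumption.
report = {
    "station_id": "rrg-seattle-0042",
    "timestamp": int(time.time()),
    "location": {"lat": 47.61, "lon": -122.33},
    "start_mhz": 470.0,            # bottom of the measured band
    "stop_mhz": 476.0,             # top of the measured band
    "bin_khz": 100,                # resolution bandwidth
    "power_dbm": [-101.2, -99.8, -62.4, -100.5],  # truncated for the example
}
print(json.dumps(report, indent=2))
```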

Caveats

It’s easy to imagine unease in some quarters about such initiatives. National security operations would prefer that their waveforms and operations not be open to scrutiny and analysis (cf. sensitive areas blurred out in Google Maps), and radio regulators (prompted by nervous incumbent operators) may try to limit the operations of software-defined radios. However, once the knee-jerk negativity has died down, the value of the data collected and the energy of enthusiasts will carry the day. Further, receive-only installations (like radio rain gauges, i.e. spectrum analyzers) are less scary because they do not transmit a signal that could interfere with other operations.

It is also true that the interpretation of the data gathered by radio rain gauges is tricky. Two frequently cited metrics are efficiency and occupancy. The FCC’s Spectrum Efficiency Working Group concluded in 2002 that “it is not possible, nor appropriate, to select a single, objective metric that could be used to compare efficiencies across different radio services” because “the difficulty in calculating some of these variables (for example, the capacity and number of users), and the assumptions behind these calculations, make measures of spectrum efficiency highly unreliable.” John MacDonald, reporting on a 2007 survey of spectrum utilization in Chicago, concluded that “there is a need for a better metric of spectrum utilization than spectrum occupancy which can lead to [erroneous] conclusions as [to] the availability of free spectrum.” Finally, a measure of the intensity of radio operation is not the same as its productivity. One desirable metric would be the aggregate value of information transmitted; however, it is impossible to compute it since there are so many incommensurable measures of value. In the end, though, the current lack of real-time, widely gathered data is so profound that any new information will improve decision making.
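One way to see the problem with occupancy: it is just the fraction of measurements above a detection threshold, and the resulting figure swings with the threshold chosen. A sketch with simulated data (the numbers are made up):

```python
import numpy as np

def occupancy(power_db, threshold_db):
    """Fraction of time-frequency bins whose power exceeds the threshold."""
    return float(np.mean(power_db > threshold_db))

# Simulated survey: noise floor near -100 dBm plus one busy channel.
rng = np.random.default_rng(1)
power_db = rng.normal(-100, 3, size=(1000, 256))  # time x frequency bins
power_db[:, 40:44] += 40                          # four bins of strong signal

for thr in (-97, -92, -87):
    print(f"threshold {thr} dBm: occupancy {occupancy(power_db, thr):.1%}")
# "Utilization" swings from ~17% to ~2% as the threshold moves just 10 dB.
```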

Finally, radio frequency engineering is a black art. Building and calibrating the “RF front end” that connects the digital world of the computer to the analog world of radio signals requires expertise that lies outside the software realm, and high sensitivity receivers with low noise and an ability to handle both strong and weak signals will be expensive. A class of “pro” equipment that consumer devices cannot match will remain.

References

[1] National regulators on the need for monitoring. In both cases, regulators framed the problem using the spectrum metaphor as measuring “spectrum use”. FCC Spectrum Policy Task Force (2002): Task Force Report; Report of the Spectrum Efficiency Working Group. Ofcom reports on automatic monitoring systems: Phase I (July 2006), Phase II (Dec 2006)

[2] Measurements. Spectrum occupancy measurements from January 2004 until August 2005 done by Shared Spectrum Corporation for the National Science Foundation (NSF) under subcontract to the University of Kansas. Ofcom (2009) “Capture of Spectrum Utilisation Information Using Moving Vehicles,” Report by CRFS, 30th March 2009.

[3] GNU Radio: introductory article by Eric Blossom; Wired story; Wikipedia; GNU Radio project documentation, more.

[4] Worldwide distributed computing: List of projects; survey article in Science.

Saturday, May 23, 2009

Factoid: 78% of bottom-quartile employees don't have employer-provided health coverage

-- From a McKinsey analysis reported in their Chart Focus Newsletter, May 2009. It reports "growing disparities in the percentage of employees at different income levels receiving employer-paid health benefits: only 22 percent of employees in the lowest income group (earning an average of $14,800 a year), but 56 percent, 81 percent, and 89 percent of those in the lower-middle, upper-middle, and top income groups, respectively."

Monday, May 18, 2009

Factoid: more Americans are killed every year by preventable medical mishaps than breast cancer or AIDS

From an Economist special report on health care and technology, April 18th, 2009

A report by the Institute of Medicine estimated that up to 100,000 Americans are killed each year by preventable mishaps such as wrong-side surgery, medication errors and hospital-acquired infections—a larger number than die from breast cancer or AIDS.

Flying Blind, The Economist, April 16th 2009

The story continues:

Sometimes such errors can be prevented without fancy technology. It helps to write “not this leg” on a patient’s left leg before surgery on his right leg. When Kaiser Permanente’s innovation laboratory looked into errors in medication dosage, it found that a lot of them were due to interruptions. Now nurses preparing complex medications wear “do not disturb” sashes, which has caused errors to drop noticeably. A striking study in the New England Journal of Medicine showed that surgical errors and complications fall by one-third if hospitals use a simple safety checklist before, during and after surgery.

Tuesday, May 12, 2009

Protection Payments: Licensed vs. unlicensed radio rights

I have just realized the blindingly obvious: the main value of a radio license is the right not to be interfered with, rather than the right to transmit.

This claim should be testable by establishing the degree to which protection against interference influences the prices of licenses sold at auction. I’m working with Johnny Chan at the University of Washington to generate some results in this area. It’s certainly true anecdotally; M2Z has argued that T-Mobile knew its AWS-1 license, which sat next to a band that would generate more interference, was worth less, and so paid less at auction.

Proponents of license auctions charge that unlicensed allocations mean that “people” (they’re thinking of Google and Microsoft) are getting something without paying for it. It’s true that users of unlicensed radios don’t pay for a license, and it’s also true that both regimes confer some permission to operate a radio.

But there’s a big difference: a licensee can stop others from interfering with their operation, whereas an unlicensed user not only may not interfere with licensees, but also has to accept interference from all comers.

The big difference between an exclusive-use license and an unlicensed regime is excludability rather than autonomy, to use terminology I defined in an earlier post (Protecting receivers vs. authorizing transmitters). (Regarding property, excludability means an owner can control what other people do, while autonomy allows an owner to act without hindrance.)

However, since radio licenses are defined in terms of transmission rights rather than receiver protections, what’s being sold is autonomy rather than exclusivity.

In practice there is a gamut of license types, with increasingly strong excludability rights: from unlicensed, to licensed by rule, to secondary licenses, and then primary licenses. The more excludability you get, the more a license should be worth. We’re planning to do regression analysis on US auction results to see if this is the case.
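The shape of that analysis might look like the following sketch; the data are fabricated and the variable names invented, standing in for real auction records:

```python
import numpy as np
import statsmodels.api as sm

# A sketch of the planned test: regress price per MHz-POP on the strength
# of excludability, with an obvious control. All data here are fabricated.
rng = np.random.default_rng(0)
n = 200

# Excludability coded ordinally: 0 = unlicensed, 1 = licensed by rule,
# 2 = secondary license, 3 = primary license.
excludability = rng.integers(0, 4, n)
freq_ghz = rng.uniform(0.6, 3.0, n)        # control: lower bands sell dearer
price_per_mhz_pop = (0.2 + 0.35 * excludability - 0.1 * freq_ghz
                     + rng.normal(0, 0.2, n))

X = sm.add_constant(np.column_stack([excludability, freq_ghz]))
fit = sm.OLS(price_per_mhz_pop, X).fit()
print(fit.params)  # a positive excludability coefficient supports the claim
```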

If, as we’re working to show, the main benefit of a radio license is protection from interference rather than the right to transmit, then current radio policy is misconceived in focusing on transmit rights rather than receiver protection rights. While it’s true that defining transmit rights implicitly defines the receiver rights (again, see Protecting receivers vs. authorizing transmitters), not making receiver rights explicit guarantees downstream conflict, as the M2Z/T-Mobile argument over AWS-3 has shown.

Monday, May 04, 2009

A view on the policy making stream

BusinessWeek writes that IBM is pushing “stream computing”, which is processing incoming information on the fly rather than putting it in a database first, and then mining it.

I’ve been trying to mine information in the FCC’s Electronic Comment Filing System (ECFS), the repository for all interactions that petitioners have with the agency. The BW story got me thinking about what one could do if updates to ECFS were easily accessible on the fly, along the lines of the proposal by Ed Felten and colleagues that government should expose underlying data rather than creating portals.

One could do a lot with just the metadata, that is, cover information on who submitted a document to ECFS. A little extra processing to, say, extract information from the filed documents on all the people present in a meeting, would add a great deal of value. One could also extract information about what topics are being discussed by doing semantic analysis of comments and reports of meetings between petitioners and the agency.

Some things researchers (not to mention commercial information providers) could do with this kind of intelligence:

Track the ebb and flow of meetings related to a particular proceeding

Be notified when a specific company, company in a coalition, etc. reports a meeting with the agency, and see it in the context of other meetings by opponents and allies

Put a watch on the meetings in a particular bureau of the agency

Track the personalities – who’s meeting with whom, who hasn’t been seen lately, who seems to be a rising star. I’ve been told that John de Figueiredo predicted the importance of William Kennard before he was tapped for the FCC by noticing that he was in a lot of key lobbying meetings. (Caveat: I may have misremembered the characters in this anecdote. Please correct me if you know better...)

Given time series information one could develop leading indicators for when a proceeding was heating up, or when something big was brewing.
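As a sketch of the kind of indicator this enables – assuming a hypothetical feed of weekly filing counts per docket, which ECFS does not provide today – consider:

```python
from collections import defaultdict, deque

# A toy "heating up" detector for a stream of ECFS filing metadata.
# The feed and its fields are assumptions, not an existing FCC API.
class DocketWatcher:
    def __init__(self, window=8, factor=3.0):
        self.history = defaultdict(lambda: deque(maxlen=window))
        self.factor = factor  # alert when rate exceeds factor x trailing mean

    def add_week(self, docket, filings):
        hist = self.history[docket]
        baseline = sum(hist) / len(hist) if hist else None
        if baseline and filings > self.factor * baseline:
            print(f"Docket {docket} heating up: {filings} filings this week "
                  f"vs. a trailing average of {baseline:.1f}")
        hist.append(filings)

watcher = DocketWatcher()
for count in [3, 4, 2, 3, 5, 21]:          # toy weekly filing counts
    watcher.add_week("04-186", count)      # e.g. the TV white spaces docket
# -> Docket 04-186 heating up: 21 filings this week vs. a trailing average of 3.4
```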

Of course, the Garbage In, Garbage Out Rule applies; if petitioners file late, or misrepresent their interactions (i.e. lie), all the stream computing in the world will be for naught. We may need a suggestion I heard from Bob Pepper, a former FCC staffer now at Cisco: make petitioners warrant that their submissions are true, on penalty of perjury. Curiously, there is apparently no requirement for petitioners to tell the truth, and no penalties if they lie.