Tuesday, August 09, 2011

The dark side of whitespace databases

Back in May 2009 I drafted a blog post about the unintended side-effects of regulating unlicensed radios using databases. I was in the thick of the TV whitespace proceeding (on the side of the proponents), and decided not to post it since it might have muddied the waters for my client.

Databases have become the Great White Hope of “dynamic spectrum access” over the last two-plus years. They are seen not only as a way to compensate for the weaknesses of “spectrum sensing” solutions, but also as a way for regulators to change the rules quickly and for unlicensed devices to work together more efficiently. For quick background, see: FCC names nine white-space database providers, FierceWireless, Jan 2011; Michael Calabrese, “Ending Spectrum Scarcity: Building on the TV Bands Database to Access Unused Public Airwaves,” New America Foundation, Wireless Future Working Paper #25 (June 2009).

Looking back at my note, I think it’s still valid. Rather than rewrite it, I’ve decided simply to post it here as originally drafted (omitting a couple of introductory paragraphs).



The benefits

I expect the database approach to be a big hit with regulators because it is a powerful technique that seems to solve pressing problems. The fairy dust of “cloud computing” and “smart radios” also helps. However, in practice it may prove to be rather less of a panacea than expected.

The attraction of the database approach boils down to control. If the regulator finds that a rule doesn’t work well – for example, the exclusion zone around some protected service is the wrong size – it can be changed literally overnight by changing how the database responds to requests for permission to operate. Depending on the information supplied to and/or provided by the database, a regulator could reduce a maximum allowed power level or deny permission for a device to operate. In the past, rule changes could take years to implement: once devices were deployed, their behavior was fixed, so rule changes could only apply to new devices, and the new ones would only gradually replace the old.

Consequently, regulators don’t have to start with the most conservative assumptions about the operating parameters needed to protect incumbent services like TV. They can start with more reasonable values, and change them later if a problem appears.
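
As a concrete illustration, here is a minimal sketch of the mechanism; the field and parameter names are hypothetical, not any actual whitespace database protocol:

```python
# A minimal sketch (hypothetical fields, not a real whitespace protocol):
# operating limits live in the regulator's database, so editing one row
# changes the behavior of every deployed device at its next query, with
# no change to the devices themselves.

# Server-side table the regulator can edit at any time.
CHANNEL_RULES = {
    21: {"available": True,  "max_eirp_dbm": 36.0, "exclusion_km": 14.4},
    22: {"available": False, "max_eirp_dbm": 0.0,  "exclusion_km": 0.0},
}

def authorize(device_id: str, lat: float, lon: float, channel: int) -> dict:
    """Answer a device's request for permission to operate.

    A compliant device transmits nothing until it gets this response, so
    the regulator's current parameters, not values baked into firmware,
    govern its behavior. (A real database would also check lat/lon
    against exclusion zones around protected services.)
    """
    rule = CHANNEL_RULES.get(channel)
    if rule is None or not rule["available"]:
        return {"permitted": False}
    return {"permitted": True, "max_eirp_dbm": rule["max_eirp_dbm"]}

print(authorize("SN-0001", 47.61, -122.33, 21))  # permitted at 36 dBm

# A rule change "overnight": every device picks this up on its next query.
CHANNEL_RULES[21]["max_eirp_dbm"] = 30.0
print(authorize("SN-0001", 47.61, -122.33, 21))  # now permitted at 30 dBm
```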

The limitations

(i) Commercial uncertainty

This all sounds like good news for innovators, too, since the most restrictive assumptions might lead to rules so tough that usable devices could not be built. On the other hand, a regulator can just as easily change its mind about the rules, which could cripple an industry overnight if devices are no longer usable.

(ii) Unequal burdens

Televisions can be adversely affected by bursts of electromagnetic radiation when home appliances switch on. These old, dumb devices can’t be controlled, but the smart new ones can; the new generation of smart devices will therefore paradoxically bear a heavier regulatory burden than the old ones. Before the days of fine-grained control promised by database lookup, all participants had to accept that protection against interference isn’t absolute: there has always been spurious interference from electrical appliances switching on and off, but it has been ignored because there was no way of tracking and managing it. Striving for perfection in the protection afforded by databases will be a disincentive for device innovation.

(iii) Fragile complexity

A suitable database would also allow a regulator to prescribe device behavior for an unlimited number of edge cases. Since it’s simply a “small matter of programming”, there’s no limit to the sophistication of the possible rules. In practice, of course, there is a heavy price to pay for each increment in sophistication, and the price increases exponentially: as rules multiply, so do the opportunities for unexpected rule conflicts, to say nothing of software bugs. The old rule of thumb in software design applies: “Just because you can, doesn’t mean you should.” (For more on applying the lessons of managing complex ecosystems to internet policy, see my “The Resilience Principles: A Framework for New ICT Governance,” 9 J. on Telecomm. and High Tech. L. 137.)
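
A toy sketch, with invented rules purely for illustration, shows how quickly this bites:

```python
# Two independently reasonable (and entirely hypothetical) rules that
# conflict: each adjusts the permitted power, and the final answer
# depends on the order in which they are applied.

def cap_near_tv_contour(power_dbm: float, km_to_contour: float) -> float:
    """Rule added to protect TV reception: hard cap close to the contour."""
    return min(power_dbm, 16.0) if km_to_contour < 10.0 else power_dbm

def rural_boost(power_dbm: float, is_rural: bool) -> float:
    """Rule added later to encourage rural broadband: a 6 dB allowance."""
    return power_dbm + 6.0 if is_rural else power_dbm

# A rural device 8 km from a TV contour, requesting 30 dBm:
a = rural_boost(cap_near_tv_contour(30.0, 8.0), True)   # -> 22.0 dBm
b = cap_near_tv_contour(rural_boost(30.0, True), 8.0)   # -> 16.0 dBm
print(a, b)  # two different "legal" limits from the same two rules
```

Neither rule is wrong on its own; the trouble appears only in their interaction, and every added rule multiplies the interactions that have to be tested.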

Changes to database parameters will come with clarion calls for “improving spectrum efficiency”. However, efficient systems are fragile ones. The set of parameters that generates maximum output from an engineering or economic system usually places it at the edge of chaos; collapse is only a small step away. Changing the rules too often or in too many ways will lead to unexpected and probably undesired outcomes.

(iv) Temptation to meddle

Database solutions start a regulator down the slippery slope of detailed management of operations in a band. That’s OK for a licensee that can extract rents from its efforts, but it goes against the hands-off philosophy being developed by regulators like the FCC in the US and Ofcom in the UK. For a regulator looking for a light-touch approach to unlicensed operation, overly sophisticated database techniques will be a tar baby.

Changing the parameters in a database amounts to a rule change. In some jurisdictions, this increased agency autonomy could amount to rule-making without due process.

(v) Privacy

Another selling point is that a database regime gives regulators the power to prohibit the operation of devices – whole categories, certain models, or even individual units – on the basis of identifiers that have to be provided before obtaining operating permission from the database. This is useful if some devices turn out to operate in unacceptable ways. However, this benefit has to be set against the risks of collecting detailed data. Information on where and how device classes are operating would allow a database operator to aggregate valuable competitive intelligence, and queries that require devices to provide unique serial numbers and locations enable fine-grained tracking with privacy implications.
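
A small sketch, with hypothetical query fields, shows how much a database operator’s query log reveals:

```python
# The privacy concern in miniature (hypothetical fields and data): if each
# permission request carries a unique serial number and a location, the
# query log is, in effect, a movement track for every device and a
# market-share map for every manufacturer.

from collections import defaultdict

query_log = [
    # (serial_number, model, lat, lon, timestamp)
    ("SN-0001", "AcmeLink-100", 47.61, -122.33, "2009-05-01T08:00Z"),
    ("SN-0001", "AcmeLink-100", 47.66, -122.38, "2009-05-01T18:30Z"),
    ("SN-0042", "BetaNet-7",    40.71,  -74.01, "2009-05-01T09:15Z"),
]

# Fine-grained tracking: every location a given device has reported.
track = [(lat, lon, ts) for sn, _, lat, lon, ts in query_log if sn == "SN-0001"]

# Competitive intelligence: deployed units aggregated by model.
fleet = defaultdict(set)
for sn, model, _, _, _ in query_log:
    fleet[model].add(sn)

print(track)
print({model: len(units) for model, units in fleet.items()})
```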

Remedies

When a database is intended to support on-the-fly adjustments to operating rules (not the case with the proposed white space rules in the US, but it could happen in the UK), it will be important to establish principles to guide behavior before the database comes into operation.

Since adding complexity can lead to unexpected interactions between database rules, a riff on Einstein’s famous maxim applies: database rules should be as complicated as necessary, but no more so.

The regulator should give innovators the certainty they need to make upfront technology development investments by establishing floors and ceilings on parameter values.
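
One hypothetical way to provide that certainty is to commit to the bounds in advance, as in this sketch (invented parameter names and values):

```python
# A sketch of pre-committed floors and ceilings: the regulator may tune
# parameters inside these bounds, but a device designer can engineer
# against the floor as a guaranteed worst case.

PARAM_BOUNDS = {
    # parameter: (floor, ceiling), committed in advance by the regulator
    "max_eirp_dbm": (20.0, 36.0),   # allowed power never drops below 20 dBm
    "exclusion_km": (4.0, 20.0),    # keep-out zones never exceed 20 km
}

def set_parameter(params: dict, name: str, value: float) -> None:
    """Apply a rule change, rejecting values outside the committed bounds."""
    floor, ceiling = PARAM_BOUNDS[name]
    if not floor <= value <= ceiling:
        raise ValueError(f"{name}={value} outside committed range [{floor}, {ceiling}]")
    params[name] = value

current = {"max_eirp_dbm": 36.0, "exclusion_km": 14.4}
set_parameter(current, "max_eirp_dbm", 30.0)   # fine: within bounds
# set_parameter(current, "max_eirp_dbm", 10.0) # rejected: below the floor
```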

When choosing parameters, robustness of operation should be weighed alongside efficient use of radio resources.

Operating parameters should be changed with care and after due process. The procedure for deciding whether and how to change parameters should be established in advance. Changes to database parameters should not be driven by anecdote except in egregious cases.

There should be a high hurdle to adding new rules, and the hurdle should get higher as more rules are added. Every new “feature” of a database regime, such as a clever way to protect against some newly discovered, rare interference case, carries a hidden cost of increased system fragility.
