Friday, December 30, 2011
From spectrum efficiency metrics to parameter spaces
In my post FCC white paper shows that “spectrum efficiency” is meaningless, I argued that spectrum efficiency metrics are not very helpful.
They won’t go away, though, because engineers and economists instinctively characterize systems numerically. Both tribes strive to separate a problem into smaller independent parts, each described quantitatively; metrics are just a symptom. The goal is to convert a complex mess into a problem amenable to objective analysis, yielding an incontrovertible answer. No more messy politics!
Since politicians always look for cover behind engineers and economists, simplistic metrics will always be with us – not least in radio regulation. Given that reality, I’m going to dig into spectrum metrics a little more. I conclude that it could be more productive to define a series of axes in a parameter space than a single metric.
"in this world, there is one awful thing, and that is that everyone has their reasons" --- attrib. to Jean Renoir (details in the Quotes blog.)
Friday, December 30, 2011
Tuesday, December 27, 2011
Stamps and Stewards: A third way to regulate radio operation
Radio operation to date has largely been regulated in two ways. The dominant approach has been licensing station operators, whether they’re amateurs, TV broadcasters, or companies operating cellular systems. In the last twenty years or so, device licensing (aka unlicensed in the US, and license exemption in Europe) has also become widely used: if a device has been certified to meet regulatory requirements, anyone can operate a “station” using it without needing a license. [1] In these two approaches, the regulation controls either the system operator (for licensed), or the device manufacturer (for unlicensed).
I’m exploring another way, where the regulator accredits a limited number of “stampholders” who can each issue an unlimited number of “stamps.” One can see these stampholders as the designated stewards of a "spectrum commons," and the stamps as the mechanism they use for controlling access to a common pool resource. A device may only be sold if it bears the requisite stamp or seal, in addition to any other statutory requirements such as Part 15 certification. Control is exercised at the point of sale through labeling or marks.
This note builds on the previous posts Licensing radio receivers (Aug 2011) and Licensed Unlicensed (Sep 2011). I learned long ago that if I can think of something, someone’s already done it. However, I haven’t found good precedents yet, and I’m still looking for canonical examples or ringing metaphors. Stamps (in the sense of signet rings and seals) and Stewards is the best analogy I’ve found so far. [2]
Follow-up: In Markets for adjusting interference rights (May 2012) I explore another way of negotiating adjustments to boundaries (e.g. power levels) between unlicensed bands and their neighboring bands, given the collective action challenges faced by unlicensed operators.
Monday, December 05, 2011
Spectrum utilization and a Buddhist perspective on space
The “Spectrum as Space” metaphor implies that spectrum is a neutral container that can be filled with radio signals, leading to naïve notions of utilization such as empty and full spectrum bands. “Spectrum” is imagined as a collection of axes which mark out an abstract space, such as frequency, geography, and time (e.g. Robert Matheson’s “electrospace” concept, cf. Matheson & Morris 2011, The Technical Basis for Spectrum Rights: Policies to Enhance Market Efficiency).
However, that’s not the only way to look at it. Non-spatial models such as “Wireless as Trademark” work just as well (see my 2008 paper De-Situating Spectrum: Rethinking Radio Policy Using Non-Spatial Metaphors): by analogy, a trademark stands for both a part of the wireless resource (customarily, frequency band x geographic region x time slot) and signals. The wireless resource is all possible radio operations. In such an approach, one is much less likely to ignore the importance of receivers than in spectrum-as-space, where only transmitters can “fill the space” with signals. Any radio operation includes the use of a receiver, and that receiver-transmitter pair influences what transmissions are possible by third parties.
Curiously, I found a relevant perspective on this problem in a book on ethics – Stephen Batchelor’s Living with the Devil. He writes:
One tends to think of space in terms of physical extension and location. A body “occupies” or “fills” a space. For there to be “no more space” means that nothing more can be fitted into a room or a vehicle or a document. Outer space is that virtually infinite expanse speckled with galaxies and stars separated by inconceivable distances. “Inner space” suggests a formless expanse of mind in which thoughts, mental images, memories, and fantasies rise and pass away. Space seems to be the relatively permanent place where temporal events happen.
Buddhist philosophers see space differently. They define it as the “absence of resistance.” The space in a room is understood as the absence of anything that would prevent one moving around in it. To cross from one side of the room to the other is possible because nothing gets in your way. Rather than being the place where things happen, space is the absence of what prevents things from happening. The space in the room is nothing in itself; it is just the absence of chairs or tables, glass walls or hidden tripwires that would obstruct movement within it. In encountering no such resistance, we are able to move about freely. [In the footnotes, Batchelor ascribes this approach to the Geluk school of Tibetan Buddhism.]
The customary view that Batchelor outlines is “space as a set of dimensions” that informs the Spectrum as Space metaphor. One can transpose his summary to spectrum as “the relatively permanent place where [radio operations] happen.” The “Buddhist” view, on the other hand, would see spectrum as the absence of factors that would obstruct radio operations. Existing radio operations, including receivers, would provide resistance to new operations, even in quite distant frequency bands. And there is an interaction between the agent that wants to move about and the nature of obstructions: neither a mouse nor a monkey would have any trouble scurrying around in a restaurant, while a person would be obstructed by all the tables and chairs. Likewise, one has to first define the new operation one has in mind before deciding that spectrum is “occupied”; calculating utilization is not a straightforward matter of marking spectrum as “empty” or “full.”
Sunday, October 23, 2011
FCC white paper shows that “spectrum efficiency” is meaningless
The FCC Technical Advisory Council’s (TAC) draft white paper on spectrum efficiency metrics (25 September 2011) is an excellent piece of work. It is authoritative, instructive, and demonstrates decisively that spectrum [1] efficiency metrics are meaningless.
While they don’t say this in so many words, members of the Sharing Working Group perhaps intended this conclusion to be drawn; “spectrum efficiency” is a DC catchphrase that is hard to avoid, and it would probably be unwise to refute it overtly…
The following elements of the paper imply that the “spectrum efficiency” concept is useless:
- There is no metric that can be applied across the myriad of different wireless services.
- The metrics are incomplete, even within a service.
- While the paper suggests metrics for specific services, the taxonomy of services is arbitrary.
Consequently
- There is no way to compare the “efficiency” of one radio service (aka one “spectrum use”) to another, denying politicians the pseudo-scientific rationale they dream of for converting a frequency band allocation from one use to another.
- Even within a given service type, there is no defensible way to rate one deployment’s performance over another; even if one scored much lower using the relevant efficiency metric, its defenders could invoke any of the long list of “additional efficiency considerations” to deny that the comparison was valid.
The paper also misses an opportunity: It hints at the importance of cost effectiveness rather than mere efficiency, but doesn’t address this broader context.
Follow-up posts:
- From spectrum efficiency metrics to parameter spaces (December 2011)
- Three meanings of “spectrum efficiency” (October 2012)
Monday, October 17, 2011
The extent of FCC/NTIA frequency sharing
Take a guess: What percentage of US frequencies is controlled by the Federal government (represented by the NTIA), and what percentage is shared with non-Federal [1] users, who are under FCC jurisdiction? And what’s the remainder, devoted solely to non-Federal users?
My intuition, for what it's worth, was completely wrong. I thought the Fed/non-Fed split was roughly 50/50, with a bit (say 10%) being shared. As I pointed out in my recent post about partitioning Fed and non-Fed allocations, the amount of sharing should be easy enough to establish. It turns out that Peter Tenhula of Shared Spectrum Company has done a lot of work on this [2], and he pointed me to the FCC’s spectrum dashboard where one can download an XML snapshot of the allocation database (currently the API only covers the range 225-3700 MHz).
The answer? It depends on the frequency range and how one counts [3], but very roughly 10% is Federal, 40% shared, and 50% non-Federal (i.e. FCC) only.
Here's a picture (click it to enlarge); an Excel file with my analysis is here.
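To make the arithmetic reproducible, here is a minimal sketch of the kind of MHz-weighted tally behind these percentages. It assumes the allocation table has already been flattened into a list of bands tagged Federal, shared, or non-Federal; the rows below are made-up placeholders, not the actual dashboard data, and the field layout is my own rather than the dashboard's schema.

```python
# Minimal sketch: MHz-weighted shares of Federal / shared / non-Federal
# allocations. The rows below are illustrative placeholders, not the real
# allocation table downloaded from the FCC spectrum dashboard.

bands = [
    # (lower_MHz, upper_MHz, jurisdiction)
    (225.0, 400.0, "federal"),
    (400.0, 420.0, "shared"),
    (420.0, 450.0, "non-federal"),
    (450.0, 470.0, "non-federal"),
    (470.0, 512.0, "shared"),
]

total_mhz = sum(hi - lo for lo, hi, _ in bands)

shares = {}
for lo, hi, jurisdiction in bands:
    shares[jurisdiction] = shares.get(jurisdiction, 0.0) + (hi - lo)

for jurisdiction, mhz in sorted(shares.items()):
    print(f"{jurisdiction:12s} {mhz:7.1f} MHz  ({100 * mhz / total_mhz:.0f}%)")
```

Counting by number of allocations or weighting by population or area instead of raw megahertz would of course give different splits, which is part of the "how one counts" caveat above.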
Update, 21 October 2011: Perhaps the flaw didn't lie with my intuition, but with my interpretation of the data. A senior FCC person has pointed out to me that many supposedly "shared" allocations are, to all intents and purposes, controlled by Federal agencies, and non-Federal (i.e. FCC-managed) services are present only on sufferance, if at all (e.g. 220-2290 MHz); or "sharing" only occurs when both Federal and non-Federal entities use the same service (e.g. air traffic or maritime radar). So the question still stands, pending further digging: how much sharing (for various values of "sharing") is really going on?
Monday, October 10, 2011
Partition, not sharing: An alternative approach to the Fed/non-Fed spectrum divide
South Sudan. Serbia/Kosovo. India/Pakistan. Britney Spears and Kevin Federline. Sometimes a clean break is best for everyone, particularly when there are fundamental differences in mindset. Enforced coexistence is not, for many couples, the best way to live.
Sharing between Federal and non-Federal wireless users (aka Fed and non-Fed) is a favored way to realize the FCC’s dream of finding 500 MHz for commercial mobile broadband services; as reported in TheHill.com, “It is unclear where the 500 megahertz of spectrum will come from, but a large portion will likely come from government agencies that do not use the frequencies efficiently.”
Fed/non-Fed sharing can be made to work, and worthy efforts are being made. However, I doubt it’s worth the effort, given the insane difficulty of negotiating band re-allocations, let alone sharing agreements; questions over whether 500 MHz is, in fact, either needed or would make a dent on cellular companies’ problems; and fundamental concerns about jurisdiction (see my August 2011 post No Common Authority: Why spectrum sharing across the Fed/non-Fed boundary is a bad idea).
It would be a better use of time and effort to go in the opposite direction: make the partition between Federal and non-Federal as clean as possible, and let each group figure out sharing among its own constituents.
Thursday, September 29, 2011
Licensed Unlicensed: Having your Coase, and your Commons too
I lighted on the notion of issuing a handful of receiver licenses in allocations where transmitter licensees don’t control receivers (e.g. TV, GPS) to facilitate negotiations between operators in neighboring bands; details blogged here.
The same idea could be applied to unlicensed allocations, where the unbounded number of operators makes it essentially impossible for Coasian adjustments to be made: a neighbor that would like quieter unlicensed devices has nobody to make a deal with, nor do unlicensed users have an effective way to band together to make a deal if they’d like to increase their own transmit power. This approach also has the benefit, as in the receiver license case, of giving the regulator a tool for changing operating expectations over time, e.g. ratcheting down receiver protections or increasing receiver standards.
The catch-phrase “licensed unlicensed” is obviously a contradiction in terms; it’s shorthand for a regime where non-exclusive operating permissions are issued to a limited number of entities, while retaining the key characteristic that has made unlicensed successful: the ability of end users to choose for themselves what equipment to buy and deploy. These entities can use or sub-license these authorizations to build and/or sell devices to end-users.
Friday, September 02, 2011
TV white space databases: A bad idea for developing countries
Now that TV white space rulemakings are in the can in the US and UK, proponents will be pitching the technology to any government that’ll listen, e.g. at the Internet Governance Forum meeting to be held in Nairobi on 27-30 September.
It’s understandable: the more widespread white space database rules, the larger device volumes will be, and thus the lower the equipment cost, leading to wider adoption – a positive feedback loop. However, white space database technology is unnecessary in many countries, particularly developing ones.
Yet it verges on dodgy ethics for companies to hype this technology to countries that don’t need it, particularly since there’s a better solution: dedicating part of the TV frequencies that are freed as a result of the transition to digital TV (the “Digital Dividend”) to unlicensed operation, without the white space bells and whistles.
Monday, August 29, 2011
Spectrum “sharing”: the convenient ambiguity of an English verb
I realized while writing Spectrum Sharing: Not really sharing, and not just spectrum that my confusion over the meaning of spectrum sharing derives from two meanings of the English verb "to share":
(1) to divide and distribute in shares, to apportion;
(2) to use, experience or occupy with others, to have in common.
For example, the first is sharing a bag of peanuts, and the second is sharing a kitchen or an MP3 file. Cellular operators and economists tend to use the word with the first meaning, and Open Spectrum advocates with the second.
But that raises the question: is the double meaning inherent in the concept, or is it just an accident of English vocabulary?
I asked some friends about the regulatory terminology in other languages; so far I have information about Arabic, Chinese and German. If you could shed light on regulatory terminology in other languages, for example French, Japanese or Spanish, please get in touch.
Tuesday, August 23, 2011
Time limiting unlicensed authorizations
One could get around the problem and still have unlicensed use, though, by time limiting the unlicensed authorization. [1] Just like build-out conditions on licenses, there would be a fixed time window within which widespread deployment should occur. If it doesn’t, the authorization is revoked.
This approach seems particularly relevant when an authorization holds great promise, but that promise is very uncertain, e.g. when the technology or the market is changing rapidly. “Sunsets” on rules are important since the passage of time invariably invalidates the premises of regulation, even as it entrenches the interests that coalesce around those regulations. [2]
Wednesday, August 17, 2011
Licensing radio receivers as a way to facilitate negotiation about interference
It’s a curious fact that, while receivers are just as much responsible for breakdowns in radio operations as transmitters [a], regulation is aimed pretty much exclusively at transmitters [b].
Since one can’t ignore the receivers in practice, arguments over interference almost invariably turn to receiver standards. Even if receiver standards were a good idea (and I don’t think they are - see my post Receiver protection limits: a better way to manage interference than receiver standards), the ability to adjust receiver performance by fiat or negotiation is limited when receivers are operated independently of transmitters.
I suspect that receiver licenses may be necessary to reach the optimum outcome in at least some cases. This post is going to take that idea out for a first test drive.
Regulators evidently have managed without receiver licenses (beyond their use as a way to fund traditional broadcasting) so far. Why introduce them now? I’ll give my usual answer: the dramatically increased demand for wireless is squeezing radio operators of widely varying kinds together to an unprecedented extent, and we no longer have the luxury of the wide gaps that allowed regulators to ignore receiver performance, and ways of managing it.
Tuesday, August 09, 2011
The dark side of whitespace databases
Back in May 2009 I drafted a blog post about the unintended side-effects of regulating unlicensed radios using databases. I was in the thick of the TV whitespace proceeding (on the side of the proponents), and decided not to post it since it might have muddied the waters for my client.
Databases have become the Great White Hope of “dynamic spectrum access” over the last two-plus years. They are seen not only as a way to compensate for the weaknesses of “spectrum sensing” solutions, but as a way for regulators to change the rules quickly, and for unlicensed devices to work together more efficiently. For a quick background, see: FCC names nine white-space database providers, FierceWireless, Jan 2011; Michael Calabrese, “Ending Spectrum Scarcity: Building on the TV Bands Database to Access Unused Public Airwaves,” New America Foundation, Wireless Future Working Paper #25 (June 2009).
Looking back at my note, I think it's still valid. Rather than rewrite it, I’ve decided simply to repost it here as originally drafted (omitting a couple of introductory paragraphs).
Thursday, August 04, 2011
No Common Authority: Why spectrum sharing across the Fed/non-Fed boundary is a bad idea
The ISART conference this year was about sharing in the radar bands, in line with the Administration’s efforts to encourage frequency sharing between Federal and non-Federal (e.g. commercial and civilian) users (NTIA Fast Track Evaluation PDF, FCC proceeding ET docket 10-123).
While it’s true that the NTIA has studied the feasibility of reallocating Federal Government spectrum, or relocating Federal Government systems, the current political focus is on “spectrum sharing” (cf. my post Spectrum Sharing: Not really sharing, and not just spectrum) – and Federal/non-Federal sharing is the hardest possible problem.
Federal/non-Federal sharing is hard for many reasons, notably the chasm between the goals and incentives of the two groups, and thus a profound lack of trust. I’m going to focus here, though, on a seemingly technical but profound problem: the lack of a common authority that can resolve conflicts.
Wednesday, August 03, 2011
Spectrum Sharing: Not really sharing, and not just spectrum
There was endless talk about spectrum sharing at ISART in Boulder last week. I’ve become increasingly confused about what those words mean, since wireless has involved more than one radio system operating at the same time and place pretty much since the beginning.
For example, whitespace devices are said to share the UHF band with television, but the operating rules have been drawn up to ensure that whitespace devices never interfere with TV, i.e. never operate in the same place, channel and time. What’s “sharing” about that? The purpose of radio allocation from the start has been to avoid harmful interference between different radio operations, which has always been done by ensuring that two systems don’t operate in the same place, channel and time – such as two TV stations not interfering with each other.
It seems that the “new sharing” has three characteristics: (1) more boundaries (in geography, frequency and particularly time) than ever before; (2) the juxtaposition of different kinds of services that differ more from each other than they used to; and (3) sharing without central control. It’s a difference in degree, not in kind.
It’s not about sharing, since the goal is to avoid interference, i.e. to avoid sharing. It’s not about spectrum, i.e. radio frequencies, since non-interference is achieved not only by partitioning frequencies but also by dividing space, time, transmit power and the right to operate.
Sunday, June 26, 2011
The LightSquared Mess Shouldn’t Count Against Coase
It seems there’s a new meme floating around DC: I’ve been asked from both sides of the spectrum rights polemic whether the LightSquared/GPS situation proves that Coasian make-spectrum-property advocates are crazy because the rights seem to be pretty well defined in this case, and yet the argument drags on at the FCC rather than being resolved through market deals. I suspect the source is Harold Feld’s blog My Insanely Long Field Guide to Lightsquared v. The GPS Guys where he says:
For a spectrum wonk such as myself, it simply does not get better than this. I also get one more real world example where I say to all the “property is the answer to everything” guys: “Ha! You think property is so hot? The rights are clearly defined here. Where’s your precious Coasian solution now, smart guys?”
The “Coasian” position does have its problems (see below), but this isn’t an example of one of them. I think Harold’s premise is incorrect: the rights are NOT well-defined. While LightSquared’s transmission rights are clear, GPS’s right to protection – or equivalently, LightSquared’s obligation to protect GPS receivers from its transmissions – is entirely unclear. There’s no objective, predictable definition of the protection that’s required, just vague generalities built into statute (see e.g. Mike Marcus’s Harmful Interference: The Definitional Challenge).
LightSquared’s transmission permissions are in some sense meaningless, since “avoiding harmful interference” will always trump whatever transmit right they have, and there’s no way to know in advance what will constitute harmful interference. I believe that’s a fundamental problem with almost all radio rights definitions to date, and why I’ve proposed the Three Ps.
The “Coasian” position’s real important problems are on view elsewhere:
(1) While negotiations between cellular operators to shift cell boundaries show that transactions can succeed in special cases, there is no evidence yet that transaction costs for disputes between different kinds of service will be low, and thus that negotiations will succeed in the general case. Even if one can ensure that rights are well defined, it may prove politically impossible to reduce the number of negotiating parties to manageable levels since radio licenses are a cheap way for the government to distribute largesse to interest groups. This is most obvious in the case of unlicensed operation, but many licensed services such as public safety and rural communications also result in a myriad of licensees.
(2) The FCC’s ability and proclivity to jump in and change operating rules (i.e. licensees’ rights) in the middle of the game makes regulatory lobbying more efficient than market negotiation. This may be unavoidable given law and precedent. There is no way for today’s Commission to bind tomorrow’s Commission to a path of action; legislation is the only way to do that, and even statute is subject to change.
(3) A significant chunk of radio services aren’t amenable to market forces since they’re operated by government agencies that can’t put a monetary value on their operations, and/or can’t take money in exchange for adjusted rights. Nobody is willing to quantify the cost of a slightly increased risk that an emergency responder won’t be able to complete a call, or that a radar system won’t see a missile, even if those systems have a non-zero failure rate to begin with. And even if the Defense Department were willing to do a deal with a cellular company to enable cellular service somewhere, it can’t take the Cellco’s money; the dollars would flow to the Treasury, so there’s absolutely no incentive for the DoD (let alone the people who work for it) to come to some arrangement.
Wednesday, June 22, 2011
Protection Limits are not "Interference Temperature Redux"
My post Receiver Protection Limits may have left the impression that reception protection limits are similar to the dreaded and ill-fated interference temperature notion introduced in 2002 by the FCC’s Spectrum Policy Task Force.
Receiver protections are part of the "Three Ps" approach (Probabilistic reception Protections and transmission Permissions - see e.g. the earlier post How I Learned to Stop Worrying and Love Interference, or the full paper on SSRN). While both the Three Ps and Interference Temperature approaches share a desire to “shift the current method for assessing interference which is based on transmitter operations, to an approach that is based on the actual radiofrequency (RF) environment,” to quote from the first paragraph of the Interference Temperature NOI and NPRM (ET Docket No. 03-237), the Three Ps approach differs from Interference Temperature in four important ways:
1. The Three Ps focus on solving out-of-band, cross-channel interference, whereas Interference Temperature is concerned with in-band, co-channel operation.
2. The Three Ps are used to define new operating rights, whereas Interference Temperature tried to open up opportunities for additional operations in frequencies allocated to existing licensees.
3. The Three Ps do not grant second party rights, whereas Interference Temperature permits second party operation.
4. Three Ps rights are probabilistic, whereas Interference Temperature definitions are deterministic.
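To illustrate the probabilistic/deterministic distinction in item 4, here is a minimal sketch of how a probabilistic reception protection might be checked against measured or simulated interference levels. The quantile form and the particular numbers are placeholders of my own, not values taken from the Three Ps paper.

```python
# Sketch: checking a probabilistic protection of the form "the interfering
# signal at the protected receiver shall not exceed THRESHOLD_DBM in more
# than 5% of location/time samples". All numbers are illustrative.

import random

THRESHOLD_DBM = -85.0   # placeholder protection level
QUANTILE = 0.95         # the limit applies at the 95th percentile

# Stand-in for measured or simulated interference levels (dBm) at the
# protected receiver across many locations and times.
samples = sorted(random.gauss(-100.0, 6.0) for _ in range(10_000))

q_level = samples[int(QUANTILE * len(samples)) - 1]

# A deterministic limit (as in Interference Temperature) would instead
# require every single sample to stay below the threshold.
print(f"95th-percentile interference: {q_level:.1f} dBm")
print("within limit" if q_level <= THRESHOLD_DBM else "limit exceeded")
```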
Receiver protection limits: Two Analogies
I argued in Receiver protection limits that there are better ways to manage poor receivers causing cross-channel interference problems than specifying receiver standards. Here are two analogies to sharpen one’s intuition for the most appropriate way to handle such situations.
Cities increase the salinity of rivers running through them, affecting downstream agriculture. However, the choices that farmers make determine the degree of harm; some crops are much more salt-tolerant than others. In order to ensure that farms bear their part of the burden, regulators have a choice: they can either regulate which crops may be grown downstream, or they can specify a ceiling on the salinity of the water leaving the city limits, leaving it up to farmers to decide whether to plant salt-tolerant crops, perform desalination, or move their business elsewhere. Limits on salinity protection are a less interventionist solution, and don’t require regulators to have a deep understanding of the interaction between salinity, crops and local geography.
Sound pollution is another analogy to radio operation. Let’s imagine that the state has an interest in the noise levels inside houses near a freeway. It can either provide detailed regulations prescribing building set-backs and comprehensive specifications on how houses should be sound-proofed, or it could ensure that the noise level at the freeway-residential boundary won’t exceed a certain limit, leaving it up to home-owners to decide where and how to build. Again, noise ceilings are a simple and generic regulatory approach that does not limit the freedom of citizens to live as they choose, and that does not require the regulator to keep pace with ever-evolving technologies to sound-proof buildings.
Receiver protection limits: a better way to manage interference than receiver standards
Radio interference cannot simply be blamed on a transmitter; a service can also break down because a receiver should be able to, but does not, reject a signal transmitted on an adjacent channel.
The LightSquared vs. GPS bun fight is a good example of this “two to tango” situation. GPS receivers – some more so than others – are designed to receive energy way outside the allocated GPS bands, which means that operation by a new service like LightSquared in the adjacent band can cause satellite location services to fail. Without the LightSquared transmissions, there wouldn’t be a problem; but likewise, if GPS receivers were designed with the appropriate filters, they could reject the adjacent LightSquared transmissions while continuing to receive the satellite location signal and function normally. [1]
While the responsibility for interference is, in theory, shared between transmitters and receivers, radio regulation has traditionally placed the onus on a new transmitter to fix any problems that may arise. [2] As I will argue, receiver standards are an impractical response; limits on reception protection, formulated in terms of the RF environment rather than equipment performance, are preferable.
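As a back-of-envelope illustration of the "two to tango" point, the sketch below shows how reception failure depends jointly on the neighbor's signal level and the receiver's adjacent-band rejection. The numbers are illustrative round figures, not actual GPS or LightSquared parameters.

```python
# Sketch: does an adjacent-band signal break reception? Reception fails if
# the interference that leaks through the receiver's front-end filter pushes
# the wanted signal below the required signal-to-interference ratio.
# All values are illustrative, not actual GPS/LightSquared figures.

wanted_dbm = -130.0      # weak wanted signal (e.g. a satellite signal)
adjacent_dbm = -30.0     # strong adjacent-band signal at the receiver input
required_sir_db = 10.0   # minimum signal-to-interference ratio for service

def reception_ok(rejection_db: float) -> bool:
    """True if the receiver's adjacent-band rejection is sufficient."""
    leaked_dbm = adjacent_dbm - rejection_db
    return (wanted_dbm - leaked_dbm) >= required_sir_db

for rejection_db in (60.0, 90.0, 120.0):
    status = "OK" if reception_ok(rejection_db) else "fails"
    print(f"adjacent-band rejection {rejection_db:5.1f} dB -> reception {status}")
```

The same arithmetic can be read either way: as a receiver standard (mandate enough rejection) or as a protection limit (cap the interfering level a receiver is entitled to be protected against), which is the choice this post is about.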
More on this topic in subsequent posts:
Receiver protection limits: Two Analogies (June 2011)
Protection Limits are not "Interference Temperature Redux" (June 2011)
The LightSquared Mess Shouldn’t Count Against Coase (June 2011)
Licensing radio receivers as a way to facilitate negotiation about interference (August 2011)
Incremental management of reception: When protection limits are not sufficient (February 2012)
Four Concerns about Interference Limits (May 2012)
Transmitter versus receiver specifications: measuring loudness versus understanding (July 2012)
Testimony: Harm Claim Thresholds (November 2012)
Receiver Interference Tolerance: The Tent Analogy (November 2012)
I have also written a two-page summary document, see http://sdrv.ms/ReceiverLimits.
Tuesday, April 19, 2011
Too strategic to be true?
The cellular industry has been very vocal in calling on the FCC to allocate more spectrum licenses to satisfy the forecast demand for mobile data services. For two examples more or less at random, see this CTIA white paper, and the 4G Americas white paper “Sustaining the Mobile Miracle” (PDF).
On reflection, though, it strikes me as rather curious behavior for cut-throat competitors. More spectrum licenses won’t satisfy the insatiable demand for wireless data capacity any more than building highways reduces traffic congestion, and while it might make strategic sense, in the short term – and isn’t that all that really matters for listed companies, when all the rhetoric is said and done? – it means that the cellcos are giving up a wonderful opportunity to make money.
If the supply of spectrum licenses were fixed, and not increased by reallocation of other services to mobile wireless, then Economics 101 dictates that the price for wireless data would rise. (This is ignored in the forecasts; see e.g. my post Cisco’s Fascinating Flaky Forecast.) Operators wouldn’t incur the capital costs of lighting up new frequencies, and so their profits would rise – a lot!
On the other hand, if more cellular licenses were made available, the carriers would not only have to buy them at auction, but they would have to buy and install the infrastructure to use them. The price they could charge for wireless data service wouldn’t change much, and so their profits would go down, or at best stay flat.
All that said, though: these companies are much, much smarter business people than I am. I must be missing something. But what?
Perhaps this is all just a big CYA operation. When the inevitable demand crunch happens (with or without new cellular licenses, demand is set to outstrip supply), the operators will be able to blame the government: “Dear customer, it’s not our fault, we’ve been asking the government to help us give you the services you want, but they didn’t come through. We’re sorry, but all we can do to make sure that those who really need wireless services get them is to increase prices.”
Tuesday, March 01, 2011
“Quiet” doesn’t mean “unused”: The Downside of Under-defined Radio Rights
The FCC has promised to find and reallocate 500 MHz of radio frequencies to satisfy the burgeoning demand for high bandwidth mobile services such as video on cell phones. The idea, the hope, is that there are lots of unused bands to be repurposed. “Unused” is a tricky notion, though. I’ll take it to mean “radio quiet”: a radio energy detector doesn’t observe much if anything at certain frequencies, and the assumption is that a new service could transmit here.
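For concreteness, here is a minimal sketch of the "radio quiet" test just described: estimate a band's occupancy as the fraction of energy-detector samples above a detection threshold. The threshold, cutoff, and measurements are placeholders of my own; the point of the rest of this post is that passing this test does not make a band usable.

```python
# Sketch: labeling a band "quiet" from energy-detector measurements.
# A band is called quiet if the fraction of power samples above a detection
# threshold stays below an occupancy cutoff. All numbers are placeholders.

DETECTION_THRESHOLD_DBM = -95.0
OCCUPANCY_CUTOFF = 0.02          # "quiet" if under 2% of samples show energy

def occupancy(power_samples_dbm):
    above = sum(1 for p in power_samples_dbm if p > DETECTION_THRESHOLD_DBM)
    return above / len(power_samples_dbm)

# Stand-in measurements (dBm per sample) for two hypothetical bands
band_a = [-110.0, -108.5, -111.2, -109.9, -107.3]
band_b = [-60.0, -105.0, -58.3, -61.1, -59.4]

for name, samples in (("band A", band_a), ("band B", band_b)):
    occ = occupancy(samples)
    label = "quiet" if occ < OCCUPANCY_CUTOFF else "occupied"
    print(f"{name}: occupancy {occ:.0%} -> looks {label}")
```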
Of course, nothing is as simple as that. Let’s assume that the services that actually operate in these quiet bands – and there are always incumbents, since every frequency has one if not many nominal users – can be found a new home, and that they’ll relocate. The harder problem is that a quiet band may not in fact be usable because of the equipment in neighboring bands. The LightSquared/GPS argument is a conveniently current example. The proposal to allow LightSquared to deploy lots of ground-based transmitters in a band where to date only satellite transmissions were allowed has caused shock and outrage among GPS users who claim that their receivers cannot distinguish between the LightSquared signal in the adjacent band and the satellite location signals in the GPS channel.
Since the FCC’s rules and precedents provide almost unlimited protection against "harmful interference" (a notoriously vague term) caused by new services, an incumbent is pretty much assured that it will be held harmless against any change. The situation is exacerbated because FCC licenses only specify transmission parameters and say nothing about the radio interference environment that receivers should be able to cope with. Radio receivers are thus designed and built as if their radio environment will never change; if a band has been quiet, none of the receivers in the adjacent frequencies can cope with more intensive use, since building in that protection costs money. (For complementary perspectives on this problem, and suggested remedies, see two short papers presented at a recent conference in Washington, DC: Kwerel and Williams, De Vries and Sieh.)
Thus, just because a band is quiet doesn’t mean that it’s unoccupied; it’s probably effectively occupied by the protection afforded to the cheap receivers next door that haven’t been required to, and therefore don’t, tolerate any substantial operation in the quiet channel. It’s as if the incumbent were a householder whose property used to be bordered by a track along which only ox wagons passed. She didn’t have to take any precautions against her dogs being run over by a wagon, such as building a fence, and this unlimited protection still holds even when the track is turned into an arterial road, holding passing vehicles completely responsible if a dog is run over.
Money could, but might not, solve the problem. Let’s say Tom Transmitter wants to deploy a new service in the formerly quiet band, and that this would cost the incumbent neighbor, Rae Receiver, $300 million, either in lost revenue from diminished service or because of precautions such as new receiver filters that are needed to reject Tom’s adjacent band signals. If the benefit to Tom is big enough, if for example he could generate $500 million in profit, Tom could compensate Rae and still come out ahead. But how is the $200 million of potential gain ($500 million - $300 million) to be divided? This depends on Rae’s rights. If she has the right to prevent any operation by Tom (i.e. she can take out an injunction against him), she can demand essentially all his profits as a condition of operation – let’s say $499 million of his $500 million – whereas if she’s entitled to damages, she can only demand $300 million for actual losses. These are very different outcomes. Under an injunction, Tom’s incremental net profit is $1 million ($500 million - $499 million) and Rae’s is $199 million ($499 million - $300 million), whereas under damages, Tom’s net profit is $200 million and Rae’s is zero.
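The bargaining arithmetic above can be laid out as a small sketch, using the same figures (in $ millions); the 499 payment under the injunction regime is the "essentially all his profits" assumption from the example.

```python
# Sketch of the Tom/Rae bargaining arithmetic above (all figures $ millions).
# Under an injunction Rae can hold out for nearly all of Tom's gross profit;
# under damages she can only recover her actual losses.

tom_gross_profit = 500          # Tom's profit from the new service
rae_cost = 300                  # harm to Rae, the incumbent neighbor
surplus = tom_gross_profit - rae_cost        # 200: the gain to be divided

# Injunction regime: Rae demands essentially everything, e.g. 499.
injunction_payment = 499
tom_net_injunction = tom_gross_profit - injunction_payment   # 1
rae_net_injunction = injunction_payment - rae_cost           # 199

# Damages regime: Rae can only claim her actual losses.
damages_payment = rae_cost
tom_net_damages = tom_gross_profit - damages_payment         # 200
rae_net_damages = damages_payment - rae_cost                 # 0

print(f"surplus to divide: {surplus}")
print(f"injunction: Tom nets {tom_net_injunction}, Rae nets {rae_net_injunction}")
print(f"damages:    Tom nets {tom_net_damages}, Rae nets {rae_net_damages}")
```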
However, since the FCC doesn’t specify whether licenses are protected by damages or injunctions, Tom and Rae can’t begin to deal, since the legal basis of the negotiation is unclear. Tom will hope that he can get the whole $200 million incremental gain, and Rae will hope for it, too – a huge difference in expectations that will almost inevitably prevent the parties from coming to an agreement.
(There are further obstacles to reaching a settlement that I won’t go into here, such as uncertainty over what action by Tom actually constitutes damage to Rae due to ambiguity in the way FCC rules are currently formulated, and the freeloader/hold-out problems with negotiations involving many parties.)
What's to be done?
1. Any inventory of “unused” radio capacity should not only itemize radio quiet bands, but also the nature of the service and receivers next door, so that the cost of relocating, protecting or degrading the incumbent service can be estimated.
2. Any new licenses that are issued should specify whether they’re protected by injunctions or damages; this will facilitate negotiation.
3. Any new license should specify the receiver protection parameters the operator can rely on, and by implication what will not be protected.
4. Regulators should start retrofitting existing licenses to this new approach by specifying the remedy (#2) and laying out a timeline over which receiver protections (#3) will be dialed down from the current open-ended “no harmful interference” condition to more realistic and objective received energy levels.
The two page position paper I referenced above gives a quick introduction to these measures; for all the gory details, see the 15 page long version on SSRN.
Monday, February 21, 2011
Juggling Pipes: orchestrating scarce radio resources to serve multifarious applications
I concluded in Cisco’s Fascinating Flaky Forecast that the impending supply/demand mismatch in wireless data services presents opportunities for “innovations that improve effective throughput and the user experience”. This post explains one example: a software layer that matches up the various applications on a device with the most appropriate connectivity option available, mixing and matching apps and pipes to make the cheapest, fastest, or most energy-efficient connection. (In academic terms, it’s a version of Joe Mitola’s Cognitive Radio vision.)
Peter Haynes recently prompted me to ask some experts what they thought the most exciting wireless technology developments were likely to be in the next decade. Mostly the answer was More of The Same; a lot of work still has to be done to realize Mitola’s vision. The most striking response was from Milind Buddhikot at Bell Labs, who suggested that the wireless network as we know it today will disappear into a datacenter by 2020, which I take to mean that network elements will be virtualized.
I don’t know about the data center, but from a device perspective it reminded me of something that’s been clear for some time. A device’s connectivity options keep growing, from a single wired network jack to one or more cellular data connections, Wi-Fi, Bluetooth, UWB, ZigBee and so on. The diversity of applications and their needs keeps growing too, from a single email client to many apps with different requirements: asynchronous downloads, voice and video streams, data uploads. And choosing among the options keeps getting more complicated, with trade-offs between connectivity price, speed, connection quality, and energy usage. There is therefore a growing need for a layer that sits between all these components and orchestrates the connections. Can you say “multi-sided market”?
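As a toy illustration of the kind of trade-off such a layer would make, here is a sketch that scores each available pipe against an app’s preferences; the pipes, prices, and weights are all invented for the example:

```python
# Available "pipes" and their rough characteristics (all numbers invented).
pipes = {
    "congested_wifi": {"mbps": 1.5,  "price_per_gb": 0.0, "energy": 1.0},
    "lte":            {"mbps": 12.0, "price_per_gb": 5.0, "energy": 3.0},
}

def best_pipe(weights, pipes=pipes):
    """Pick the pipe with the best weighted score for this app.
    weights says how much the app cares about speed vs. cost vs. battery."""
    def score(p):
        return (weights["speed"] * p["mbps"]
                - weights["cost"] * p["price_per_gb"]
                - weights["energy"] * p["energy"])
    return max(pipes, key=lambda name: score(pipes[name]))

# A background email sync cares about cost and battery, not speed:
print(best_pipe({"speed": 0.1, "cost": 5.0, "energy": 2.0}))   # congested_wifi
# A video call cares mostly about speed and will pay for it:
print(best_pipe({"speed": 1.0, "cost": 0.5, "energy": 0.1}))   # lte
```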
The operating system is the obvious place to make such trade-offs. It sits between applications and peripherals, and already provides apps with abstractions of network connectivity. As far as I know, though, no OS provider has stepped up with a road map for “smart connectivity.” It’s decidedly not just the “smart radio” we’ve heard about with “white spaces”; the white space radio is just one of the many resources that need to be coordinated.
For example, one Wi-Fi card should be virtualized as multiple pipes, one for every app that wants to use it. Conversely, a Wi-Fi card and a 3G modem could be bonded into a single pipe should an application need additional burst capacity. And the OS should be able to swap out the physical connection associated with a logical pipe without the app having to know about it, e.g. when one walks out of a Wi-Fi hotspot and needs to switch to wide-area connectivity; the mobile phone companies are already doing this with Wi-Fi, though I don’t know how well it’s working.
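Here is a rough sketch, again with invented names rather than any real OS API, of the abstraction I have in mind: the app holds a logical pipe, and the OS can rebind it to a different physical link (or bond several) without the app noticing:

```python
class LogicalPipe:
    """A stable handle the app keeps; the OS decides what lies underneath."""
    def __init__(self, links):
        self._links = list(links)            # physical link(s) currently bound

    def send(self, data: bytes):
        # Round-robin over the bound links; a real scheduler would weigh
        # capacity, price, and energy per packet or per flow.
        link = self._links[0]
        self._links.append(self._links.pop(0))
        link.transmit(data)

    def rebind(self, links):
        # Called by the OS, e.g. when the user walks out of Wi-Fi range;
        # the app's handle never changes.
        self._links = list(links)

class FakeLink:
    def __init__(self, name):
        self.name = name
    def transmit(self, data):
        print(f"{len(data)} bytes via {self.name}")

pipe = LogicalPipe([FakeLink("wifi")])
pipe.send(b"hello")                                   # goes out over Wi-Fi
pipe.rebind([FakeLink("lte"), FakeLink("wifi")])      # bonded for extra burst capacity
pipe.send(b"world")                                   # now spread across LTE + Wi-Fi
```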
That said, the natural winner in this area isn’t clear. Microsoft should be the front-runner given its installed base on laptops, its deep relationships with silicon vendors, and its experience virtualizing hardware for the benefit of applications – but it doesn’t seem interested in this kind of innovation.
Google has an existential need to make connectivity to its servers as good as it could possibly be, and the success of Android in smartphones gives it a platform for shipping client code, and credibility in writing an OS. However, it is still early in developing expertise in managing an ecosystem of hardware vendors and app developers.
The network operators don’t have much end-user software expertise, but they won’t allow themselves to be commoditized without a fight, as they would be if a user’s software could choose moment-to-moment between AT&T’s and Verizon’s connectivity offers. The telcos have experience building and deploying connectivity management layers through orgs like 3GPP. Something like this could be built on IMS, but IMS is currently a network architecture rather than a device architecture. And the network operators are unlikely to deploy software that allows the user to roam to another provider’s data pipes.
The chipset and handset vendors are in a weaker position, since they compete so much amongst themselves for access to the telcos. Qualcomm seems to get it, as evidenced by its Gobi vision, which is several years old now: “With Gobi, the notebook computer becomes the unifying agent between the different high speed wireless networking technologies deployed around the world and that means freedom from having to locate hotspots, more choice in carrier networks, and, ultimately, freedom to Gobi where you want without fear of losing connectivity – your lifeline to your world.” As far as I can tell, though, it doesn’t go much beyond hardware and an API for supporting multiple 3G/4G service providers on one laptop.
Handset vendors like Samsung or HTC could make a go of it, but since network operators are very unlikely to pick a single hardware vendor, they will only be able to get an ecosystem up to scale if they collaborate on a standard. It’s more likely that they will line up behind the software giants when Google and/or Microsoft come forward with their solutions.
It is also possible that Cisco (or more likely, a start-up it acquires) will drive this functionality from the network layer, competing with or complementing app/pipe multiplexing software on individual devices. As Preston Marshall has outlined for cognitive radio,* future networks will adapt to user needs and organize themselves to respond to traffic flow and quality of service needs, using policy engines and cross-layer adaptation to manage multiple network structures. There is a perpetual tussle for control between the edge of the network and the center; smart communications modules will be just another installment.
* See Table 4 in Preston F Marshall, “Extending the Reach of Cognitive Radio,” Proceedings of the IEEE, vol. 97 no. 4 p. 612, April 2009
Saturday, February 12, 2011
Cisco’s Fascinating Flaky Forecast
Ed Thomas prompted me to have a look at Cisco’s recently published Visual Networking Index: Global Mobile Data Traffic Forecast Update, 2010–2015.
The numbers are staggering: global mobile data traffic grew 2.6-fold in 2010, nearly tripling for the third year in a row; mobile video traffic will exceed 50% for the first time in 2011; and Cisco predicts that global mobile data traffic will increase 26-fold between 2010 and 2015. Big numbers forecast by someone who’ll make money if they come true are always suspect, though. While the historical data are largely indisputable – and amazing – I think the forecasts are bogus, though in interesting ways.
Flags went up at the projection of 92% CAGR in mobile traffic growth over the next five years. From the scant details on assumptions provided in the report, I suspect the overall growth is driven (more than driven, in fact) by the growth in the number of users, not by increases in per-user usage. For example, Cisco predicts that the number of mobile-only Internet users will grow 25-fold between 2010 and 2015 to reach 788 million, over half of them in “Asia Pacific” (defined to exclude Japan).
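As a quick consistency check (my arithmetic, not Cisco’s), a 92% compound annual growth rate does indeed match the headline 26-fold increase over five years:

```python
cagr = 0.92
print((1 + cagr) ** 5)   # ~26.1x over 2010-2015, consistent with "26-fold"
```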
Working back from their forecast data volumes and assumptions on user growth, however, suggests that usage per user (I prefer to think in terms of Megabits/second rather than ExaBytes/month) doesn’t increase over the study period, and in fact declines.
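Here is the kind of back-of-envelope conversion I mean, using only the figures quoted in this post; spreading the North American total over the mobile-only user count is a big simplification (the report’s traffic covers all mobile devices), so treat the result as indicative at best:

```python
def avg_rate_kbps(exabytes_per_month, users):
    """Average sustained per-user rate implied by a monthly traffic volume."""
    bits_per_month = exabytes_per_month * 1e18 * 8    # EB/month -> bits/month
    seconds_per_month = 30 * 24 * 3600
    return bits_per_month / seconds_per_month / users / 1e3

# Cisco's 2015 North America forecast: 1.0 EB/month; 55.6 million mobile-only users.
print(f"{avg_rate_kbps(1.0, 55.6e6):.0f} kbit/s per user, averaged over the month")
# roughly 56 kbit/s - hardly a picture of exploding per-user usage
```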
The growth in traffic thus hinges on the global user base growing to almost 800 million mobile-only users in five years, from 14 million today. That’s staggering, and to me implausible.
If nothing else, though, this demonstrates that the claim that Cisco’s meganumbers imply an impending bandwidth crunch doesn’t hold water. That doesn’t mean there isn’t going to be a crunch, just that the growth numbers neither imply nor require one, because they’re in large part driven by hundreds of millions of new users in China.
A more fundamental flaw is that the analysis is entirely demand driven. This was probably fine when Cisco was predicting wireline use, since there is so much dark fiber that supply is essentially unlimited. However, one cannot ignore the scarcity of radio licenses. We’re near the Shannon limit on the number of bits/second that can be extracted from a Hertz of bandwidth, and massive new frequency allocations will not show up overnight. An alternative is to increase capacity by shrinking cell sizes and using smart antennas; however, such a build-out will take time. I don’t know how much extra traffic one can fit into the existing infrastructure and frequencies, but Cisco should at least have made an argument that this doesn’t matter, or that supply can ramp up as fast as demand.
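For a sense of why there is little headroom, here is a quick sanity check against the Shannon–Hartley ceiling; the SNR values are just typical of what a cellular handset might see, not measurements, and MIMO can multiply the single-link figure:

```python
import math

def spectral_efficiency_bps_per_hz(snr_db):
    """Shannon-Hartley ceiling for a single link: C/B = log2(1 + SNR)."""
    return math.log2(1 + 10 ** (snr_db / 10))

for snr_db in (5, 10, 15, 20):      # rough range at a cellular handset
    print(f"SNR {snr_db:2d} dB -> at most "
          f"{spectral_efficiency_bps_per_hz(snr_db):.1f} bit/s per Hz")
# 5 dB -> 2.1, 10 dB -> 3.5, 15 dB -> 5.0, 20 dB -> 6.7 bit/s/Hz
```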
While there may be spare capacity in China, there’s clearly a supply question in markets like the US that are already halfway up the growth curve. Cisco ignores this. In North America they’re forecasting that the number of mobile-only internet users will go from 2.6 million to 55.6 million (!). It’s reasonable to assume that most of these new users are in places that are already consuming a lot of capacity, and that one will need more radio bandwidth to deliver more data throughput.
Cisco is forecasting that throughput will go from 0.05 ExaB/mo to 1.0 ExaB/mo for North American users. That’s a factor of 20. It’s hard to see how you get there from here without massive reengineering of the infrastructure.
- One could get 2x by doubling available licenses from 400 MHz to 800 MHz; the FCC is talking about finding 500 MHz of new licenses for mobile data, but this is a pipe dream – if not in principle, then at least within the next five years, given how slowly the gears grind in DC.
- The extra throughput isn’t coming from offloading traffic from the wireless onto the wired network; Cisco considered this, and is forecasting 39% offload by 2015. Let’s say they’re conservative, and it’s 50%: that’s just another 2x.
- Spectral efficiency, the bits/second that can be extracted from a Hertz of bandwidth, isn’t going to increase much. Engineers have made great strides in the last decade, but we’re approaching the theoretical limits. Maybe another 50%, from 4 bps/Hz to 6 bps/Hz? Even an implausible doubling to 8 bps/Hz is just another 2x.
So by using heroically optimistic assumptions one can get an 8x increase in capacity – nowhere near the 20x Cisco is forecasting.
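Spelling out that multiplication, with the same generous assumptions as in the list above:

```python
more_spectrum     = 2.0   # 400 MHz -> 800 MHz of licensed bandwidth
more_offload      = 2.0   # ~50% of traffic pushed onto wired networks
better_efficiency = 2.0   # an implausible 4 -> 8 bps/Hz

achievable = more_spectrum * more_offload * better_efficiency
print(achievable)         # 8.0
print(20 / achievable)    # 2.5 - the gap that would have to come from a much
                          # denser, re-engineered infrastructure
```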
And last but not least, the forecast method ignores Econ 101: if demand increases with limited supply, prices will go up, and this will suppress demand. Not only does the study ignore supply, it also ignores supply/demand interactions.
Still, let’s stipulate that the demand forecast is accurate, and grant me that supply is going to be constrained. The consequence is that there will be millions of screaming customers over the next few years when they discover that the promise of unlimited mobile connectivity cannot be delivered. The pressure on government will be huge, and the opportunities for innovations that improve effective throughput and the user experience in a world of scarcity (relative to expectations) will be immense. A crisis is coming; and with it the opportunity to make fundamental fixes to how wireless licenses are managed, and how applications are delivered.
Thursday, February 03, 2011
Ways of Knowing
Reading St Augustine’s Confessions reminded me of the Buddhist tradition's three ways of knowing, or "wisdoms": experiential/mystical, cerebral/rational, and learning/textual. (The Pāli terms are bhavana-mayā paññā, cintā-mayā paññā and suta-mayā paññā, respectively.) What strikes me about Augustine is his depth in all three methods; most people seem comfortable in one or at most two of them.
People may debate at cross purposes because they use different approaches to understand the world. Someone who thinks about the world experientially will have difficulty finding common ground with someone grounded in logic, and both may belittle someone who defers to tradition or social norms.
When I shared this idea with Dor Deasy, she pointed out that John Wesley thought faith should be approached from four perspectives: Experience, Reason, Scripture and Tradition, which map to the three above if one combines Scripture and Tradition. According to Wikipedia, the Wesleyan Quadrilateral can be seen as a matrix for interpreting the Bible in mutually complementary ways: “[T]he living core of the Christian faith was revealed in Scripture, illumined by tradition, vivified in personal experience, and confirmed by reason.”
Different personality types approach faith in different ways, though. Peter Richardson’s Four Spiritualities: Expressions of Self, Expressions of Spirit uses the Myers-Briggs personality inventory to characterize an individual’s bent. It may come down to brain physiology: I would not be surprised to learn that some people's brains are built in a way that predisposes them to mystical experiences, while others are optimized for logic, or for absorbing social norms.
Sunday, January 02, 2011
Forging bits
William Gibson makes passing reference to the art and craft of forging documents early on in Spook Country, describing trips to second-hand bookstores to buy just the right paper, and the ageing of credentials by carrying them around.
Nowadays, though, paper is optional. Checks can be deposited by snapping pictures of the front and back and sending them to the bank, and airlines scan boarding passes from your phone's screen at the gate.
Paper credentials decentralize verification. When it was difficult to "call HQ" to check an identity – as it was until very recently – the attestation had to stand on its own feet, carrying the full burden of authenticating not only its bearer but also itself. Nowadays a database look-up is instantaneous, and the database can not only produce a photo of the person making the identity claim, but can also track whether multiple claims are being asserted simultaneously in different places.
The locus of forgery thus moves from the edge to the middle: you don't hack the passport, you hack the passport database. With a suitably large investment in securing the center, it becomes harder for street freelancers to generate credentials as they go, "at retail". However, there is now a single point of failure, and a successful hack of the central database can generate an unlimited number of false documents. As always when moving from bricks to clicks, the upfront cost is huge, but the marginal cost is negligible.
The discretion of, and trust required in, the agent at the edge diminishes. When paper documents had to be checked, officers developed a feel for a fake by handling tens of thousands of them over years, and their instincts could tell them something was off long before the official notice came around. Not all of them were equally good, though, and a rookie might miss a dud that an old hand would see a mile off. Now the quality of authentication depends on the security and agility of the central repository; if it can be broken, or is slow to respond to an exploit, a hack that works will work everywhere, immediately.
One might therefore expect that digital spooks and their paymasters are working not only on building bit-bombs to disable infrastructure, but constructing trapdoors to facilitate the forgery of digital credentials. "Identity theft" is probably not the half of it; identity creation (and destruction) is much more valuable.