Thursday, September 29, 2011

Licensed Unlicensed: Having your Coase, and your Commons too

I lighted on the notion of issuing a handful of receiver licenses in allocations where transmitter licensees don’t control receivers (e.g. TV, GPS) to facilitate negotiations between operators in neighboring bands; details blogged here.

The same idea could be applied to unlicensed allocations, where the unbounded number of operators makes it essentially impossible for Coasian adjustments to be made: a neighbor that would like quieter unlicensed devices has nobody to make a deal with, nor do unlicensed users have an effective way to band together to make a deal if they’d like to increase their own transmit power. This approach also has the benefit, as in the receiver license case, of giving the regulator a tool for changing operating expectations over time, e.g. ratcheting down receiver protections or increasing receiver standards.

The catch-phrase “licensed unlicensed” is obviously a contradiction in terms; it’s shorthand for a regime where non-exclusive operating permissions are issued to a limited number of entities, while retaining the key characteristic that has made unlicensed successful: the ability of end users to choose for themselves what equipment to buy and deploy. The licensed entities can use or sub-license their authorizations to build and/or sell devices to end users.

Follow-up post


Friday, September 02, 2011

TV white space databases: A bad idea for developing countries

Now that TV white space rulemakings are in the can in the US and UK, proponents will be pitching the technology to any government that’ll listen, e.g. at the Internet Governance Forum meeting to be held in Nairobi on 27-30 September.

It’s understandable: the more widespread white space database rules become, the larger device volumes will be, and thus the lower the equipment cost, leading to wider adoption – a positive feedback loop. However, white space database technology is unnecessary in many countries, particularly developing ones.

Yet it verges on dodgy ethics for companies to hype this technology to countries that don’t need it, particularly since there’s a better solution: dedicating part of the TV frequencies that are freed as a result of the transition to digital TV (the “Digital Dividend”) to unlicensed operation, without the white space bells and whistles.

Monday, August 29, 2011

Spectrum “sharing”: the convenient ambiguity of an English verb

I realized while writing Spectrum Sharing: Not really sharing, and not just spectrum that my confusion over the meaning of spectrum sharing derives from two meanings of the English verb "to share":
(1) to divide and distribute in shares, to apportion;

(2) to use, experience or occupy with others, to have in common.

For example, the first is sharing a bag of peanuts, and the second is sharing a kitchen or an MP3 file. Cellular operators and economists tend to use the word with the first meaning, and Open Spectrum advocates with the second.

But that raises the question: is the double meaning inherent in the concept, or is it just an accident of English vocabulary?

I asked some friends about the regulatory terminology in other languages; so far I have information about Arabic, Chinese and German. If you could shed light on regulatory terminology in other languages, for example French, Japanese or Spanish, please get in touch.

Tuesday, August 23, 2011

Time limiting unlicensed authorizations

I’m coming around to Tom Hazlett’s view that unlicensed devices in the TV whitespaces are a bad idea because they preclude alternative future uses for the channels now being used for TV. (For his main objections, see “Shooting Blanks on Wireless Policy,” FT.com October 5, 2010 PDF) It’s a figure-ground problem; defining whitespace operating rules on the basis of TV operations reciprocally defines viable operations in the TV “blackspace”.


One could get around the problem and still have unlicensed use, though, by time limiting the unlicensed authorization. [1] As with build-out conditions on licenses, there would be a fixed time window within which widespread deployment must occur; if it doesn’t, the authorization is revoked.

This approach seems particularly relevant when an authorization holds great promise, but that promise is very uncertain, e.g. when the technology or the market is changing rapidly. “Sunsets” on rules are important since the passage of time invariably invalidates the premises of regulation, even as it entrenches the interests that coalesce around those regulations. [2]

Wednesday, August 17, 2011

Licensing radio receivers as a way to facilitate negotiation about interference

It’s a curious fact that, while receivers are just as much responsible for breakdowns in radio operations as transmitters [a], regulation is aimed pretty much exclusively at transmitters [b].

Since one can’t ignore the receivers in practice, arguments over interference almost invariably turn to receiver standards. Even if receiver standards were a good idea (and I don’t think they are - see my post Receiver protection limits: a better way to manage interference than receiver standards), the ability to adjust receiver performance by fiat or negotiation is limited when receivers are operated independently of transmitters.

I suspect that receiver licenses may be necessary to reach the optimum outcome in at least some cases. This post is going to take that idea out for a first test drive.

Regulators evidently have managed without receiver licenses (beyond their use as a way to fund traditional broadcasting) so far. Why introduce them now? I’ll give my usual answer: the dramatically increased demand for wireless is squeezing radio operators of widely varying kinds together to an unprecedented extent, and we no longer have the luxury of the wide gaps that allowed regulators to ignore receiver performance and ways of managing it.

Follow-up post

Tuesday, August 09, 2011

The dark side of whitespace databases

Back in May 2009 I drafted a blog about the unintended side-effects of regulating unlicensed radios using databases. I was in the thick of the TV whitespace proceeding (on the side of the proponents), and decided not to post it since it might have muddied the waters for my client.

Databases have become the Great White Hope of “dynamic spectrum access” over the last two-plus years. They are seen not only as a way to compensate for the weaknesses of “spectrum sensing” solutions, but as a way for regulators to change the rules quickly, and for unlicensed devices to work together more efficiently. For a quick background, see: FCC names nine white-space database providers, FierceWireless, Jan 2011; Michael Calabrese, “Ending Spectrum Scarcity: Building on the TV Bands Database to Access Unused Public Airwaves,” New America Foundation, Wireless Future Working Paper #25 (June 2009).
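For readers who haven’t followed the proceedings, the essence of a white space database is a geolocation lookup: a device reports where it is, and the database replies with the channels on which it may transmit. Here is a deliberately minimal Python sketch; everything in it – the channel range, the protected contours, the distances – is invented, and the real FCC rules add device classes, antenna heights and much else:

```python
# Hypothetical sketch of the core of a white space database lookup.
# The channel range, the protected contours and the protection radii
# below are all invented; real rules also depend on antenna height,
# device class, adjacent-channel restrictions, and more.
from math import radians, sin, cos, asin, sqrt

TV_CHANNELS = range(21, 52)  # illustrative UHF channel numbers

# (channel, contour center latitude, longitude, protection radius in km)
PROTECTED = [
    (23, 38.90, -77.04, 100.0),
    (36, 38.90, -77.04, 100.0),
]

def km_between(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, via the haversine formula."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    h = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(h))  # 6371 km: mean Earth radius

def available_channels(lat, lon):
    """Channels an unlicensed device at (lat, lon) may use right now."""
    blocked = {ch for ch, clat, clon, radius in PROTECTED
               if km_between(lat, lon, clat, clon) <= radius}
    return [ch for ch in TV_CHANNELS if ch not in blocked]

# A device near Washington, DC gets every channel except 23 and 36.
print(available_channels(38.9, -77.0))
```

One property is worth noticing even in this toy version: the device’s permissions live in the database, not in the device, so whoever maintains the table of protected entries can change what every deployed radio is allowed to do.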

Looking back at my note, I think it's still valid. Rather than rewrite it, I’ve decided simply to repost it here as originally drafted (omitting a couple of introductory paragraphs).



Thursday, August 04, 2011

No Common Authority: Why spectrum sharing across the Fed/non-Fed boundary is a bad idea

The ISART conference this year was about sharing in the radar bands, in line with the Administration’s efforts to encourage frequency sharing between Federal and non-Federal (e.g. commercial and civilian) users (NTIA Fast Track Evaluation PDF, FCC proceeding ET docket 10-123).

While it’s true that the NTIA has studied the feasibility of reallocating Federal Government spectrum, or relocating Federal Government systems, the current political focus is on “spectrum sharing” (cf. my post Spectrum Sharing: Not really sharing, and not just spectrum) – and Federal/non-Federal sharing is the hardest possible problem.

Federal/non-Federal sharing is hard for many reasons, notably the chasm between the two groups’ goals and incentives, and thus a profound lack of trust. I’m going to focus here, though, on a seemingly technical but profound problem: the lack of a common authority that can resolve conflicts.

Follow-up post

Wednesday, August 03, 2011

Spectrum Sharing: Not really sharing, and not just spectrum

There was endless talk about spectrum sharing at ISART in Boulder last week. I’ve become increasingly confused about what those words mean, since wireless has involved more than one radio system operating at the same time and place pretty much since the beginning.
For example, whitespace devices are said to share the UHF band with television, but the operating rules have been drawn up to ensure that whitespace devices never interfere with TV, i.e. never operate in the same place, channel and time. What’s “sharing” about that? The purpose of radio allocation from the start has been to avoid harmful interference between different radio operations, which has always been done by ensuring that two systems don’t operate in the same place, channel and time – such as two TV stations not interfering with each other.

It seems that the “new sharing” has three characteristics: (1) more boundaries (in geography, frequency and particularly time) than ever before; (2) the juxtaposition of different kinds of services that differ more from each other than they used to; and (3) sharing without central control. It’s a difference in degree, not in kind.

It’s not about sharing, since the goal is to avoid interference, i.e. to avoid sharing. It’s not about spectrum, i.e. radio frequencies, since non-interference is achieved not only by partitioning frequencies but also by dividing space, time, transmit power and the right to operate.


Sunday, June 26, 2011

The LightSquared Mess Shouldn’t Count Against Coase

It seems there’s a new meme floating around DC: I’ve been asked by both sides of the spectrum rights polemic whether the LightSquared/GPS situation proves that Coasian make-spectrum-property advocates are crazy because the rights seem to be pretty well defined in this case, and yet the argument drags on at the FCC rather than being resolved through market deals. I suspect the source is Harold Feld’s blog My Insanely Long Field Guide to LightSquared v. The GPS Guys where he says:

For a spectrum wonk such as myself, it simply does not get better than this. I also get one more real world example where I say to all the “property is the answer to everything” guys: “Ha! You think property is so hot? The rights are clearly defined here. Where’s your precious Coasian solution now, smart guys?”

The “Coasian” position does have its problems (see below), but this isn’t an example of one of them. I think Harold’s premise is incorrect: the rights are NOT well-defined. While LightSquared’s transmission rights are clear, GPS’s right to protection – or equivalently, LightSquared’s obligation to protect GPS receivers from its transmissions – is entirely unclear. There’s no objective, predictable definition of the protection that’s required, just vague generalities built into statute (see e.g. Mike Marcus’s Harmful Interference: The Definitional Challenge).

LightSquared’s transmission permissions are in some sense meaningless, since “avoiding harmful interference” will always trump whatever transmit right they have, and there’s no way to know in advance what will constitute harmful interference. I believe that’s a fundamental problem with almost all radio rights definitions to date, and why I’ve proposed the Three Ps.

The “Coasian” position’s really important problems are on view elsewhere:

(1) While negotiations between cellular operators to shift cell boundaries show that transactions can succeed in special cases, there is no evidence yet that transaction costs for disputes between different kinds of service will be low, and thus that negotiations will succeed in the general case. Even if one can ensure that rights are well defined, it may prove politically impossible to reduce the number of negotiating parties to manageable levels since radio licenses are a cheap way for the government to distribute largesse to interest groups. This is most obvious in the case of unlicensed operation, but many licensed services such as public safety and rural communications also result in a myriad of licensees.

(2) The FCC’s ability and proclivity to jump in and change operating rules (i.e. licensees’ rights) in the middle of the game makes regulatory lobbying more efficient than market negotiation. This may be unavoidable given law and precedent. There is no way for today’s Commission to bind tomorrow’s Commission to a path of action; legislation is the only way to do that, and even statute is subject to change.

(3) A significant chunk of radio services aren’t amenable to market forces since they’re operated by government agencies that can’t put a monetary value on their operations, and/or can’t take money in exchange for adjusted rights. Nobody is willing to quantify the cost of a slightly increased risk that an emergency responder won’t be able to complete a call, or that a radar system won’t see a missile, even if those systems have a non-zero failure rate to begin with. And even if the Defense Department were willing to do a deal with a cellular company to enable cellular service somewhere, it can’t take the cellco’s money; the dollars would flow to the Treasury, so there’s absolutely no incentive for the DoD (let alone the people who work for it) to come to some arrangement.

Wednesday, June 22, 2011

Protection Limits are not "Interference Temperature Redux"

My post Receiver Protection Limits may have left the impression that reception protection limits are similar to the dreaded and ill-fated interference temperature notion introduced in 2002 by the FCC’s Spectrum Policy Task Force.

Receiver protections are part of the "Three Ps" approach (Probabilistic reception Protections and transmission Permissions - see e.g. the earlier post How I Learned to Stop Worrying and Love Interference, or the full paper on SSRN). While both the Three Ps and Interference Temperature approaches share a desire to “shift the current method for assessing interference which is based on transmitter operations, to an approach that is based on the actual radiofrequency (RF) environment,” to quote from the first paragraph of the Interference Temperature NOI and NPRM (ET Docket No. 03-237), the Three Ps approach differs from Interference Temperature in four important ways:

1. The Three Ps focus on solving out-of-band, cross-channel interference, whereas Interference Temperature is concerned with in-band, co-channel operation.

2. The Three Ps are used to define new operating rights, whereas Interference Temperature tried to open up opportunities for additional operations in frequencies allocated to existing licensees.

3. The Three Ps do not grant second party rights, whereas Interference Temperature permits second party operation.

4. Three Ps rights are probabilistic, whereas Interference Temperature definitions are deterministic (see the sketch below).
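To make the fourth difference concrete, here is a toy numerical sketch in Python; the -117 dBm limit, the 95% confidence level and the simulated field levels are all invented for illustration:

```python
# Toy contrast between a deterministic limit and a probabilistic one.
# The -117 dBm threshold, the 95% confidence level and the simulated
# field-level distribution are all invented for illustration.
import random

THRESHOLD_DBM = -117.0  # hypothetical protection limit
QUANTILE = 0.95         # hypothetical confidence level

def deterministic_ok(levels_dbm):
    """Interference Temperature style: no sample may ever exceed the limit."""
    return max(levels_dbm) <= THRESHOLD_DBM

def probabilistic_ok(levels_dbm):
    """Three Ps style: the limit may be exceeded, but only in a small
    fraction (here 5%) of sampled places/times."""
    ordered = sorted(levels_dbm)
    q95 = ordered[int(QUANTILE * (len(ordered) - 1))]  # crude 95th percentile
    return q95 <= THRESHOLD_DBM

random.seed(1)
samples = [random.gauss(-125.0, 4.0) for _ in range(10_000)]  # simulated levels
print("deterministic:", deterministic_ok(samples))  # False: rare high samples
print("probabilistic:", probabilistic_ok(samples))  # True: outliers tolerated
```

With these simulated levels the deterministic test fails on the rare high samples while the probabilistic test passes – the kind of flexibility a probabilistic rights definition is intended to provide.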

Receiver protection limits: Two Analogies

I argued in Receiver protection limits that there are better ways to manage poor receivers causing cross-channel interference problems than specifying receiver standards. Here are two analogies to sharpen one’s intuition for the most appropriate way to handle such situations.

Cities increase the salinity of rivers running through them, affecting downstream agriculture. However, the choices that farmers make determine the degree of harm; some crops are much more salt-tolerant than others. In order to ensure that farms bear their part of the burden, regulators have a choice: they can either regulate which crops may be grown downstream, or they can specify a ceiling on the salinity of the water leaving the city limits, leaving it up to farmers to decide whether to plant salt-tolerant crops, perform desalination, or move their business elsewhere. Salinity ceilings are a less interventionist solution, and don’t require regulators to have a deep understanding of the interaction between salinity, crops and local geography.

Noise pollution is another analogy to radio operation. Let’s imagine that the state has an interest in the noise levels inside houses near a freeway. It can either provide detailed regulations prescribing building set-backs and comprehensive specifications on how houses should be sound-proofed, or it could ensure that the noise level at the freeway-residential boundary won’t exceed a certain limit, leaving it up to home-owners to decide where and how to build. Again, noise ceilings are a simple and generic regulatory approach that does not limit the freedom of citizens to live as they choose, and that does not require the regulator to keep pace with ever-evolving technologies to sound-proof buildings.

Receiver protection limits: a better way to manage interference than receiver standards

Radio interference cannot simply be blamed on a transmitter; a service can also break down because a receiver should be able to, but does not, reject a signal transmitted on an adjacent channel.

More on this topic in subsequent posts:
Receiver protection limits: Two Analogies (June 2011)
Protection Limits are not "Interference Temperature Redux" (June 2011)
The LightSquared Mess Shouldn’t Count Against Coase (June 2011)
Licensing radio receivers as a way to facilitate negotiation about interference (August 2011)
Incremental management of reception: When protection limits are not sufficient (February 2012)
Four Concerns about Interference Limits (May 2012)
Transmitter versus receiver specifications: measuring loudness versus understanding (July 2012)
Testimony: Harm Claim Thresholds (November 2012)
Receiver Interference Tolerance: The Tent Analogy (November 2012)
I have also written a two-page summary document, see http://sdrv.ms/ReceiverLimits.

The LightSquared vs. GPS bun fight is a good example of this “two to tango” situation. GPS receivers – some more so than others – are designed to receive energy way outside the allocated GPS bands, which means that transmissions in the adjacent band by a new service like LightSquared can cause satellite location services to fail. Without the LightSquared transmissions, there wouldn’t be a problem; but likewise, if GPS receivers were designed with the appropriate filters, they could reject the adjacent LightSquared transmissions while continuing to receive the satellite location signal and function normally. [1]
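A toy interference budget shows how much the receiver’s front end matters. Every number below – the EIRP, the path loss, the filter rejection figures, the tolerance threshold – is invented, and describes neither LightSquared nor any real GPS receiver:

```python
# Toy adjacent-band interference budget. Every figure here is invented
# and describes neither LightSquared nor any real GPS receiver.
def interference_seen(tx_eirp_dbm, path_loss_db, filter_rejection_db):
    """Adjacent-band power reaching the receiver's detector, in dBm."""
    return tx_eirp_dbm - path_loss_db - filter_rejection_db

TOLERABLE_DBM = -110.0  # hypothetical level the receiver can live with

# Same transmitter and path; only the receiver's front-end filter differs.
for front_end, rejection_db in [("wide-open front end", 5.0),
                                ("well-filtered front end", 45.0)]:
    level = interference_seen(tx_eirp_dbm=62.0, path_loss_db=130.0,
                              filter_rejection_db=rejection_db)
    verdict = "fails" if level > TOLERABLE_DBM else "keeps working"
    print(f"{front_end}: {level:.0f} dBm -> receiver {verdict}")
```

Same transmitter, same path; only the filtering differs, and with it the verdict on whether the service “breaks”.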

While the responsibility for interference is, in theory, shared between transmitters and receivers, radio regulation has traditionally placed the onus on a new transmitter to fix any problems that may arise. [2] As I will argue, receiver standards are an impractical response; limits on reception protection, formulated in terms of the RF environment rather than equipment performance, are preferable.



Tuesday, April 19, 2011

Too strategic to be true?

The cellular industry has been very vocal in calling on the FCC to allocate more spectrum licenses to satisfy the forecast demand for mobile data services. For two examples more or less at random, see this CTIA white paper, and the 4G Americas white paper “Sustaining the Mobile Miracle” (PDF).

On reflection, though, it strikes me as rather curious behavior for cut-throat competitors. More spectrum licenses won’t satisfy the insatiable demand for wireless data capacity any more than building highways reduces traffic congestion; and while asking for licenses might make strategic sense, in the short term – and isn’t that all that really matters for listed companies, when all the rhetoric is said and done? – it means that the cellcos are giving up a wonderful opportunity to make money.

If the supply of spectrum licenses were fixed, and not increased by reallocation of other services to mobile wireless, then Economics 101 dictates that the price for wireless data would rise. (This is ignored in the forecasts; see e.g. my post Cisco’s Fascinating Flaky Forecast.) Operators wouldn’t incur the capital costs of lighting up new frequencies, and so their profits would rise – a lot!

On the other hand, if more cellular licenses were made available, the carriers would not only have to buy them at auction, but they would have to buy and install the infrastructure to use them. The price they could charge for wireless data service wouldn’t change much, and so their profits would go down, or at best stay flat.
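The argument can be restated as a toy Economics 101 model. All the numbers below are made up; the only point is the direction of the comparison:

```python
# Toy Econ 101 version of the argument; all numbers are invented, and
# only the direction of the comparison matters.
def price_per_unit(capacity):
    """Inverse demand curve: more supplied capacity means a lower price."""
    return max(0.0, 20.0 - 0.1 * capacity)

def operator_profit(capacity, unit_cost, buildout_cost):
    return price_per_unit(capacity) * capacity - unit_cost * capacity - buildout_cost

# Scenario 1: supply of licensed capacity stays fixed; price stays high.
fixed = operator_profit(capacity=100, unit_cost=2.0, buildout_cost=0.0)

# Scenario 2: 50 extra units of capacity, paid for at auction and built out.
expanded = operator_profit(capacity=150, unit_cost=2.0, buildout_cost=400.0)

print(f"fixed supply:    {fixed:.0f}")    # 800 with these made-up numbers
print(f"expanded supply: {expanded:.0f}") # 50: lower price, extra costs
```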

All that said, though: these companies are much, much smarter business people than I am. I must be missing something. But what?

Perhaps this is all just a big CYA operation. When the inevitable demand crunch happens (with or without new cellular licenses, demand is set to outstrip supply), the operators will be able to blame the government: “Dear customer, it’s not our fault, we’ve been asking the government to help us give you the services you want, but they didn’t come through. We’re sorry, but all we can do to make sure that those who really need wireless services get them is to increase prices.”

Tuesday, March 01, 2011

“Quiet” doesn’t mean “unused”: The Downside of Under-defined Radio Rights

The FCC has promised to find and reallocate 500 MHz of radio frequencies to satisfy the burgeoning demand for high bandwidth mobile services such as video on cell phones. The idea, the hope, is that there are lots of unused bands to be repurposed. “Unused” is a tricky notion, though. I’ll take it to mean “radio quiet”: a radio energy detector doesn’t observe much if anything at certain frequencies, and the assumption is that a new service could transmit here.

Of course, nothing is as simple as that. Let’s assume that the services that actually operate in these quiet bands – and there are always incumbents, since every frequency has one if not many nominal users – can be found a new home, and that they’ll relocate. The harder problem is that a quiet band may not in fact be usable because of the equipment in neighboring bands. The LightSquared/GPS argument is a conveniently current example. The proposal to allow LightSquared to deploy lots of ground-based transmitters in a band where to date only satellite transmissions were allowed has caused shock and outrage among GPS users who claim that their receivers cannot distinguish between the LightSquared signal in the adjacent band and the satellite location signals in the GPS channel.

Since the FCC’s rules and precedents provide almost unlimited protection against "harmful interference" (a notoriously vague term) caused by new services, an incumbent is pretty much assured that it will be held harmless against any change. The situation is exacerbated because FCC licenses only specify transmission parameters and say nothing about the radio interference environment that receivers should be able to cope with. Radio receivers are thus designed and built as if their radio environment will never change; if a band has been quiet, none of the receivers in the adjacent frequencies can cope with more intensive use, since building in that protection costs money. (For complementary perspectives on this problem, and suggested remedies, see two short papers presented at a recent conference in Washington, DC: Kwerel and Williams, De Vries and Sieh.)

Thus, just because a band is quiet doesn’t mean that it’s unoccupied; it’s probably effectively occupied by the protection afforded to the cheap receivers next door that haven’t been required to, and therefore don’t, tolerate any substantial operation in the quiet channel. It’s as if the incumbent were a householder whose property used to be bordered by a track along which only ox wagons passed. She didn’t have to take any precautions against her dogs being run over by a wagon, such as building a fence, and this unlimited protection still holds even when the track is turned into an arterial road, holding passing vehicles completely responsible if a dog is run over.

Money could, but might not, solve the problem. Let’s say Tom Transmitter wants to deploy a new service in the formerly quiet band, and that this would cost the incumbent neighbor, Rae Receiver, $300 million, either in lost revenue from diminished service or in precautions, such as new receiver filters, needed to reject Tom’s adjacent band signals. If the benefit to Tom is big enough, if for example he could generate $500 million in profit, Tom could compensate Rae and still come out ahead. But how is the $200 million of potential gain ($500 million - $300 million) to be divided? This depends on Rae’s rights. If she has the right to prevent any operation by Tom (i.e. she can take out an injunction against him), she can demand essentially all his profits as a condition of operation – let’s say $499 million of his $500 million – whereas if she’s entitled to damages, she can only demand $300 million for actual losses. These are very different outcomes. Under an injunction, Tom’s incremental net profit is $1 million ($500 million - $499 million) and Rae’s is $199 million ($499 million - $300 million), whereas under damages, Tom’s net profit is $200 million and Rae’s is zero.
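For the avoidance of arithmetic doubt, here is the same division of gains written out as a tiny script (the $499 million hold-out figure is the stylized assumption from the paragraph above):

```python
# The Tom/Rae arithmetic from the paragraph above, written out.
TOM_GROSS_GAIN = 500  # $M: profit Tom's new service would generate
RAE_HARM = 300        # $M: Rae's lost revenue and/or filtering costs
surplus = TOM_GROSS_GAIN - RAE_HARM  # 200: the gain available to divide

# Injunction: Rae can block Tom entirely, so she can hold out for almost
# all of his gross gain ($499M in the example) as the price of consent.
rae_paid_injunction = 499
tom_net_injunction = TOM_GROSS_GAIN - rae_paid_injunction  # 1
rae_net_injunction = rae_paid_injunction - RAE_HARM        # 199

# Damages: Rae is entitled only to compensation for her actual losses.
rae_paid_damages = RAE_HARM
tom_net_damages = TOM_GROSS_GAIN - rae_paid_damages  # 200
rae_net_damages = rae_paid_damages - RAE_HARM        # 0

print(f"injunction: Tom nets ${tom_net_injunction}M, Rae nets ${rae_net_injunction}M")
print(f"damages:    Tom nets ${tom_net_damages}M, Rae nets ${rae_net_damages}M")
```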

However, since the FCC doesn’t specify whether licenses are protected by damages or injunctions, Tom and Rae can’t begin to deal, since the legal basis of the negotiation is unclear. Tom will hope that he can get the whole $200 million incremental gain, and Rae will hope for it, too – a huge difference in expectations that will almost inevitably prevent the parties from coming to an agreement.

(There are further obstacles to reaching a settlement that I won’t go into here, such as uncertainty over what action by Tom actually constitutes damage to Rae due to ambiguity in the way FCC rules are currently formulated, and the freeloader/hold-out problems with negotiations involving many parties.)

What's to be done?

1. Any inventory of “unused” radio capacity should not only itemize radio quiet bands, but also the nature of the service and receivers next door, so that the cost of relocating, protecting or degrading the incumbent service can be estimated.

2. Any new licenses that are issued should specify whether they’re protected by injunctions or damages; this will facilitate negotiation.

3. Any new license should specify the receiver protection parameters the operator can rely on, and by implication what will not be protected.

4. Regulators should start retrofitting existing licenses to this new approach by specifying the remedy (#2) and laying out a timeline over which receiver protections (#3) will be dialed down from the current open-ended “no harmful interference” condition to more realistic and objective received energy levels.

The two page position paper I referenced above gives a quick introduction to these measures; for all the gory details, see the 15 page long version on SSRN.

Monday, February 21, 2011

Juggling Pipes: orchestrating scarce radio resources to serve multifarious applications

I concluded in Cisco’s Fascinating Flaky Forecast that the impending supply/demand mismatch in wireless data services presents opportunities for “innovations that improve effective throughput and the user experience”. This post explains one example: a software layer that matches up the various applications on a device with the most appropriate connectivity option available, mixing and matching apps to pipes to make the cheapest, fastest, or most energy efficient connection. (In academic terms, it’s a version of Joe Mitola’s Cognitive Radio vision.)

Peter Haynes recently prompted me to ask some experts what they thought the most exciting wireless technology developments were likely to be in the next decade. Mostly the answer was More of The Same; a lot of work still has to be done to realize Mitola’s vision. The most striking response was from Milind Buddhikot at Bell Labs, who suggested that the wireless network as we know it today will disappear into a datacenter by 2020, which I take to mean that network elements will be virtualized.

I don’t know about the data center, but from a device perspective it reminded me of something that’s been clear for some time. A device’s connectivity options keep growing, from a single wired network jack to one or more cellular data connections, Wi-Fi, Bluetooth, UWB, ZigBee etc. The diversity of applications and their needs keeps growing too, from an email client to many apps with different needs including asynchronous downloads, voice and video streams, and data uploads. And choosing among the options becomes ever more complicated, with trade-offs between connectivity price, speed, quality of the connection, and energy usage. There is therefore a growing need for a layer that sits between all these components and orchestrates all these connections. Can you say “multi-sided market”?

The operating system is the obvious place to do such trade-offs. It sits between applications and peripherals, and already provides apps with abstractions of network connectivity. As far as I know, no OS provider has stepped up with a road map for “smart connectivity.” It’s decidedly not just “smart radio” as we’ve heard about with “white spaces”; the white space radio is just one of the many resources that need to be coordinated.

For example, one Wi-Fi card should be virtualized as multiple pipes, one for every app that wants to use it. Conversely, a Wi-Fi card and a 3G modem could be bonded into a single pipe should an application need additional burst capacity. And the OS should be able to swap out the physical connection associated with a logical pipe without the app having to know about it, e.g. when one walks out of a Wi-Fi hotspot and needs to switch to wide-area connectivity; the mobile phone companies are already doing this with Wi-Fi, though I don’t know how well it’s working.
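To make the pipe abstraction concrete, here is a minimal sketch of such a layer; the class and method names are invented for illustration, not any real OS API, and a real manager would also need the bonding and mid-session re-binding described above:

```python
# Minimal sketch of a pipe-orchestration layer. The class and method
# names are invented for illustration, not any real OS API; bonding and
# mid-session re-binding are left out to keep the sketch short.
from dataclasses import dataclass

@dataclass
class Link:
    name: str
    mbps: float          # current throughput estimate
    cents_per_mb: float  # marginal price of data on this link
    milliwatts: float    # energy drawn while the link is active

class PipeManager:
    """Matches each app's stated need to the best physical link."""
    def __init__(self, links):
        self.links = links

    def open_pipe(self, need):
        if need == "cheap":       # e.g. background download
            return min(self.links, key=lambda l: l.cents_per_mb)
        if need == "fast":        # e.g. video stream
            return max(self.links, key=lambda l: l.mbps)
        if need == "low_energy":  # e.g. periodic sync
            return min(self.links, key=lambda l: l.milliwatts)
        raise ValueError(f"unknown need: {need}")

mgr = PipeManager([Link("wifi", 54.0, 0.0, 800.0),
                   Link("cellular_3g", 2.0, 1.5, 1200.0),
                   Link("bluetooth", 0.7, 0.0, 50.0)])
print(mgr.open_pipe("fast").name)        # -> wifi
print(mgr.open_pipe("low_energy").name)  # -> bluetooth
```

A real implementation would sit below the sockets abstraction so that an app’s logical pipe survives the swap from Wi-Fi to wide-area connectivity without the app noticing.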

That said, the natural winner in this area isn’t clear. Microsoft should be the front-runner given its installed base on laptops, its deep relationships with silicon vendors, and its experience virtualizing hardware for the benefit of applications – but it doesn’t seem interested in this kind of innovation.

Google has an existential need to make connectivity to its servers as good as it could possibly be, and the success of Android in smartphones gives it a platform for shipping client code, and credibility in writing an OS. However, it is still early in developing expertise in managing an ecosystem of hardware vendors and app developers.

The network operators don’t have much end-user software expertise, but they won’t allow themselves to be commoditized without a fight, as they would be if a user’s software could choose moment-to-moment between AT&T and Verizon’s connectivity offers. The telcos have experience building and deploying connectivity management layers through orgs like 3GPP. Something like this could be built on IMS, but it’s currently a network rather than device architecture. And the network operators are unlikely to deploy software that allows the user to roam to another provider’s data pipes.

The chipset and handset vendors are in a weaker position since they compete amongst themselves so much for access to telcos. Qualcomm seems to get it, as evidenced by their Gobi vision, which is several years old now: “With Gobi, the notebook computer becomes the unifying agent between the different high speed wireless networking technologies deployed around the world and that means freedom from having to locate hotspots, more choice in carrier networks, and, ultimately, freedom to Gobi where you want without fear of losing connectivity – your lifeline to your world.” As far as I can tell, though, it doesn’t go much beyond hardware and an API for supporting multiple 3G/4G service providers on one laptop.

Handset vendors like Samsung or HTC could make a go of it, but since network operators are very unlikely to pick a single hardware vendor, they will only be able to get an ecosystem up to scale if they collaborate in developing a standard. It’s more likely that they will line up behind the software giants when Google and/or Microsoft come forward with their solutions.

It is also possible that Cisco (or more likely, a start-up it acquires) will drive this functionality from the network layer, competing with or complementing app/pipe multiplexing software on individual devices. As Preston Marshall has outlined for cognitive radio,* future networks will adapt to user needs and organize themselves to respond to traffic flow and quality of service needs, using policy engines and cross-layer adaptation to manage multiple network structures. There is a perpetual tussle for control between the edge of the network and the center; smart communications modules will be just another installment.

* See Table 4 in Preston F Marshall, “Extending the Reach of Cognitive Radio,” Proceedings of the IEEE, vol. 97 no. 4 p. 612, April 2009