Monday, February 21, 2011

Juggling Pipes: orchestrating scarce radio resources to serve multifarious applications

I concluded in Cisco’s Fascinating Flaky Forecast that the impending supply/demand mismatch in wireless data services presents opportunities for “innovations that improve effective throughput and the user experience”. This post explains one example: a software layer that matches the various applications on a device to the most appropriate connectivity option available, mixing and matching apps to pipes to make the cheapest, fastest, or most energy-efficient connection. (In academic terms, it’s a version of Joe Mitola’s Cognitive Radio vision.)

Peter Haynes recently prompted me to ask some experts what they thought the most exciting wireless technology developments were likely to be in the next decade. Mostly the answer was More of The Same; a lot of work still has to be done to realize Mitola’s vision. The most striking response was from Milind Buddhikot at Bell Labs, who suggested that the wireless network as we know it today will disappear into a datacenter by 2020, which I take to mean that network elements will be virtualized.

I don’t know about the data center, but from a device perspective it reminded me of something that’s been clear for some time. A device’s connectivity options keep growing, from a single wired network jack to one or more cellular data connections, Wi-Fi, Bluetooth, UWB, ZigBee, etc. The diversity of applications and their needs keeps growing too, from a lone email client to many apps with different requirements: asynchronous downloads, voice and video streams, data uploads. And choosing among the options keeps getting more complicated, with trade-offs between connectivity price, speed, connection quality, and energy usage. There is therefore a growing need for a layer that sits between all these components and orchestrates all these connections. Can you say “multi-sided market”?

The operating system is the obvious place to make such trade-offs. It sits between applications and peripherals, and already provides apps with abstractions of network connectivity. As far as I know, though, no OS provider has stepped up with a road map for “smart connectivity.” It’s decidedly not just “smart radio” as we’ve heard about with “white spaces”; the white space radio is just one of the many resources that need to be coordinated.
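To make the trade-off concrete, here is a minimal sketch of how such an orchestration layer might pick a pipe for an app. Everything in it is hypothetical, the class names, the weights, and the numbers; no real OS exposes this interface today. The idea is simply that each app states its priorities (price, speed, energy) and the layer scores the available pipes accordingly.

```python
from dataclasses import dataclass

@dataclass
class Pipe:
    """One connectivity option, with illustrative per-pipe costs."""
    name: str
    price_per_mb: float     # dollars per megabyte
    throughput_mbps: float  # nominal downlink speed
    energy_mw: float        # radio power draw while active

def pick_pipe(pipes, w_price=1.0, w_speed=1.0, w_energy=1.0):
    """Return the pipe with the best weighted score.

    Higher throughput is better; higher price and energy are worse.
    A battery-conscious background sync raises w_energy; a bulk
    download raises w_speed. The scaling constants are arbitrary.
    """
    def score(p):
        return (w_speed * p.throughput_mbps
                - w_price * p.price_per_mb * 100
                - w_energy * p.energy_mw / 100)
    return max(pipes, key=score)

pipes = [
    Pipe("wifi",      price_per_mb=0.00, throughput_mbps=20.0, energy_mw=800),
    Pipe("3g",        price_per_mb=0.02, throughput_mbps=2.0,  energy_mw=1200),
    Pipe("bluetooth", price_per_mb=0.00, throughput_mbps=0.7,  energy_mw=100),
]

best_for_video = pick_pipe(pipes, w_speed=3.0)   # favors raw throughput
best_for_sync = pick_pipe(pipes, w_energy=5.0)   # favors low power draw
```

The point of the sketch is that the same set of radios yields different answers for different apps, which is exactly why a per-app orchestration layer beats a single device-wide connection choice.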

For example, one Wi-Fi card should be virtualized as multiple pipes, one for every app that wants to use it. Conversely, a Wi-Fi card and a 3G modem could be bonded into a single pipe should an application need additional burst capacity. And the OS should be able to swap out the physical connection associated with a logical pipe without the app having to know about it, e.g. when one walks out of a Wi-Fi hotspot and needs to switch to wide-area connectivity; the mobile phone companies are already doing this with Wi-Fi, though I don’t know how well it’s working.
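The logical-pipe idea above can be sketched in a few lines: the app holds a stable handle while the orchestration layer swaps the physical radio underneath it. The class and method names here are invented for illustration, not any real OS interface.

```python
class LogicalPipe:
    """A stable handle an app can hold while the OS swaps radios beneath it."""

    def __init__(self, physical):
        self._physical = physical  # current backing radio, e.g. "wifi0"

    def send(self, data):
        # The app calls send() and never learns which radio carried the bytes.
        return f"sent {len(data)} bytes via {self._physical}"

    def rebind(self, new_physical):
        # Called by the orchestration layer, not the app, on a connectivity
        # change -- e.g. when the user walks out of a Wi-Fi hotspot.
        self._physical = new_physical

pipe = LogicalPipe("wifi0")
pipe.send(b"hello")    # carried over Wi-Fi
pipe.rebind("wwan0")   # user leaves the hotspot; OS swaps in the 3G modem
pipe.send(b"hello")    # same handle, now carried over wide-area data
```

Bonding a Wi-Fi card and a 3G modem into one pipe is the same trick in reverse: one logical handle backed by two physical radios, with the layer splitting traffic between them.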

That said, the natural winner in this area isn’t clear. Microsoft should be the front-runner given its installed base on laptops, its deep relationships with silicon vendors, and its experience virtualizing hardware for the benefit of applications – but it doesn’t seem interested in this kind of innovation.

Google has an existential need to make connectivity to its servers as good as it could possibly be, and the success of Android in smartphones gives it a platform for shipping client code, and credibility in writing an OS. However, it is still early in developing expertise in managing an ecosystem of hardware vendors and app developers.

The network operators don’t have much end-user software expertise, but they won’t allow themselves to be commoditized without a fight, as they would be if a user’s software could choose moment-to-moment between AT&T’s and Verizon’s connectivity offers. The telcos have experience building and deploying connectivity management layers through orgs like 3GPP. Something like this could be built on IMS, but that’s currently a network architecture rather than a device architecture. And the network operators are unlikely to deploy software that allows the user to roam onto another provider’s data pipes.

The chipset and handset vendors are in a weaker position, since they compete amongst themselves so much for access to telcos. Qualcomm seems to get it, as evidenced by its Gobi vision, which is several years old now: “With Gobi, the notebook computer becomes the unifying agent between the different high speed wireless networking technologies deployed around the world and that means freedom from having to locate hotspots, more choice in carrier networks, and, ultimately, freedom to Gobi where you want without fear of losing connectivity – your lifeline to your world.” As far as I can tell, though, Gobi doesn’t go much beyond hardware and an API for supporting multiple 3G/4G service providers on one laptop.

Handset vendors like Samsung or HTC could make a go of it, but since network operators are very unlikely to pick a single hardware vendor, they will only be able to get an ecosystem up to scale if they collaborate on developing a standard. It’s more likely that they will line up behind the software giants when Google and/or Microsoft come forward with their solutions.

It is also possible that Cisco (or more likely, a start-up it acquires) will drive this functionality from the network layer, competing with or complementing app/pipe multiplexing software on individual devices. As Preston Marshall has outlined for cognitive radio,* future networks will adapt to user needs and organize themselves to respond to traffic flow and quality of service needs, using policy engines and cross-layer adaptation to manage multiple network structures. There is a perpetual tussle for control between the edge of the network and the center; smart communications modules will be just another installment.

* See Table 4 in Preston F. Marshall, “Extending the Reach of Cognitive Radio,” Proceedings of the IEEE, vol. 97, no. 4, p. 612, April 2009.
