Monday, February 21, 2011

Juggling Pipes: orchestrating scarce radio resources to serve multifarious applications

I concluded in Cisco’s Fascinating Flaky Forecast that the impending supply/demand mismatch in wireless data services presents opportunities for “innovations that improve effective throughput and the user experience”. This post explains one example: a software layer that matches the various applications on a device to the most appropriate connectivity option available, mixing and matching apps to pipes to make the cheapest, fastest, or most energy-efficient connection. (In academic terms, it’s a version of Joe Mitola’s Cognitive Radio vision.)

Peter Haynes recently prompted me to ask some experts what they thought the most exciting wireless technology developments were likely to be in the next decade. Mostly the answer was More of The Same; a lot of work still has to be done to realize Mitola’s vision. The most striking response was from Milind Buddhikot at Bell Labs, who suggested that the wireless network as we know it today will disappear into a datacenter by 2020, which I take to mean that network elements will be virtualized.

I don’t know about the data center, but from a device perspective it reminded me of something that’s been clear for some time. A device’s connectivity options keep growing, from a single wired network jack to one or more cellular data connections, Wi-Fi, Bluetooth, UWB, ZigBee, etc. The diversity of applications and their needs keeps growing too, from a lone email client to many apps with very different requirements: asynchronous downloads, voice and video streams, data uploads. And choosing among the options keeps getting more complicated, with trade-offs between connectivity price, speed, connection quality, and energy usage. There is therefore a growing need for a layer that sits between all these components and orchestrates the connections. Can you say “multi-sided market”?
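To make the idea concrete, here is a minimal sketch of how such a layer might score the available pipes against each app’s priorities. The classes, attributes, and weights are all my own invention for illustration, not a description of any shipping API.

```python
# Hypothetical app-to-pipe matchmaker; all names and numbers are
# illustrative assumptions, not a real OS interface.
from dataclasses import dataclass

@dataclass
class Pipe:
    name: str
    mbps: float          # currently achievable throughput
    cents_per_mb: float  # marginal price of moving data
    mw_per_mb: float     # energy cost of moving a megabyte

@dataclass
class AppNeeds:
    name: str
    w_speed: float   # relative weight on throughput
    w_price: float   # relative weight on cheapness
    w_energy: float  # relative weight on battery life

def score(pipe: Pipe, needs: AppNeeds) -> float:
    """Higher is better: reward throughput, penalize price and energy."""
    return (needs.w_speed * pipe.mbps
            - needs.w_price * pipe.cents_per_mb
            - needs.w_energy * pipe.mw_per_mb)

def best_pipe(pipes: list[Pipe], needs: AppNeeds) -> Pipe:
    return max(pipes, key=lambda p: score(p, needs))

pipes = [Pipe("wifi", mbps=20.0, cents_per_mb=0.0, mw_per_mb=5.0),
         Pipe("3g", mbps=2.0, cents_per_mb=1.0, mw_per_mb=20.0)]

video_call = AppNeeds("video call", w_speed=0.8, w_price=0.1, w_energy=0.1)
email_sync = AppNeeds("email sync", w_speed=0.1, w_price=0.5, w_energy=0.4)

print(best_pipe(pipes, video_call).name)  # "wifi": throughput dominates
print(best_pipe(pipes, email_sync).name)  # "wifi": free and frugal beats 3G
```

A real implementation would of course also track link quality, data caps, and user policy, and would re-run the match as conditions change.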

The operating system is the obvious place to make such trade-offs. It sits between applications and peripherals, and already provides apps with abstractions of network connectivity. As far as I know, though, no OS provider has stepped up with a road map for “smart connectivity.” It’s decidedly not just “smart radio” as we’ve heard about with “white spaces”; the white space radio is just one of the many resources that need to be coordinated.

For example, one Wi-Fi card should be virtualized as multiple pipes, one for every app that wants to use it. Conversely, a Wi-Fi card and a 3G modem could be bonded into a single pipe should an application need additional burst capacity. And the OS should be able to swap out the physical connection associated with a logical pipe without the app having to know about it, e.g. when one walks out of a Wi-Fi hotspot and needs to switch to wide-area connectivity; the mobile phone companies are already doing this with Wi-Fi, though I don’t know how well it’s working.
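Here is a minimal sketch of the abstraction I have in mind: the application holds a stable logical pipe while the layer underneath splits, bonds, or rebinds physical radios. The class and method names are hypothetical; no OS exposes anything like this today, as far as I know.

```python
# Hypothetical logical-pipe abstraction. Apps hold a LogicalPipe handle;
# the connectivity layer rebinds or bonds physical radios underneath
# without the app noticing. Names are illustrative, not a real OS API.
class PhysicalRadio:
    def __init__(self, name: str, mbps: float):
        self.name, self.mbps = name, mbps

class LogicalPipe:
    """What the application sees: a stable handle, whatever radios back it."""
    def __init__(self, radios: list):
        self.radios = radios

    @property
    def capacity_mbps(self) -> float:
        return sum(r.mbps for r in self.radios)

    def rebind(self, radios: list) -> None:
        # e.g. walking out of a hotspot: swap Wi-Fi for 3G transparently
        self.radios = radios

wifi = PhysicalRadio("wifi", 20.0)
cell = PhysicalRadio("3g", 2.0)

app_pipe = LogicalPipe([wifi])          # one app's slice of the Wi-Fi card
burst_pipe = LogicalPipe([wifi, cell])  # bonded for extra burst capacity
print(burst_pipe.capacity_mbps)         # 22.0

app_pipe.rebind([cell])                 # left the hotspot; the handle survives
```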

That said, the natural winner in this area isn’t clear. Microsoft should be the front-runner given its installed base on laptops, its deep relationships with silicon vendors, and its experience virtualizing hardware for the benefit of applications – but it doesn’t seem interested in this kind of innovation.

Google has an existential need to make connectivity to its servers as good as it could possibly be, and the success of Android in smartphones gives it a platform for shipping client code, and credibility in writing an OS. However, it is still early in developing expertise in managing an ecosystem of hardware vendors and app developers.

The network operators don’t have much end-user software expertise, but they won’t allow themselves to be commoditized without a fight, as they would be if a user’s software could choose moment-to-moment between AT&T’s and Verizon’s connectivity offers. The telcos do have experience building and deploying connectivity management layers through orgs like 3GPP. Something like this could be built on IMS, but IMS is currently a network architecture rather than a device architecture. And the network operators are unlikely to deploy software that allows the user to roam onto another provider’s data pipes.

The chipset and handset vendors are in a weaker position since they compete amongst themselves so much for access to telcos. Qualcomm seems to get it, as evidenced by their Gobi vision, which is several years old now: “With Gobi, the notebook computer becomes the unifying agent between the different high speed wireless networking technologies deployed around the world and that means freedom from having to locate hotspots, more choice in carrier networks, and, ultimately, freedom to Gobi where you want without fear of losing connectivity – your lifeline to your world.” As far as I can tell, though, it doesn’t go much beyond hardware and an API for supporting multiple 3G/4G service providers on one laptop.

Handset vendors like Samsung or HTC could make a go of it, but since network operators are very unlikely to pick a single hardware vendor, they will only be able to get an ecosystem up to scale if they collaborate in developing a standard. It’s more likely that they will line up behind the software giants when Google and/or Microsoft come forward with their solutions.

It is also possible that Cisco (or more likely, a start-up it acquires) will drive this functionality from the network layer, competing with or complementing app/pipe multiplexing software on individual devices. As Preston Marshall has outlined for cognitive radio,* future networks will adapt to user needs and organize themselves to respond to traffic flow and quality of service needs, using policy engines and cross-layer adaptation to manage multiple network structures. There is a perpetual tussle for control between the edge of the network and the center; smart communications modules will be just another installment.

* See Table 4 in Preston F. Marshall, “Extending the Reach of Cognitive Radio,” Proceedings of the IEEE, vol. 97, no. 4, p. 612, April 2009.

Saturday, February 12, 2011

Cisco’s Fascinating Flaky Forecast

Ed Thomas prompted me to have a look at Cisco’s recently published Visual Networking Index: Global Mobile Data Traffic Forecast Update, 2010–2015.

The numbers are staggering: global mobile data traffic grew 2.6-fold in 2010, nearly tripling for the third year in a row; mobile video will exceed 50% of mobile data traffic for the first time in 2011; and Cisco predicts that global mobile data traffic will increase 26-fold between 2010 and 2015. Big numbers forecast by someone who’ll make money if they come true are always suspect, though. While the historical data are largely indisputable – and amazing – I think the forecasts are bogus, though in interesting ways.

Flags went up at the projection of a 92% CAGR in mobile traffic over the next five years. From the scant details on assumptions provided in the report, I suspect the overall growth is driven (more than driven, in fact) by the growth in the number of users, not by increases in per-user usage. For example, Cisco predicts that the number of mobile-only Internet users will grow 25-fold between 2010 and 2015 to reach 788 million, over half of them in “Asia Pacific” (defined to exclude Japan).
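(The headline numbers are at least internally consistent: a 26-fold increase over five years does work out to about 92% compound annual growth.)

```python
# Sanity check: 26-fold growth over five years implies a ~92% CAGR
print(26 ** (1 / 5) - 1)  # ≈ 0.92, i.e. about 92% per year
```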

Working back from their forecast data volumes and assumptions on user growth, however, suggests that usage per user (I prefer to think in terms of megabits/second rather than exabytes/month) doesn’t increase over the study period, and in fact declines.



The growth in traffic thus hinges on the global user base growing to almost 800 million mobile-only users in five years, from 14 million today. That’s staggering, and to me implausible.
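To see how the back-calculation works, take the North American figures quoted below (traffic growing from 0.05 to 1.0 exabytes/month, mobile-only users growing from 2.6 million to 55.6 million) and convert them to an average rate per user. Using mobile-only users as the denominator is a simplification on my part, but it illustrates the flat-to-declining pattern:

```python
# Back-of-envelope conversion of exabytes/month into kbit/s per user.
# Inputs are Cisco's North America figures quoted later in this post;
# dividing by mobile-only users is a simplifying assumption of mine.
SECONDS_PER_MONTH = 30.4 * 24 * 3600  # about 2.63 million seconds

def kbps_per_user(exabytes_per_month: float, users: float) -> float:
    bits_per_month = exabytes_per_month * 1e18 * 8
    return bits_per_month / SECONDS_PER_MONTH / users / 1e3

print(kbps_per_user(0.05, 2.6e6))   # 2010: ~59 kbit/s per user
print(kbps_per_user(1.0, 55.6e6))   # 2015: ~55 kbit/s per user
```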

If nothing else, though, this demonstrates that Cisco’s meganumbers don’t necessarily imply an impending bandwidth crunch. That doesn’t mean there isn’t going to be one, just that the growth numbers don’t imply or require it, because they’re in large part driven by hundreds of millions of new users in China.

A more fundamental flaw is that the analysis is entirely demand driven. This was probably fine when Cisco was predicting wireline use, since there is so much dark fiber that supply is essentially unlimited. However, one cannot ignore the scarcity of radio licenses. We’re near the Shannon limit of the number of bits/second that can be extracted from a Hertz of bandwidth, and massive new frequency allocations will not show up overnight. An alternative is to reduce cell size and serve more users per cell by using smart antennas; however, such a build-out will take time. I don’t know how much extra traffic one can fit into the existing infrastructure and frequencies, but Cisco should at least have made an argument that this doesn’t matter, or that it can ramp up as fast as the demand.
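For reference, the Shannon limit follows from C/B = log2(1 + SNR), capacity per unit of bandwidth. A quick sketch of the arithmetic, with SNR values that are merely illustrative of typical cellular links:

```python
# Shannon spectral efficiency: C/B = log2(1 + SNR), the hard ceiling
# on bits/second per Hertz. SNR values below are illustrative only.
from math import log2

def spectral_efficiency_bps_per_hz(snr_db: float) -> float:
    return log2(1 + 10 ** (snr_db / 10))

for snr_db in (5, 15, 25):
    print(snr_db, round(spectral_efficiency_bps_per_hz(snr_db), 1))
# 5 dB  -> ~2.1 bps/Hz
# 15 dB -> ~5.0 bps/Hz
# 25 dB -> ~8.3 bps/Hz: even a pristine link doesn't buy much more
```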

While there may be spare capacity in China, there’s clearly a supply question in markets like the US that are already halfway up the growth curve. Cisco ignores this. In North America it forecasts that the number of mobile-only internet users will go from 2.6 million to 55.6 million (!). It’s reasonable to assume that most of these new users are in places that are already consuming a lot of capacity, and that one will need more radio bandwidth to deliver more data throughput.

Cisco is forecasting that North American traffic will go from 0.05 exabytes/month to 1.0 exabytes/month. That’s a factor of 20. It’s hard to see how you get there from here without massive reengineering of the infrastructure.

  • One could get 2x by doubling available licenses from 400 MHz to 800 MHz; the FCC is talking about finding 500 MHz of new licenses for mobile data, but this is a pipe dream, if not in principle then certainly within the next five years, given how slowly the gears grind in DC.
  • The extra throughput isn’t coming from offloading traffic from the wireless onto the wired network; Cisco considered this, and is forecasting 39% offload by 2015. Let’s say they’re being conservative and it’s actually 50%: that’s just another 2x.
  • Spectral efficiency, the bits/second that can be extracted from a Hertz of bandwidth, isn’t going to increase much. Engineers have made great strides in the last decade, but we’re approaching the theoretical limits. Maybe another 50%, from 4 bps/Hz to 6 bps/Hz? Even an implausible doubling to 8 bps/Hz is just another 2x.

So even using heroically optimistic assumptions one can get an 8x increase in capacity – nowhere near the 20x Cisco is forecasting. The multiplication is spelled out below.
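Stacking the multipliers, for the record (each factor is the optimistic case from the list above):

```python
# Stacking the heroically optimistic multipliers from the list above.
spectrum_gain = 800 / 400  # doubling the licensed spectrum
offload_gain = 2.0         # 50% offload halves the load on the radio network
efficiency_gain = 8 / 4    # an implausible jump from 4 to 8 bps/Hz

print(spectrum_gain * offload_gain * efficiency_gain)  # 8.0, versus the
                                                       # 20x the forecast needs
```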



And last but not least, the forecast method ignores Econ 101: if demand increases while supply is limited, prices will go up, and higher prices will suppress demand. Not only does the study ignore supply, it also ignores supply/demand interactions.

Still, let’s stipulate that the demand forecast is accurate, and grant me that supply is going to be constrained. The consequence is that there will be millions of screaming customers over the next few years when they discover that the promise of unlimited mobile connectivity cannot be delivered. The pressure on government will be huge, and the opportunities for innovations that improve effective throughput and the user experience in a world of scarcity (relative to expectations) will be immense. A crisis is coming; and with it the opportunity to make fundamental fixes to how wireless licenses are managed, and how applications are delivered.

Thursday, February 03, 2011

Ways of Knowing

Reading St Augustine’s Confessions reminded me of the Buddhist tradition's three ways of knowing, or "wisdoms": experiential/mystical, cerebral/rational, and learning/textual. (The Pāli terms are bhāvanā-mayā paññā, cintā-mayā paññā and suta-mayā paññā, respectively.) What strikes me about Augustine is his depth in all three methods; most people seem comfortable in one or at most two of them.

People may debate at cross purposes because they use different approaches to understand the world. Someone who thinks about the world experientially will have difficulty finding common ground with someone grounded in logic, and both may belittle someone who defers to tradition or social norms.

When I shared this idea with Dor Deasy, she pointed out that John Wesley thought faith should be approached from four perspectives: Experience, Reason, Scripture and Tradition, which map to the three above if one combines Scripture and Tradition. According to Wikipedia, the Wesleyan Quadrilateral can be seen as a matrix for interpreting the Bible in mutually complementary ways: “[T]he living core of the Christian faith was revealed in Scripture, illumined by tradition, vivified in personal experience, and confirmed by reason.”

Different personality types approach faith in different ways, though. Peter Richardson’s Four Spiritualities: Expressions of Self, Expressions of Spirit uses the Myers-Briggs personality inventory to characterize an individual’s bent. It may come down to brain physiology: I would not be surprised to learn that some people's brains are built in ways that predispose them to mystical experiences, while others are optimized for logic, or for absorbing social norms.