Sunday, March 02, 2014

RF Mirror Worlds: Supercomputing meets propagation models, 3D terrain data and ubiquitous sensors

Petri Mähönen has observed that wireless researchers haven’t exploited supercomputing as much as one might expect, especially in comparison with other scientific disciplines such as aerospace, meteorology, oceanography, biology, sociology... If they had, we could be exploring thousands or millions of “Test Cities” rather than just the one contemplated in the PCAST Report (pdf, Chapter 6 and Appendix G). The PCAST budget for the first three years of a Test City (Table G.1) runs to $21 million in operating expenses and $15 million in capital expenses – that would buy a lot of computation!

I suspect (hope!) we’re on the verge of a step change in using software simulation, aka “mirror worlds”, to understand and manage radio systems. The underlying technology has been on the exponential growth curve we’ve all heard about, but hasn’t broken through to high profile visibility. It may soon.


David Gelernter’s wonderful “Mirror Worlds: or the Day Software Puts the Universe in a Shoebox” (1993, Google Books) postulated capturing extensive data about a particular “reality” (hospital, city, etc.) and then presenting a constantly updated model of that reality on a desktop computer:
“They are software models of some chunk of reality, some piece of the real world going on outside your window. Oceans of information pour endlessly into the model (through a vast maze of software pipes and hoses): so much information that the model can mimic the reality’s every move, moment-by-moment.
“A Mirror World is some huge institution’s moving, true-to-life mirror image trapped inside a computer—where you can see and grasp it whole. The thick, dense, busy sub-world that encompasses you is also, now, an object in your hands.”
(Mirror Worlds, Ch. 1, p. 3)
This image has stayed with me. We’re not quite there yet, but Mirror Worlds may well have inspired work on radio environment maps and spectrum management databases, in the same way that Star Trek inspired the smartphone: people are talking about Dynamic Spectrum Access databases as a way to manage wireless sharing by creating an accurate model of the real-world radio environment.

Moore’s Law has been chugging along for decades, and has given us pocket supercomputers (aka smartphones) and Big Data. Given the accelerating shift in radio technology from analog to digital, radio systems are surfing this wave too, and it feels as if we’re on the threshold of major change. Five to ten years from now, the spectrum landscape will be seen through a software lens: the RF (radio frequency) Mirror World.

I think of the engine of the RF Mirror World as having three interlocking components:

  1. Propagation models
  2. Terrain models
  3. Sensors

Propagation models predict the signal strength at any location, given the locations and characteristics of the transmitters and the intervening obstacles encoded in terrain models. Propagation predictions have to be calibrated to reality, though (typical model errors are ~9 dB, per Phillips, Sicker & Grunwald); ubiquitous cheap sensors can provide that ground truth. Each of these fields has been developing rapidly; knitted together, the whole could be transformative.
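To make that concrete, here is a minimal sketch of the kind of prediction a propagation model makes, using the textbook log-distance path-loss formula. The transmit power, frequency, and path-loss exponent below are illustrative choices of mine, not figures from any of the sources above:

```python
import math

def path_loss_db(distance_m, freq_mhz, n=3.0, d0_m=1.0):
    """Textbook log-distance model: free-space loss at a reference
    distance d0, plus 10*n*log10(d/d0) beyond it. The exponent n
    (~2 in free space, ~3-4 in cities) stands in for all the terrain
    and clutter detail a real planning tool models explicitly."""
    # Free-space path loss at d0, for distance in metres and frequency in MHz
    fspl_d0 = 20 * math.log10(d0_m) + 20 * math.log10(freq_mhz) - 27.55
    return fspl_d0 + 10 * n * math.log10(distance_m / d0_m)

def received_power_dbm(tx_power_dbm, distance_m, freq_mhz, n=3.0):
    return tx_power_dbm - path_loss_db(distance_m, freq_mhz, n)

# A 40 dBm (10 W) transmitter at 700 MHz, predicted 2 km away:
print(received_power_dbm(40.0, 2000.0, 700.0))  # about -88 dBm, give or take that ~9 dB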

1. Propagation models. I'm very ignorant here, but my guess is that while the models themselves are pretty well understood, computer run-time and terrain/obstruction data are the binding constraints. These will inevitably be eased by a combination of Moore’s Law and algorithm development. Wireless planning software suites like EDX SignalPro already do very sophisticated calculations, and operators are combining them with measurement to improve coverage predictions. Such tools are already integrated with 3D models (see #2 below); here's a screenshot from the Google Earth plug-in for EDX SignalPro v8.1:


Uncertainties remain due to weather (snow cover), season (vegetation), and human activity (aluminum semi-trailers), but one can imagine compensating for them dynamically with real-time, closed-loop calibration of propagation models against data from ubiquitous sensors. In fact, sensing and modeling over a wide frequency range could help discriminate among features of different scales, e.g. leaves vs. people vs. semi-trailers, since scattering depends on the size of the scatterer.
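A toy version of that closed loop: compare the simple model above with sensor readings and re-fit its parameters by least squares. The sensor reports here are invented for illustration; a real system would ingest measurements continuously:

```python
import numpy as np

def fit_log_distance(distances_m, rssi_dbm, tx_power_dbm):
    """Least-squares fit of PL = A + 10*n*log10(d) to measured RSSI:
    the 'ground truth' step that pulls the model back toward reality."""
    measured_pl = tx_power_dbm - np.asarray(rssi_dbm)
    X = np.column_stack([np.ones(len(distances_m)),
                         10.0 * np.log10(distances_m)])
    (A, n), *_ = np.linalg.lstsq(X, measured_pl, rcond=None)
    sigma = (measured_pl - X @ np.array([A, n])).std()  # residual shadowing, dB
    return A, n, sigma

# Invented sensor reports: distance (m) and RSSI (dBm) from a 40 dBm source
d = np.array([100.0, 250.0, 500.0, 1200.0, 3000.0])
rssi = np.array([-45.0, -58.0, -68.0, -80.0, -93.0])
A, n, sigma = fit_log_distance(d, rssi, tx_power_dbm=40.0)
print(f"intercept {A:.1f} dB, exponent n={n:.2f}, residual sigma {sigma:.1f} dB")
```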

2. Terrain models. Many propagation models assume generic terrain, like “small city”, or parameters for average building height and separation. More accurate models that trace real-world propagation paths need data on the actual terrain. Terrain models are becoming increasingly accurate, even to the point of becoming topics of regulatory dispute; for example, the FCC has proposed replacing a terrain database with 300-foot resolution by one with 100-foot resolution. More striking is the resolution of the 3D city models used in Apple and Google maps. CyberCity3D, which provides maps to Google, boasts of up-to-six-inch accuracy in its textured city models; here's an image from their Facebook page:

That’s getting to the point where one can do pretty snazzy ray-tracing propagation for outdoor scenarios. One can even dream about simulating vehicles and people in the mirror world; in many cities, we now have almost-real-time traffic data at street level. This doesn’t address the indoor case directly, but with enough processing power and measurements I can imagine AI that infers and updates building architecture, wall losses, etc. by constantly running models against measurements as people move their devices around inside buildings.
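As a toy illustration of how terrain data feeds such calculations, here is a line-of-sight check that walks a straight ray across a made-up elevation grid between two antennas. A real tool would add Fresnel-zone clearance, diffraction, earth curvature, and far finer data:

```python
import numpy as np

def has_line_of_sight(dem, tx, rx, samples=200):
    """dem: 2D array of terrain heights in metres; tx, rx: (row, col,
    antenna height above ground). Samples the straight ray between the
    antenna tips and reports whether terrain rises above it anywhere."""
    (r0, c0, h0), (r1, c1, h1) = tx, rx
    z0, z1 = dem[r0, c0] + h0, dem[r1, c1] + h1   # antenna tip heights
    t = np.linspace(0.0, 1.0, samples)
    rows = np.round(r0 + t * (r1 - r0)).astype(int)
    cols = np.round(c0 + t * (c1 - c0)).astype(int)
    ray = z0 + t * (z1 - z0)                      # straight line, no curvature
    return bool(np.all(dem[rows, cols] <= ray))

# Made-up flat 100x100 grid with an 80 m ridge between two 10 m masts
dem = np.zeros((100, 100))
dem[40:45, :] = 80.0
print(has_line_of_sight(dem, (10, 50, 10.0), (90, 50, 10.0)))  # False: ridge blocks
```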

[Photo: HP 8566 spectrum analyzer]
3. Sensors. Understanding the radio environment is all about sensors and data integration. The cost of measurement is coming down dramatically; spectrum analyzers are mostly just computers, and benefit from Moore’s Law like everything else. For example, a retired executive at a measurement company told me that in 1980 the HP 8566 cost $75K and weighed about 85 lbs, while today’s Agilent FieldFox N9938A costs about $35K and weighs 6.6 lbs; the basic RF performance of the two products is about the same.

[Photo: Agilent FieldFox N9938A]
The number of “spectrum observatories” is going to climb dramatically, from a handful around the world five years ago, to dozens today, to hundreds if not thousands in a few years. In terms of more limited band-of-operation measurement, tens of millions of LTE handsets are already capable of making spectrum measurements, with mass deployment to come as part of the “Minimization of Drive Tests” feature in the 3GPP cellular specs. DARPA’s RadioMap research program has staked out the vision, seeking to “provide real-time awareness of radio spectrum use across frequency, geography and time … in part by using radios deployed for other purposes, like data and voice communications systems.”
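To sketch the data-integration half of that vision (this is my own illustration, not how RadioMap or any handset actually works): interpolate scattered sensor readings onto a grid with inverse-distance weighting. Kriging or model-aided fusion would do better, and the readings below are invented:

```python
import numpy as np

def idw_map(sensor_xy, sensor_dbm, grid_x, grid_y, power=2.0):
    """Inverse-distance-weighted interpolation of sensor readings onto a
    grid: the simplest conceivable 'radio environment map'. Interpolating
    dB values directly is a naive but convenient shortcut."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    pts = np.stack([gx.ravel(), gy.ravel()], axis=1)          # (G, 2) grid points
    dist = np.linalg.norm(pts[:, None, :] - sensor_xy[None, :, :], axis=2)
    dist = np.maximum(dist, 1e-6)                             # avoid /0 at sensors
    w = 1.0 / dist ** power                                   # (G, S) weights
    est = (w * sensor_dbm).sum(axis=1) / w.sum(axis=1)
    return est.reshape(gx.shape)

# Five invented sensors in a 1 km x 1 km area (coordinates in metres)
xy = np.array([[100, 100], [900, 150], [500, 500], [200, 800], [850, 900]], float)
dbm = np.array([-60.0, -75.0, -55.0, -80.0, -70.0])
rem = idw_map(xy, dbm, np.linspace(0, 1000, 50), np.linspace(0, 1000, 50))
print(rem.shape, float(rem.min()), float(rem.max()))          # 50x50 dBm map
```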

Of course, by the "If I can think of something, someone's already done it" Rule, it's a safe bet that my speculations have already come true. But even if today's reality is more advanced than I think, that still leaves the challenge of figuring out what the world will be like in five and ten years' time, and what impact that will have on business, engineering, and public policy.

1 comment:

John Chapin, DARPA RadioMap Program Manager said...

Very interesting. You've laid out a vision of making RF environment models better through increased computation, better propagation/terrain models, and more sensor data, then running the models continuously as a "mirror" of the real world. But you don't say precisely in what ways the models should be improved and why improvements of those types would matter. I suggest there may be 3 axes of RF model quality to consider.

Accuracy – how big are the error bars in the RF model
Granularity – what is the smallest spatial and temporal unit in the model
Timeliness – is the model predictive, real-time (how much lag?), or post-facto (how much delay?)

To justify and guide work on RF mirror worlds, we need to think about:

1. What improvements in the model outputs would offer high value? For example, the granularity of the model directly controls the size of spectrum holes that could be exploited in a dynamic spectrum access scheme. Are there enough bands and users where dynamic spectrum access requires city-block granularity that development of a city-block granularity Mirror World is a priority?

2. In what ways does a continuously running RF environment model (a "mirror world") offer higher value than one which is invoked on request?

If you want to push forward in this area, I recommend you build a list of different applications for the RF mirror worlds – what could they do or be used for to provide benefits in the operation or development of RF systems – and categorize each one by its requirements for Accuracy, Granularity, Timeliness, and Continuity.

John