Thursday, May 26, 2022

Spectrum blind spots

We’re surprised when something unexpected happens; by definition, it’s something we hadn’t been thinking about. One way to avoid surprise is to try imagining unthought things. One way to do that is to consider how institutions stop us thinking about certain things. I’ll focus this post on spectrum topics we may not be thinking about hard enough. They include non-local interference, 3D interactions, agencies outside the FCC, the dearth of spectrum zoning, and the end of endlessly increasing demand for spectrum.

Institutions and Binaries

Sociologists and anthropologists like Emile Durkheim and Mary Douglas contend that institutions shape what individual people can think. Institutions can make us remember some things and forget others. Douglas recounts how Robert Merton “shows that the star scientists, normally benign and generous, furiously deny a convergent or earlier discovery” because science is organized around establishing priority (How Institutions Think, 1986, pp. 74–76). As another example, Thomas Kuhn famously undermined Karl Popper’s notion of science being built on falsifiability by pointing out that scientists happily ignore experiments that falsify the current paradigm. Those experiments are only talked about freely once a new paradigm appears that accounts for the weird results.

In other words: the institutions we’re embedded in can render some things almost unthinkable—and certainly mostly unthought—by most people, for long periods of time. There can be people who see the invisible and speak the unspeakable, but they’re voices in the wilderness.

The FCC’s recently opened Notice of Inquiry into receiver performance is a case in point. Radio regulation has been framed in terms of radiated energy for a century, which has made it very hard to think about receivers constructively. Dale Hatfield has been the voice in the wilderness, reminding the spectrum community about the importance of receivers since the 1980s.

This set me wondering about other spectrum blind spots: topics that our existing institutions make it hard to think about, let alone make progress on. 

The obvious first step is to consider other conceptual pairs similar to transmitter/receiver. Structuralists like Claude Lévi-Strauss analyze culture in terms of binary oppositions, where one is usually privileged and the other marginalized. (Examples include life/death, male/female, rich/poor, cooked/raw, good/evil.) 

For the usual suspects in spectrum policy, both sides of the dichotomy are pretty visible at the moment and so they don’t qualify as blind spots:

  • Licensed/unlicensed (of course, this was a blind spot in Mike Marcus’s glory days)
  • Coupled/decoupled systems, aka closed/open loop to use John Chapin’s terminology (systems where both ends of a link can transmit and receive, like communications systems, versus systems where receivers can’t talk back to transmitters, like television and passive systems like radio astronomy and GPS)
  • Centralized/decentralized systems (systems where radio use is coordinated top-down versus systems with multiple peers, cf. cellular vs. Wi-Fi; tends to correlate with licensed/unlicensed binary)
  • Commercial/non-commercial uses (e.g., cellular systems vs. government uses)
  • Comms/non-comms services (e.g., cellular vs. radar)
  • Interference-limited vs. noise-limited systems (while this binary is pretty well understood, it has certainly been a source of difficulties: Nextel/public safety; LightSquared/GPS; cellular/altimeters; and cellular/radar generally.) [footnote]

Blind spot binaries

However, there are some spectrum binaries where blind spots might lurk. The occluded terms are:

  • Phenomena that are non-local in frequency, space, and time
  • Systems that coexist in 3D, not just in the terrestrial plane and up at geostationary orbit
  • Agencies outside the FCC
  • Disparate services mixed together in nearby bands
  • Surplus rather than scarcity

In each of the pairs below, the first term is the privileged/visible/lauded one, and the second is the marginalized/hidden/deprecated one:

1. Local/non-local in frequency

Spectrum policy often starts from the assumption that each allocation is an island, independent of services higher or lower in frequency. A few interactions with neighbors are then added to the analysis, but they’re typically not exhaustive. Some examples:

  • Worrying about out-of-band emissions (OOBE) into the band of interest from a neighboring band, and ignoring receivers being affected by signals outside the band of interest.
  • Intermodulation: out-of-band interference, but with a vengeance. Now it’s not just non-local in frequency, but also mixes several adjacent frequencies together.
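To make the intermodulation point concrete, here’s a toy Python sketch showing where the classic third-order products of two carriers land. The carrier frequencies are made up for illustration; this is arithmetic, not a model of any real allocation:

```python
# Toy illustration: third-order intermodulation products of two carriers.
# Input frequencies are in MHz and purely hypothetical.
def third_order_products(f1, f2):
    """Return the classic third-order products 2*f1 - f2 and 2*f2 - f1."""
    return (2 * f1 - f2, 2 * f2 - f1)

low, high = third_order_products(1850.0, 1860.0)
print(low, high)  # 1840.0 1870.0 -- energy lands outside both original carriers
```

Two carriers 10 MHz apart generate products 10 MHz below and above the pair, i.e., in somebody else’s band: interference that’s non-local in frequency by construction.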

Considerations that one generation understands can be forgotten by the next. I suspect non-locality was more widely understood several decades ago. For example, back in the analog TV days everyone knew about taboo channels, image frequencies, etc. But as technology advanced, that knowledge became (largely) moot. 

2. Local/non-local in space

Spectrum policy making starts by assuming that interference is local, because RF power density falls off with the square of the distance from the transmitter, or faster. Therefore, one only needs to worry about the transmitter nearest a receiver. However, there are important exceptions that lead to difficulties when they’re ignored:

  • Once wavelength gets short enough that directional antennas are small, additional antenna gain can deliver a signal a lot further away than the barely-directional antennas typically used below 1 GHz (wavelength ~1 foot). At 30 GHz we’re looking at ~1 cm wavelength and corresponding antenna sizes, so pencil beams are easy to form. At such frequencies it doesn’t make sense to think about a roughly circular footprint around an antenna anymore.
  • Skywave propagation, when radio waves are reflected back towards the earth from the ionosphere, can lead to distant stations being received as clearly as local ones; the range can be thousands of kilometers, depending on the nature of the reflection. Most long-distance shortwave radio communication (3–30 MHz) is the result of skywave propagation, and as dominant radio services have moved higher in frequency, it’s usually dismissed as an obsolete problem. However, atmospheric ducting is affecting 5G networks operating at 2300 MHz, and it’s being addressed in Release 16 of the 3GPP spec.
  • Aggregate interference could be categorized as a non-local issue. It arises when the locality assumption, that only the closest transmitter is going to cause problems, no longer holds. As soon as aggregate interference is a viable scenario, local intuitions fail.
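Here’s a back-of-envelope sketch of why the locality assumption breaks down. Assuming plain free-space path loss and isotropic antennas (a gross simplification; real propagation models are far messier), a hundred transmitters ten times farther away deliver the same aggregate power as the single nearest one:

```python
import math

def rx_power_dbm(tx_dbm, dist_m, freq_hz=2.4e9):
    """Received power under free-space path loss (Friis, isotropic antennas)."""
    c = 3e8  # speed of light, m/s
    fspl_db = (20 * math.log10(dist_m) + 20 * math.log10(freq_hz)
               + 20 * math.log10(4 * math.pi / c))
    return tx_dbm - fspl_db

def aggregate_dbm(levels_dbm):
    """Sum received powers in the linear (mW) domain, then convert back to dBm."""
    total_mw = sum(10 ** (p / 10) for p in levels_dbm)
    return 10 * math.log10(total_mw)

nearest = rx_power_dbm(30, 100)            # one close transmitter, 100 m away
distant = [rx_power_dbm(30, 1000)] * 100   # a hundred transmitters at 1 km
print(round(nearest, 1), round(aggregate_dbm(distant), 1))  # -50.0 -50.0
```

Each distant transmitter is 20 dB weaker (10x the distance under a square law), but a hundred of them add that 20 dB right back. Once aggregate interference is in play, the “nearest transmitter” intuition fails.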

3. Continuous/intermittent (aka Local/non-local in time)

Spectrum regulation used to assume that transmitted (and received) energy levels didn’t fluctuate very much. This started to change with digital waveforms, when peak-to-average (power) ratio aka PA(P)R became a thing in 47 CFR. (See e.g., 47 CFR § 27.50 (a)(1)(i)(B).)
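For the curious, here’s a rough sketch of what PA(P)R measures. The “multicarrier” waveform below is a crude stand-in (64 summed sines with random phases); real OFDM modulates complex symbols onto subcarriers, so treat the numbers as illustrative only:

```python
import math
import random

def papr_db(samples):
    """Peak-to-average power ratio of a sampled waveform, in dB."""
    powers = [s * s for s in samples]
    return 10 * math.log10(max(powers) / (sum(powers) / len(powers)))

n = 1024
# A single constant-envelope sine: PAPR comes out near 3 dB.
sine = [math.sin(2 * math.pi * 5 * t / n) for t in range(n)]

# Crude multicarrier stand-in: 64 sines with random phases summed together,
# the kind of peaky waveform that pushed PA(P)R into the rules.
random.seed(0)
phases = [random.uniform(0, 2 * math.pi) for _ in range(64)]
multi = [sum(math.sin(2 * math.pi * (k + 1) * t / n + phases[k]) for k in range(64))
         for t in range(n)]

# The multicarrier figure is typically several dB higher than the sine's ~3 dB.
print(round(papr_db(sine), 1), round(papr_db(multi), 1))
```

The occasional high peaks are what matter for interference: average-power limits alone don’t capture them, which is why the rules had to start talking about peak-to-average ratios.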

New technology is accelerating it: Massive MIMO in mid-band frequencies delivers very localized blobs of energy to a moving receiver. This means that interference varies not only in space but also in time, both as the receiver moves and as traffic fluctuates.

There are long-standing enforcement issues around finding intermittent interferers (cf. the 2016 FCC TAC “Study to Develop the Next Generation Systems Architecture for Radio Spectrum Interference Resolution”). This is going to become even more challenging as device mobility increases and antenna technology advances. Add the intermodulation effects of ever-wider signal bandwidths, and we’re talking about phenomena that are non-local in frequency, space, and time.

4. 2D/3D

We had a whole Silicon Flatirons conference about 3D Wireless, so I won’t say much more here. Until recently it’s been a decent approximation to say that terrestrial services operate in a 2-dimensional, roughly horizontal plane, and GEO satellite service along a 1-dimensional, roughly vertical line. Even this situation yielded tricky coexistence situations (solved with blunt instruments, e.g. exclusion zones around MetSat downlink earth stations to keep out AWS-3 terrestrial cellular).

However, more satellite systems operating at various non-GEO altitudes in the same or nearby frequency bands are likely to increasingly cause interference management issues. We’ve seen it in the debates about aggregate U-NII unlicensed interference into satellite uplinks, and now with C-band/altimeters.

5. FCC/other agencies

Spectrum regulators (in the U.S., the FCC and the NTIA) find it hard to accept that any other department or agency could have a credible opinion on topics where the FCC-NTIA is the expert agency. (Hell hath no fury like an expert scorned.) The FCC-NTIA community doesn’t seem to think any other agency has the power, let alone the legitimacy, to act in spectrum matters. As a result, it has been repeatedly blindsided: DoD – GPS; DoT – ITS; FAA – altimeters; and I’m sure there are other cases I don’t know about.

Dale alerted me to a couple of lovely papers by Charles Lindblom about “the science of muddling through.” This passage from the 1959 paper of that name seems relevant, and makes a case for pluralism:

Suppose that each value neglected by one policy-making agency were a major concern of at least one other agency. In that case, a helpful division of labor would be achieved, and no agency need find its task beyond its capacities. The shortcomings of such a system would be that one agency might destroy a value either before another agency could be activated to safeguard it or in spite of another agency's efforts. But the possibility that important values may be lost is present in any form of organization, even where agencies attempt to comprehend in planning more than is humanly possible.

The virtue of such a hypothetical division of labor is that every important interest or value has its watchdog. And these watchdogs can protect the interests in their jurisdiction in two quite different ways: first, by redressing damages done by other agencies; and, second, by anticipating and heading off injury before it occurs.

Be that as it may: the FCC-NTIA regime likes to think it’s the only dog on the block and is blindsided when another one starts barking.

6. Commodity/unique

Here's another example of knowing and forgetting. Back in the day (before my day), it seems that spectrum zoning was well understood and widely practiced. Like services were grouped with like (in frequency). 

The notion of “spectrum as a commodity” then seemed to render that approach obsolete. The use of an economic term here is probably no accident—cf. the multi-paper debate Dale Hatfield & Phil Weiser had with Tom Hazlett in the George Mason Law Review in 2008. Economics is simpler with commodities, so: <sarcasm> “Let’s just pretend radio allocations are commodities. Sure, there are vagaries and externalities, but the market will be able to price them in.” </sarcasm> As cellular allocations spread across frequencies, the idea took hold that one could treat all frequency bands more or less the same: <sarcasm> “Auction everything as EAFUS (exclusively assigned, flexible use spectrum)!” </sarcasm> This led to many of the current tussles between cellular systems and incompatible neighbors. It’s perhaps also the cause of the demise of guard bands: <sarcasm> “If all bands are the same, what’s there to guard against?” </sarcasm>

The rise of exclusive-use license auctions in the 1990s (and arguably also the rise of spread spectrum unlicensed) resulted in the spectrum-as-commodity idea becoming dominant. In the past it was assumed that every allocation had to be planned for a specific use, and like services were grouped with like in band regions. Nowadays, spectrum zoning isn’t much talked about.

One might argue that the emphasis on receiver performance is an attempt to keep uniqueness locked in the attic. <sarcasm> “If receivers have super-good filters, we don’t have to worry about what’s next door, right?” </sarcasm>

7. Scarcity/surplus

Our entire spectrum policy machinery is built around the presumption of scarcity: from auctions that generate revenue because companies feel they have to buy assets, to companies encouraging auctions so that they can buy licenses that exclude market entrants, to the perennial incumbent/entrant fights about the introduction of new services.

Which raises the question: What happens when spectrum isn’t scarce? In a way we’ve already got one answer: The Benkler/Werbach arguments from ~20 years ago. It turned out that spread spectrum commons didn’t alleviate scarcity any better than licensing, so hopes of a supply side solution to scarcity were dashed. 

However, the answer could also come from the demand side: What happens when cellcos can meet all the reasonable capacity demand from people on the move? Of course, betting that demand for tech will plateau is usually a mug’s game, but I think we’re seeing signs of this in the slow 5G roll-out. Customers aren’t clamoring for all the goodies in the 5G package, and 6G hype sounds like whistling past the graveyard.

The Trickster

Another perspective is the blurring of binaries, rather than suppressing one of them. 

Every categorization is arbitrary. When boundaries between formerly clear binaries become blurred, there’s usually disquiet and disruption because the existing structure is being questioned and eroded. The mythological figure of Trickster represents such destructuring: he’s transgressive, unreliable, crafty and stupid, destructive and creative. (Cf. Lévi-Strauss on the ways myths mediate oppositions; George P. Hansen, The Trickster and the Paranormal.)

Therefore, another place to look for surprises and contestation is where technologies or services emerge that don’t fit neatly into current categories. (This is of course true for all matters of law, not just spectrum policy.) I immediately think of two relatively recent examples: LTE-U (which straddled the licensed/unlicensed divide), and LightSquared’s ATC waiver (which mixed up terrestrial and satellite services).

I think industries—through their institutions, if nothing else—resist such blurring. For example, about a decade ago (?) there was talk about allowing cellular systems to operate in a more distributed fashion (I forget the buzzwords) but I haven’t heard much about it recently. 

Now, there are cases where blurring seems to have happened without any fuss, such as FirstNet, which combined commercial and non-commercial uses on one platform. The satellite industry has also been working hard to get into the 3GPP tent, and that would blur/merge terrestrial and satellite services.

The problem with this kind of taxonomy game, of course, is that it’s fun in hindsight but not very useful in prospect unless one can predict where the blurring will be. Future work…


[Footnote]

The interference-limited vs. noise-limited binary doesn’t really fit in the “one visible, the other hidden” classification since both are visible, but perhaps the noise-limited systems became less visible with the rise of the cellular (and hence interference-limited) use case.

Pulling back a bit, a wider frame is how the cellular industry has colonized FCC thinking over the last 25 years. (And why it’s done so; follow the money, I suppose.) Which leads me to wonder: If we’re now in the cellular era, what were the previous ones? I think of broadcasting (radio and TV) as dominant until cellular took over; broadcasting technology shaped received wisdom through most of the 20th century in the same way that cellular does today. It was a noise-limited system with non-locality in both frequency (taboos) and geography (night-time propagation in VHF). I’m not sure what the era before broadcasting was; perhaps comms again (wireless telegraphy)?

