Saturday, August 25, 2018

Resulting field strength rules: New reasons for an old idea

Current transmit power limits don’t provide sufficient constraints on interference, particularly when applied to modern systems (such as in the millimeter-wave bands) that deliver signal levels that change dramatically and rapidly from moment to moment, and place to place. I believe that limits on resulting field strength, rather than transmitted power, will be necessary in new allocations, particularly in the millimeter-wave bands.

Background


The purpose of transmit power rules is to define and limit the maximum interfering energy delivered to affected (aka “victim”) systems in adjacent frequency channels and geographic areas. That’s the engineering perspective; in terms of economics, the purpose is to manage externalities of radio operation that can’t be resolved by the market.

Rules are typically defined in terms of Equivalent Isotropic Radiated Power (EIRP), though there are increasing calls for the limits to be stated in terms of other quantities, like Total Radiated Power (TRP). What’s common to all of them is that they’re defined at the transmitter.

However, seen from an affected system’s point of view, this only provides usable information if the resulting field strength pattern, and thus the interference at receivers, doesn’t change much. This means that rules-at-the-transmitter don’t place an effective upfront limit on the risk of harmful interference from dynamic transmitters. They don’t give an affected system enough information to specify equipment and plan its deployment effectively, since one can’t predict the resulting interference at receivers given just the transmit power.
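
To see why, consider the standard free-space relation between EIRP and field strength. A minimal sketch in Python (the EIRP figure is an assumption for illustration): the calculation is trivial if the EIRP toward the receiver is known and constant, but with an adaptive antenna that value can be anywhere between the licensed maximum and essentially nothing, and can change every few milliseconds.

    # Free-space field strength at distance d from an emitter of known EIRP.
    # Standard relation: E [dBuV/m] = EIRP [dBW] + 74.8 - 20*log10(d [km]).
    # The EIRP value below is an illustrative assumption.
    import math

    def field_strength_dbuv_m(eirp_dbw, d_km):
        """Free-space field strength (dBuV/m) at d_km from an emitter of eirp_dbw."""
        return eirp_dbw + 74.8 - 20 * math.log10(d_km)

    eirp_dbw = 17.0  # assumed 50 W EIRP toward the receiver
    for d_km in (0.1, 1.0, 10.0):
        print(f"{d_km:5.1f} km: {field_strength_dbuv_m(eirp_dbw, d_km):6.1f} dBuV/m")

The unknown here is the EIRP toward the receiver: a rule stated at the transmitter fixes the peak over all directions, not the value delivered in any particular one.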

A variety of new and emerging technologies are increasing the directionality and agility of antennas. They include MIMO, which uses multiple transmit and receive antennas to multiply the capacity of a radio link; electronically steered antennas (using both phased arrays and metamaterials); and coordinated multipoint (CoMP), which dynamically coordinates transmissions from multiple cellular base stations to serve one handset. Since antenna size is proportional to wavelength, millimeter-wave antennas with a large number of elements (and thus high directionality) can be very compact.
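
A back-of-the-envelope sketch of that scaling, assuming a square array with half-wavelength element spacing (the 8×8 element count is an arbitrary illustration):

    # Rough side length of a square antenna array with half-wavelength spacing.
    # The 8x8 (64-element) geometry is an illustrative assumption.
    C = 299_792_458.0  # speed of light, m/s

    for f_ghz in (0.9, 3.5, 28.0):
        wavelength_cm = C / (f_ghz * 1e9) * 100
        side_cm = 8 * wavelength_cm / 2  # 8 elements per side, lambda/2 apart
        print(f"{f_ghz:5.1f} GHz: lambda = {wavelength_cm:5.2f} cm, "
              f"8x8 array side = {side_cm:6.1f} cm")

The same 64-element array that would be over a meter across at 900 MHz fits in a few centimeters at 28 GHz.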

Transmitter power rules have worked well since their inception, not least because only modest antenna gain was reasonably achievable below 1 GHz. Even so, there are cases where legal EIRP transmissions caused difficulties for receivers due to localized hot zones, e.g. Nextel transmissions at 800 MHz interfering with public safety radios, and T-Mobile transmissions causing intermodulation interference in SiriusXM receivers.

Things are going to get worse. Dramatic increases in operating frequency (and thus reductions in wavelength), together with advances in technology, mean that modestly sized and reasonably priced multi-element antennas (sometimes many of them, working cooperatively) can now shape delivered energy to within a few degrees in azimuth and elevation, and can change it on millisecond timescales. Consequently, the resulting field strength at a particular receiver can vary dramatically, even though the EIRP remains constant.
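
To illustrate (a toy model, not a claim about any real deployment): steer a 64-element uniform linear array to random directions while watching one fixed bearing. The EIRP, which is the peak over all directions, is identical for every pointing; only the receiver's share of it changes.

    # Toy illustration: level at one fixed bearing while a 64-element uniform
    # linear array (half-wavelength spacing) steers its beam at random.
    # EIRP is the same for every steering angle; the geometry is assumed.
    import numpy as np

    N = 64                       # array elements
    rx_angle = np.deg2rad(20.0)  # fixed receiver bearing off broadside

    def array_factor_db(steer, theta, n=N):
        """Normalized power pattern (dB) of an n-element ULA toward theta."""
        psi = np.pi * (np.sin(theta) - np.sin(steer))
        af = abs(np.sum(np.exp(1j * psi * np.arange(n)))) / n
        return 20 * np.log10(max(af, 1e-6))

    rng = np.random.default_rng(0)
    steers = rng.uniform(-np.pi / 3, np.pi / 3, size=10_000)
    levels = np.array([array_factor_db(s, rx_angle) for s in steers])
    for q in (1, 50, 99):
        print(f"{q:2d}th percentile of level at the receiver: "
              f"{np.percentile(levels, q):6.1f} dB relative to beam peak")

Even this crude model swings the delivered level by tens of dB from one scheduling instant to the next, with the transmitter never exceeding its EIRP limit.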

Recommendation


I believe that resulting signal strength rules defined anywhere a receiver could be (i.e., statistically over an area) can be more effective in managing interference and facilitating inter-operator negotiations than power limits defined at transmitters. This is true regardless of frequency, since interference happens at the receiver, not the transmitter. However, resulting signal strength rules are particularly relevant now given impending mass deployment in millimeter-wave bands, where short wavelengths make dynamic transmissions feasible and severe path loss makes them necessary.

The details of a resulting field strength rule will vary from case to case. It’s emphatically not one-size-fits-all. The rules would have to take into account the characteristics of services that are to be protected. For example, a safety-of-life service might require a very high protection probability; and a communication service’s duty cycle and symbol/frame structure would influence the time window over which a percentage-of-time limit is defined (e.g., is a field strength not-to-be-exceeded more than 1% of the time to be measured over a month, a minute or a millisecond?).
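
As an illustration of the protection-probability point: under the common lognormal shadowing model (the 8 dB standard deviation here is an assumption), the planning margin grows rapidly with the required probability. A minimal sketch:

    # Margin needed below the median signal level for a protection probability,
    # assuming lognormal shadowing with an (assumed) 8 dB standard deviation.
    from statistics import NormalDist

    SIGMA_DB = 8.0
    for protection_prob in (0.90, 0.99, 0.999, 0.99999):
        margin_db = NormalDist().inv_cdf(protection_prob) * SIGMA_DB
        print(f"P(protected) = {protection_prob:7.5f}: margin = {margin_db:5.1f} dB")

Moving from 90% to 99.999% protection costs roughly 24 dB in this toy model, which is why the protected service's characteristics, not a generic template, have to drive the numbers.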

Since licensees have vested interests in the terms of their existing assignments, I do not suggest changing existing transmit power limits (e.g. on EIRP) to resulting signal strength rules. However, the new approach can and should be applied in new or revised allocations, particularly in millimeter-wave bands.

Since markets are usually more agile than regulators, there should be an option for rules – including ones like those I'm suggesting – to be superseded by voluntary agreement among all affected parties. That can be achieved with language like the "shall not exceed the value specified unless the adjacent affected service area licensee(s) agree(s) to a different field strength" in § 27.55.

Precedents


The FCC already uses resulting signal strength (field strength, pfd, sometimes power) rules in a variety of settings, so this is not a novel idea. Consider these U.S. examples:

  • DTV service contours, based on a field strength not to be exceeded at more than some percentage of locations on the contour, for some percentage of the time (e.g. F(50,90) means 50% of locations, 90% of the time), cf. § 73.616;

  • GEO satellite protection: equivalent power flux densities (epfd, convertible to field strength; see the conversion sketch after this list) not to exceed various dBW/m² values for various percentages of time (e.g. ITU RR Article 22, Table 22-1A etc.);

  • Field strength limits at the edge of cellular license areas, e.g. § 24.236 and § 27.55;

  • The Northpoint proceeding specified an epfd limit for each of four US regions, and then calculated a ceiling on the increase in BSS time unavailability (FCC 02-116, § 25.208);

  • 3.5 GHz reception limits: PALs must accept adjacent channel and in-band blocking interference up to a given power spectral density level, not to exceed -40 dBm/10 MHz in any direction with greater than 99% probability (§ 96.41(f)).
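
As an aside on units: pfd and field strength, which appear interchangeably in the list above, are linked by the impedance of free space (about 377 ohms), so converting between them is a fixed dB offset. A minimal sketch:

    # Field strength to power flux density: S = E^2 / eta0, eta0 ~= 376.7 ohm.
    # Equivalently: pfd [dBW/m^2] = E [dBuV/m] - 145.8.
    import math

    def dbuv_m_to_dbw_m2(e_dbuv_m):
        """Convert field strength (dBuV/m) to power flux density (dBW/m^2)."""
        e_v_m = 10 ** (e_dbuv_m / 20) * 1e-6         # back to volts per metre
        return 10 * math.log10(e_v_m ** 2 / 376.73)  # S = E^2 / eta0

    print(dbuv_m_to_dbw_m2(145.8))  # ~0 dBW/m^2, confirming the offset
    print(dbuv_m_to_dbw_m2(66.0))   # a 66 dBuV/m contour -> ~ -79.8 dBW/m^2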

There are also precedents in other countries, notably the UK. Ofcom's SURs (Spectrum Usage Rights) provide a case study of resulting signal strength rules; see e.g. paragraphs 1.12 to 1.16, defining SURs as the "aggregate in-band PFD at a height H [m] above ground level should not exceed X dBW/m²/MHz at more than Z% of locations" for various purposes.
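
Stated as a computation, an SUR-style rule is a percentile test over sampled or predicted locations. A minimal sketch, with the limit, the Z%, and the data all invented for illustration:

    # SUR-style check: "aggregate pfd must not exceed X dBW/m^2/MHz at more
    # than Z% of locations". With sampled data this is a percentile test.
    # The limit, Z, and the synthetic data are all illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(1)
    pfd_samples = rng.normal(-110, 8, size=5_000)  # fake pfd grid, dBW/m^2/MHz

    LIMIT_DBW = -100.0  # assumed X
    Z_PERCENT = 10.0    # assumed Z: at most 10% of locations may exceed X

    exceed_pct = np.mean(pfd_samples > LIMIT_DBW) * 100
    print(f"{exceed_pct:.1f}% of locations exceed {LIMIT_DBW} dBW/m^2/MHz -> "
          f"{'compliant' if exceed_pct <= Z_PERCENT else 'non-compliant'}")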

Ofcom SURs are limits on transmission. Building on this approach, the TAC Spectrum and Receiver Working Group proposed harm claim thresholds (aka interference limits) which are a statistical ceiling on the interfering signal levels a receiving system would have to tolerate (see the 2014 TAC working paper "Interference Limits Policy and Harm Claim Thresholds: An Introduction").

As one can see from this inventory, resulting field strength rules are usually probabilistic. This is to be expected, since resulting field strength values vary statistically over the transmitting antenna's footprint. However, dynamic beam steering and transmitter adaptivity mean that in the future delivered energy will vary significantly from moment to moment at a given location, much more so than in the services we've seen to date, where transmit power stays fairly constant and variations are due to fast fading. Statistically formulated rules will thus be even more important in the future.

Obstacles/Challenges


Dynamic beam steering emphasizes an important wrinkle: time dependency. Ofcom SURs only considered spatial statistics, and the earlier TAC work treated time dynamics in a rudimentary way. If the delivered energy varies a great deal at a given location (not really the case in the Nextel and Sirius/T-Mobile examples), one needs to add an "at more than T% of times" criterion to the statistics. The hard part is defining the time resolution: are we talking T out of every 100 microseconds, milliseconds, minutes, months …? I've struggled to figure out how this works for existing %time rules and guidelines, e.g. the non-GSO epfd tables in ITU RR Article 22, or the atmospheric outage probability tables in ITU-R P.618; the time resolution is often not defined, at least as part of the rules.
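
To make the problem concrete, here is a toy example (every parameter assumed): a bursty emitter judged against a "no more than 1% of the time" threshold passes or fails depending entirely on the averaging window.

    # The same emission judged at different time resolutions.
    # Toy trace: 1 us samples; ~2% of them carry a burst 20 dB above quiescent.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 1_000_000                        # 1 us samples -> a 1 s record
    trace = np.full(n, -100.0)           # quiescent level, dB (arbitrary ref)
    trace[rng.random(n) < 0.02] = -80.0  # ~2% of samples are 20 dB hotter

    THRESHOLD_DB = -90.0
    for window in (1, 1_000, 100_000):   # 1 us, 1 ms, 100 ms averaging
        linear = 10 ** (trace[: n - n % window] / 10)
        avg_db = 10 * np.log10(linear.reshape(-1, window).mean(axis=1))
        pct = np.mean(avg_db > THRESHOLD_DB) * 100
        print(f"window = {window:>7} us: above {THRESHOLD_DB} dB "
              f"for {pct:5.2f}% of the time")

At microsecond resolution this emitter busts a 1% limit; averaged over a millisecond it sits about 5 dB below the threshold all of the time. Until the resolution is pinned down, the rule doesn't constrain anything.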

A harder problem is resistance to new ideas. Ofcom SURs went nowhere. Here’s Martin Cave and William Webb in a paper last year: “… the momentum of spectrum user rights (SURs) (defined as a limit on the interference that can be expected from others in the same and neighbouring bands) as the primary way of controlling interference appears to have faltered.”

If a SURs/pfd approach is to get traction, cellular operators need to see it as serving their interests. Technicalities like time variability can then be sorted out by engineers and statisticians.

Next Steps


An obvious next step is for the FCC to issue a Notice of Inquiry requesting input on the use of resulting field strength rules. Here are some questions an NoI could ask:

  • At what frequencies, for which services, and/or in which bands would resulting field strength rules be most useful? Should such rules apply at geographic boundaries as well as frequency boundaries?

  • What lessons can be learned from previous uses of resulting field strength rules (cf. list above)?

  • How should resulting field strength rules be formulated and enforced? For example, over what area and time window should percentage-of-location and percentage-of-time values be measured? Are additional criteria necessary, e.g. a statistical confidence level that should be met? (See the sketch after this list.)

  • How should technology neutrality be traded off against the characteristics of the service to be protected? How should changes in the service characteristics over time be accommodated?

  • Should resulting field strength rules apply to out-of-band emissions? (Current OOBE limits are defined isotropically, just like in-channel EIRP.)
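
On the confidence-level question in the third bullet above: an exceedance percentage estimated from a finite measurement campaign carries sampling error that a rule could bound explicitly. A minimal sketch using the normal approximation to the binomial (the counts are assumed for illustration):

    # Confidence interval on a measured exceedance percentage.
    # Normal approximation to the binomial; the counts are assumed.
    import math

    n_measurements = 500
    n_exceedances = 4                       # measurements over the threshold
    p_hat = n_exceedances / n_measurements  # estimated exceedance probability

    z = 1.96  # ~95% two-sided confidence
    half_width = z * math.sqrt(p_hat * (1 - p_hat) / n_measurements)
    print(f"estimated exceedance {p_hat:.2%}, 95% CI "
          f"{max(0.0, p_hat - half_width):.2%} to {p_hat + half_width:.2%}")

With 500 samples, a measured 0.8% exceedance cannot be distinguished from a 1% limit at 95% confidence; an enforceable rule has to specify the confidence along with the percentage.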

Acknowledgement


I was inspired to revisit resulting field strength rules by a presentation Mike Marcus (http://marcus-spectrum.com/) made to the FCC 2018 TAC Antenna working group.


1 comment:

Bob@weller.org said...

There's a fundamental difference in the statistical support for DTV and cellular contours and MIMO-type transmitters. In the former, the statistical dimensions of time and location (and confidence, which is often ignored and assumed to take a median value) account for variations in propagation due to terrain, multi-path interference, and atmospheric conditions. In the latter case the statistical variation is chiefly due to the intentional variation of the transmitter (including the antenna) parameters. The location and time variability statistics are fairly well-behaved over a wide range of probabilities (say, 10%-90%) because the processes are random, but MIMO will almost certainly not be random. It's not clear to me how statistics over an area due to MIMO could be determined a priori...