More on this topic in subsequent posts:
Receiver protection limits: Two Analogies (June 2011)
Protection Limits are not "Interference Temperature Redux" (June 2011)
The LightSquared Mess Shouldn’t Count Against Coase (June 2011)
Licensing radio receivers as a way to facilitate negotiation about interference (August 2011)
Incremental management of reception: When protection limits are not sufficient (February 2012)
Four Concerns about Interference Limits (May 2012)
Transmitter versus receiver specifications: measuring loudness versus understanding (July 2012)
Testimony: Harm Claim Thresholds (November 2012)
Receiver Interference Tolerance: The Tent Analogy (November 2012)
I have also written a two-page summary document, see http://sdrv.ms/ReceiverLimits.
The LightSquared vs. GPS bun fight is a good example of this “two to tango” situation. GPS receivers – some more so than others – are designed to receive energy way outside the allocated GPS bands, which means that transmissions in the adjacent band from a new service like LightSquared can cause satellite location services to fail. Without the LightSquared transmissions, there wouldn’t be a problem; but likewise, if GPS receivers were designed with the appropriate filters, they could reject the adjacent LightSquared transmissions while continuing to receive the satellite location signal and function normally. [1]
While the responsibility for interference is, in theory, shared between transmitters and receivers, radio regulation has traditionally placed the onus on a new transmitter to fix any problems that may arise. [2] As I will argue, receiver standards are an impractical response; limits on reception protection, formulated in terms of the RF environment rather than equipment performance, are preferable.
Receiver Standards - an impractical response
Transmitters are thus at the mercy of cheap and nasty receivers. This has led to repeated calls for receiver standards, i.e. government mandated performance specifications. However, for the same reasons outlined above for water pollution and noise control, I feel strongly that receiver standards are the wrong way to go: they require the regulator to understand receiver performance in detail, they lock in a particular service scenario since they refer to particular receiver architectures, and they unnecessarily limit the freedom of receiver operators to respond to interference in ways that the regulator did not anticipate. [3]
Specifying receiver performance parameters to prevent cross-channel interference is complicated, since there are so many of them, and different types of receivers are characterized in different ways. The complexity of standards for receiver performance is on display in NTIA Report 03-404, Receiver Spectrum Standards: Phase 1 – Summary of Research into Existing Standards. This document summarizes US federal agency, US industry association, and international standards. The parameters used to specify receiver standards in the NTIA Manual vary from service to service, and include adjacent channel rejection (different values for analog and digital), EMC tolerance, frequency stability, image rejection, intermodulation rejection, receiver interference suppression circuitry, selectivity, and spurious rejection (see Table 1). The Department of Agriculture’s specification for VHF High-Band receivers is a 12-row by 6-column table of parameter values (see Table 5).
In the only case I’m aware of where the FCC has provided receiver specifications, 47 CFR § 15.118 regarding cable TV receivers, the rule provides very detailed requirements and measurement methods for adjacent channel interference, image channel interference, direct pickup interference, tuner overload and cable input conducted emissions.
Even if receiver standards were future-proof and practical in terms of regulatory engineering, experience shows that they are impractical in political or regulatory terms. For example, the notice of inquiry into “Interference Immunity Performance Specifications for Radio Receivers,” launched in March 2003 (ET Docket No. 03-65), met extensive opposition from industry and was formally terminated by the Commission in 2007 without any action being taken.
A better way: Receiver Protection Limits
Fortunately, I believe there is an alternative approach: reception protection limits, part of the “Three P” approach (Probabilistic reception Protections and transmission Permissions - see e.g. the earlier post How I Learned to Stop Worrying and Love Interference, or the full paper on SSRN). A license would state the degree of receiver protection that it affords; it is then up to the licensee to decide whether to pay extra for better receivers, or opt for cheaper ones with poorer interference rejection. A license might state that energy from other allocations would not exceed a field strength X inside the licensed frequency range (aka “in-band”), and would not exceed a field strength Y outside that range (“out-of-band”), for some large percentage of times and locations, e.g. 90% of locations, 90% of the time. Any energy up to this amount would be deemed not to be grounds for a harmful interference complaint. [4]
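To make the mechanics concrete, here is a minimal sketch in Python of how compliance with such a limit might be checked. The function name, the sample data, and the reading of “90% of locations, 90% of the time” as a flat 10% allowance are all illustrative assumptions on my part, not features of any actual rule.

```python
# Hypothetical sketch: checking measurements against a protection limit.
# All names and numbers are invented for illustration.

def exceeds_protection_limit(samples_dbuv_per_m_mhz, limit_dbuv_per_m_mhz,
                             allowed_fraction=0.10):
    """Return True if more than `allowed_fraction` of the location/time
    samples exceed the stated field-strength ceiling.

    Under a "90% of locations, 90% of the time" limit, up to 10% of
    samples may exceed the ceiling without supporting a harmful
    interference complaint.
    """
    over = sum(1 for s in samples_dbuv_per_m_mhz if s > limit_dbuv_per_m_mhz)
    return over / len(samples_dbuv_per_m_mhz) > allowed_fraction

# Example: out-of-band energy measured at 100 location/time points,
# 8% of which exceed the ceiling -- still within the allowance.
measurements = [95.0] * 92 + [120.0] * 8
print(exceeds_protection_limit(measurements, 108.0))   # False
```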
The number of parameters to be specified in the protection limit approach is small compared to the multifarious possibilities for receiver standards; and more importantly, the parameters are independent of the technology used in the receiver. The method for managing receiver performance, i.e. the variables to be used, is therefore the same for every band, though the parameter values may differ from band to band. With receiver standards, experience shows that each instance uses a different set of variables.
Summary
To sum up, specifying receiver performance standards is complicated, technology-bound and unlikely to survive the political gauntlet. It’s simpler and more future-proof to declare the ceiling of energy in the adjacent band that a licensee’s receivers (will) have to deal with, and then let it make the commercial calculation. Reception is protected, but only up to a point. If receivers are lousy but the regulator wants to force them to get better over time (cf. GPS), it would start with a lower out-of-band protection number (i.e. adjacent signals would be forced to be weaker) and then push it up in steps over (say) five-year increments.
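As a toy illustration of that ratchet, a regulator might publish a schedule like the one below; every year and every dB value here is invented, chosen only to show the shape of the idea.

```python
# Hypothetical ratchet schedule: start with a low out-of-band ceiling and
# raise it 3 dB every five years, forcing receivers to improve over time.
start_year, start_limit_dbuv = 2015, 96
schedule = [(start_year + 5 * i, start_limit_dbuv + 3 * i) for i in range(5)]
print(schedule)   # [(2015, 96), (2020, 99), (2025, 102), (2030, 105), (2035, 108)]
```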
NOTES
[1] Some background in the blogs (for some reason, all the GPS/LightSquared briefers seem to be insanely long…): Pro GPS 1 (Andrew Seybold), Pro LightSquared (Mike Marcus), Pro GPS 2 (GPS World), Pro LightSquared 2 (Harold Feld)
[2] Even when operating within the terms of its license, a transmitter may not cause “harmful interference” (a term that is nowhere satisfactorily defined cf. Mitch Lazarus, Mike Marcus) to an existing service. 47 CFR § 2.102 (f) provides a blanket requirement that licensees shall “use frequencies so separated from the limits of a band allocated to that service as not to cause harmful interference to allocated services in immediately adjoining frequency bands.” The presumption is that newly entering transmitters cause interference to incumbent receivers. 47 USC § 302a authorizes the Commission to make rules to govern the interference potential of any transmitter (a)(1), but only receivers in the home (a)(2). Avoiding harmful interference is the (misguided, in my opinion) guiding idea: 47 USC § 303 para (f) authorizes it to “[m]ake such regulations not inconsistent with law as it may deem necessary to prevent interference between stations”; para (y)(2)(C) gives it “authority to allocate electromagnetic spectrum so as to provide flexibility of use, if . . . (2) the Commission finds, after notice and an opportunity for public comment, that . . . (C) such use would not result in harmful interference among users.”
[3] For example, the choice of whether to deal with out-of-band interference in a radio’s front end vs. at baseband (e.g. improving the LNA vs. increasing the effective number of bits, ENOB, required in the ADC) is a dynamic, case-by-case trade-off; receiver standards would lock in the performance required of the front end, and not give the design engineer the option of improving the ADC if that were cheaper.
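For a rough sense of the ADC side of that trade-off, the standard ideal-converter rule of thumb (dynamic range ≈ 6.02·ENOB + 1.76 dB) tells you how many extra effective bits it takes to absorb a blocker digitally rather than filter it in the front end. The 60 dB blocker margin below is an invented example:

```python
from math import ceil

# Ideal-ADC rule of thumb: dynamic range ~= 6.02 * ENOB + 1.76 dB.
# Suppose an adjacent-band blocker sits 60 dB above the desired signal
# (an invented figure). Filtering it in the front end removes it before
# sampling; digitizing it and removing it at baseband instead requires
# roughly 60 dB of extra converter headroom, i.e. ~10 more effective bits.
blocker_margin_db = 60
extra_enob = blocker_margin_db / 6.02
print(f"~{ceil(extra_enob)} additional effective bits")   # ~10
```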
[4] Sample Parameter Values
As an example of possible parameter values, the regulator might choose an in-band ceiling of X = 57 dBuV/(m.MHz), peak and average, which is the 47 CFR § 15.209 per-transmitter out-of-band emission limit for intentional radiators with 3 dB added to allow for the case where there happen to be two close-in transmitters. Alternatively, it could be based on the ceiling on the resulting field strength outside a cellular license area, e.g. 47 dBuV/m in the 2 GHz bands (§ 27.55) plus a 3 dB fudge factor for multiple signals: let’s say 50 dBuV/(m.MHz).
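A quick check of that arithmetic, assuming the 500 µV/m at 3 meters that § 15.209 specifies above 960 MHz, and treating the extra 3 dB as two equal-power signals adding:

```python
from math import log10

# Part 15.209 limit above 960 MHz: 500 uV/m at 3 m.
# Field strength in dBuV/m is 20*log10(field strength in uV/m).
limit_dbuv = 20 * log10(500)          # ~54.0 dBuV/m
two_transmitters_db = 10 * log10(2)   # ~3 dB for two equal-power signals
print(round(limit_dbuv + two_transmitters_db))   # 57
```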
The out-of-band limit would be higher, e.g. Y = 108 dBuV/(m.MHz) peak and average, which is the field strength 3 meters from a 13 dBm/MHz transmitter like a CDMA cellphone: 13 dBm/MHz = 20 dBm (100 mW) spread over 5 MHz, i.e. less 7 dB (10 log10 of 5 MHz/1 MHz).
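The 108 dB figure can be reproduced with the free-space relation E = √(30·P)/d, treating the phone’s 13 dBm/MHz as EIRP at a distance of 3 meters:

```python
from math import log10, sqrt

p_dbm_per_mhz = 20 - 10 * log10(5)          # 100 mW over 5 MHz ~= 13 dBm/MHz
p_w_per_mhz = 10 ** (p_dbm_per_mhz / 10) / 1000
e_v_per_m = sqrt(30 * p_w_per_mhz) / 3      # free-space field at d = 3 m
print(round(20 * log10(e_v_per_m * 1e6)))   # 108 dBuV/(m.MHz)
```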
These definitions assume that peak and average power (in time) are essentially the same, which is a valid assumption for continuous broadband operations like digital cellular. A specification might give different limits on peak and average power to accommodate intermittent transmissions, since an occasional transmission of very high power could have a low average field strength but high interference potential.
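A one-line example of why the distinction matters: a burst transmitter with a 1% duty cycle averages 20 dB below its peak, so an average-only limit would understate its interference potential.

```python
from math import log10

duty_cycle = 0.01                 # transmitter on 1% of the time
print(10 * log10(duty_cycle))     # -20.0: average sits 20 dB below peak
```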
Note also that these definitions are given in terms of flux density (i.e. field strength per unit bandwidth, here per megahertz) rather than total power in the band – again, a reasonable approach for broadband services. Since out-of-band interference is usually the result of total delivered power, regardless of whether it is smoothed out or peaked in frequency, it may be sufficient to specify an aggregate out-of-band field strength limit, e.g. Y’ = 115 dBuV/m over the 5 MHz nearest the band edge.
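The aggregate figure is just Y plus the bandwidth factor:

```python
from math import log10

# Per-MHz limit of 108 dBuV/(m.MHz) aggregated over 5 MHz:
print(round(108 + 10 * log10(5)))   # 115 dBuV/m over the 5 MHz at band edge
```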