Since radios serve people, the cellular industry’s “bandwidth per head of population” metric (aka MHz.POP) is an improvement over vanilla MHz. However, population grows and shrinks, so for a given nation percentage of population served is a better divisor than raw population; let’s say “MHz.perc” (pronounced megahertz-perk). Thus, a 100 MHz band allocated to exclusive federal use would be 100 MHz.perc; if federal use is subsequently limited to exclusion zones that cover 60% of the population, with the rest allocated to non-federal use, the 100 MHz.perc is divided between 60 MHz.perc to the feds and 40 MHz.perc to the rest. Note that adding up the sub-allocations yields the original number.
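The bookkeeping is simple enough to sketch in a few lines of Python (the function name `mhz_perc` is mine, not an established term):

```python
def mhz_perc(bandwidth_mhz: float, pop_fraction: float) -> float:
    """MHz.perc: bandwidth weighted by the fraction of the population served."""
    return bandwidth_mhz * pop_fraction

# 100 MHz band, initially exclusive federal use: 100 MHz.perc.
# Federal use then limited to exclusion zones covering 60% of the population;
# the remaining 40% goes to non-federal use.
fed = mhz_perc(100, 0.6)      # 60.0 MHz.perc
non_fed = mhz_perc(100, 0.4)  # 40.0 MHz.perc
assert fed + non_fed == 100   # sub-allocations add back up to the original
```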

Next up: duty cycle. If (say) a 40 MHz communications band is allocated to federal use 10% of the time over 80% of the country, the feds have 3.2 MHz.perc² (40 × 0.1 × 0.8) and the non-feds the remaining 36.8 MHz.perc² (pronounced megahertz-perk-squared).
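Extending the sketch with a duty-cycle factor (again, the function name is my own shorthand):

```python
def mhz_perc2(bandwidth_mhz: float, pop_fraction: float,
              time_fraction: float) -> float:
    """MHz.perc^2: bandwidth weighted by population fraction and duty cycle."""
    return bandwidth_mhz * pop_fraction * time_fraction

# 40 MHz band, federal use 10% of the time over 80% of the country
fed = mhz_perc2(40, 0.8, 0.1)  # 3.2 MHz.perc^2 for the feds
non_fed = 40 - fed             # 36.8 MHz.perc^2 for everyone else
```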

One can keep adding other factors, but we’re getting into diminishing returns… And the main thing that’s missing in this proposal is a snappy moniker.

But here are some more considerations anyway: 10 MHz of bandwidth at 100 MHz is worth a lot more than 10 MHz at 10,000 MHz (aka 10 GHz), so I like to divide the bandwidth by the center frequency or, equivalently, take a logarithm. To take an example from the chart in Mike’s blog, I doubt DOCOMO thinks its 40 MHz at 2 GHz is worth twice as much as the 20 MHz at 700 MHz...

(I first saw this done by Ofcom in its 2005 Spectrum Framework Review (pdf Fig. 1.1). An equivalent method that doesn’t require picking a center frequency is to take the log of the bandwidth. To see the effect of this kind of frequency normalization, see e.g. my post The extent of FCC/NTIA frequency sharing.)
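To make the DOCOMO comparison concrete, a back-of-the-envelope normalization (bandwidths from the post; the arithmetic and interpretation are mine):

```python
# Bandwidth divided by band center frequency (both in MHz):
block_2ghz = 40 / 2000   # 0.02
block_700mhz = 20 / 700  # ~0.0286

# The "smaller" 20 MHz block at 700 MHz scores higher than the 40 MHz
# block at 2 GHz once we normalize by frequency.
assert block_700mhz > block_2ghz
```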

Dividing by center frequency has the added benefit of yielding a dimensionless quantity. So if the metric bandshare.perc² is defined as (bandwidth × % population × % time used)/(band center frequency), then we can compare our 40 MHz band at two frequencies, say 400 MHz and 4 GHz, keeping all the other assumptions the same (federal use 10% of the time over 80% of the country):

- 40 MHz at 400 MHz: 0.008 units for the feds = (40 × 0.1 × 0.8)/400, 0.092 for the remaining users
- 40 MHz at 4 GHz: 0.0008 units for the feds, 0.0092 for the remaining users
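The two bullets work out as follows (a sketch; `bandshare` is just my name for the metric defined above):

```python
def bandshare(bandwidth_mhz: float, pop_fraction: float,
              time_fraction: float, center_mhz: float) -> float:
    """bandshare.perc^2: (bandwidth * %pop * %time) / band center frequency."""
    return bandwidth_mhz * pop_fraction * time_fraction / center_mhz

# Federal use 10% of the time over 80% of the country, 40 MHz band
fed_400 = bandshare(40, 0.8, 0.1, 400)        # ≈ 0.008
non_fed_400 = (40 - 40 * 0.8 * 0.1) / 400     # ≈ 0.092
fed_4000 = bandshare(40, 0.8, 0.1, 4000)      # ≈ 0.0008
non_fed_4000 = (40 - 40 * 0.8 * 0.1) / 4000   # ≈ 0.0092
```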

To get rid of the decimal point, one might want to multiply everything by a million. Doesn't change the meaning, but might be more digestible...

The other consideration is transmit power or resulting field strength. Using the metric defined so far and ignoring transmit power, UWB would score very high, but in fact its impact (and current value) is negligible because the allowed transmit power is so low. To capture that, one might multiply the metric by the max transmit power of the use (or the ratio of the max transmit power of the particular use to the max transmit power across all services in the band). It would be an interesting and useful exercise to compute the respective spectrum metrics for TV broadcast and white space devices using this approach.
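One way to sketch that power weighting (purely illustrative: the power ratio below is a hypothetical placeholder, not a real UWB or broadcast figure):

```python
def power_weighted(metric: float, use_power_w: float,
                   max_band_power_w: float) -> float:
    """Scale a spectrum metric by the use's share of the band's max transmit power."""
    return metric * (use_power_w / max_band_power_w)

# Hypothetical: a use allowed only 1/1000th of the band's max power
# sees its metric scaled down by three orders of magnitude.
weighted = power_weighted(0.008, 0.001, 1.0)
```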

Oh, and of course none of this counts “spectrum use precluded”, a consideration that Preston Marshall has been highlighting (cf. the PCAST Report pdf, Appendix B.1, and Preston Marshall, Scalability, Density, and Decision Making in Cognitive Wireless Networks, section 4.8).

## 4 comments:

The flaw in this model is that the value of spectrum is not linearly or logarithmically related to frequency. Factors that can affect it include hardware availability, channel width and distribution (paired channels which are separated by the right amount can actually be more valuable than a contiguous block), FCC power limits (which may be dictated by neighboring assignments), purpose (higher frequencies are MORE valuable for point-to-point links), etc. It's naive to posit a simple function.

Brett is correct that many factors influence spectrum value. However, by counting spectrum using just bandwidth as we do now, we have already posited a simple function: that value is proportional to bandwidth, regardless of frequency.

Since simplification is unavoidable, the question for me is which is less inaccurate when we’re doing very rough and ready comparisons across a wide variety of bands and services: counting just bandwidth, or bandwidth normalized by frequency.

Value may or may not be proportional to bandwidth, depending (again) upon other factors. For example, for SCADA, once you have enough bandwidth to do your telemetry, more has no more value. And a wider channel with stricter OOBE (out-of-band emission) limits or lower power limits may be more constrained than a narrower channel without them. As in other things in life, it is not just size that matters.
