Sunday, September 25, 2016

3D spectrum management

It is time to manage spectrum in three dimensions, rather than on a slightly wrinkled 2D sheet.

Traditional spectrum policy is mostly a two-dimensional problem. It talks about license areas (not volumes) and protection contours (not bubbles). The key determinant in managing interference between radios is the horizontal distance between them.

It’s not quite flatland, of course. Many signal propagation models (key to interference analysis) include the heights of transmitters and receivers as well as intervening obstacles, but typically horizontal distances are much, much larger than vertical ones. Satellite systems are an exception, but even here one could call it essentially 2 ½ dimensional: satellites up high, strung out like pearls along the geostationary orbit, beaming down 2D patches of coverage onto the earth. And the propagation is essentially a line: geostationary satellites orbit at about 36,000 km, several times higher than continents are wide.

This two-dimensionality started breaking down with the dense urban deployment of cellular systems. The vertical position of handsets in tall buildings relative to base stations started to matter, not only for positioning and pointing antenna systems appropriately but even for the requirements for locating emergency calls: the FCC’s E911 requirements now call for vertical location information for calls originating in multi-story buildings.

New plans for constellations of low earth orbiting satellites (the last generation mostly failed: remember Teledesic?), high altitude broadband platforms like Facebook’s Aquila and Google’s Loon, and the prospect of the mass deployment of drones and clouds of cubesats are leading to spectrum management becoming a 3D world. Radios will not just be on the earth’s surface and in geostationary orbit (and a few airplanes), but at a whole range of altitudes between zero and 36,000 km.

We need to start thinking seriously about a 3D Wireless World. If that’s the vision, the three major tasks are:
  • Build: the design and engineering challenge of creating the devices and systems to realize the vision; and the policy challenge of creating a new regulatory framework, encompassing aviation and communications agencies
  • Manage: the measurements, operations, and government regulations to ensure the effective and beneficial operation of the components and systems
  • Use: all the applications that use the 3D wireless infrastructure, from earth science and weather forecasting to navigation to delivering broadband services in new ways, to name a few.

The FCC has thought about 3D spectrum management in a piecemeal way over the years:
  • Coexistence between “horizontal” and “vertical” services, e.g. Northpoint’s (failed) broadband distribution service in the satellite TV broadcast band; unlicensed Wi-Fi networks and Globalstar satellite uplinks in the 5 GHz band; and cellular point-to-point links and satellite earth stations in the millimeter wave bands
  • Allowing terrestrial operation in bands planned for satellite services, e.g. adding terrestrial transmitters for audio broadcast (SiriusXM) or cellular service (LightSquared)
  • And most recently, work at the ITU on command-and-control links for drones

The time has come to think holistically about the 3D Wireless World we’re busily building.

Thursday, July 28, 2016

Fitting square pegs into bicycles

To this non-lawyer, jurisprudence often seems to be metaphor mongering/mangling/wrangling -- as in Judge Easterbrook's contention that there was no more a “law of cyberspace” than there was a “Law of the Horse” (“Cyberspace and the Law of the Horse” (1996); see also Larry Lessig's “The Law of the Horse: What Cyberlaw Might Teach”).

From a recent CS Monitor story comes the latest in this inexhaustible genre: "Is bitcoin money? Are Airbnbs hotels? Why courts have trouble deciding."

A couple of excerpts:

A Florida circuit-court judge’s Monday ruling that bitcoin is not money is the latest addition to a confusing jumble of definitions and regulations that have attempted to classify and control digital currency. . . . Since Florida doesn’t have a law specific to digital, or virtual, currency, applying laws that regulate money-service businesses to bitcoin transactions “is like fitting a square peg in a round hole," Pooler wrote.
... 
... jamming pegs into ill-suited holes is just what some law enforcement, tax and other regulators across the globe are trying to do as they struggle to apply traditional laws to technology innovations that defy them.

David Bach and Jonathan Sallet wrote a fascinating article about this a decade ago in FirstMonday: "The challenges of classification" (2005).

Perhaps Sallet will revisit this topic when he leaves the FCC. Do we need a new way to extend the applicability of law, given the breakneck introduction of new concepts? For example, perhaps one should focus on the use of a technology, not the technology as such. Privacy regulation is trying this, as the community experiments with moving from notice & consent to use & disclosure. Here's another excerpt from the CS Monitor story, quoting Marco A. Santori, a partner specializing in digital currency at a NYC law firm:

"Bitcoin is not a currency, it’s not a commodity, it’s a computer protocol. . . . [If] the computer protocol is being used as money, it should be regulated as money. If it is being used as a security, it should be regulated as a security. If it is being used as a commodity or derivative or a dessert topping, it should be regulated as such."

Friday, April 15, 2016

Hypnosis, placebo and meditation

In its recent package on The Power of Mind (issue no 3064, 12 March 2016), New Scientist includes an interview with Laurence Sugarman at Rochester.

Sugarman uses hypnosis in clinical settings. He says, “My colleagues and I propose that hypnosis is simply a skill set for influencing people. It involves facial expression, language, body movement, tone of voice, intensity, metaphor, understanding how people interpret and represent things.”

He observes that hypnosis is a medium for delivering placebo effects, and defines placebo as “the use of conditioning, expectation, social relationships and narrative paradigm to change a person’s physiology in a way that they attribute to an external intervention."

Interestingly, he believes that mindfulness meditation is an example of hypnosis. This prompted me to think of a story SN Goenka tells during the Day Three discourse of the ten-day vipassana course, about a doctor giving a prescription for medicine to a sick man. It’s used to explain three kinds of wisdom:
"The man goes home, and out of great faith in his doctor, he recites the prescription every day; this is suta-maya panna [wisdom acquired by hearing or reading the words of another]. Not satisfied with that, the man returns to the doctor, and demands and receives an explanation of the prescription, why it is necessary and how it will work; this is cinta-maya panna [intellectual understanding]. Finally the man takes the medicine; only then is his disease eradicated. The benefit comes only from the third step, the bhavana-maya panna [the wisdom that develops within oneself, at the experiential level]."
Mr Goenka takes a hard line: only the medicine itself has any effect. Neither having faith in the doctor and the treatment, nor understanding rationally how the medicine works, has any benefit.

The emerging consensus on placebo seems to contradict this, at least as far as medical treatment goes. It suggests that the benefit does not only come from the story's third step, the actual taking of the medicine. Having faith in the doctor, and understanding how the medicine works, also helps.

For example, here are some excerpts from the article “Tap the placebo effect to unlock your body's healing powers” in the same New Scientist package:
"We now know that when a person is given a pill they’re told is a real medication, or any of a wide range of medical interventions, including surgery, their body creates a real physiological effect. In pain studies, placebos have been shown to dampen activity in the brain’s pain-processing areas and increase the production of the body’s own analgesic chemicals."
“One key to unlocking the body’s self-healing mechanisms seems to be the setting up of an expectation of improvement. And it works the other way too: if you think your drug has been replaced with a placebo, even a strong painkiller’s effects will be dulled.”
On why the “honest placebo”, i.e. telling patients ahead of time that their pills contain no medication, actually works: “One theory concerns the expectations set by the intervention itself. “It’s not just the drug, it’s everything that surrounds the drug,” says Kaptchuk [a placebo researcher]. Placebos are not inert substances: they are made of verbal suggestion, classical conditioning, and a lifetime’s associations learned about the cues of the medical ritual: the white coat, the office, the doctor’s manner. Any and all of these may cue the body’s healing powers.”

This suggests that faith in the effectiveness of a meditation technique, whether it’s blind faith or based on reason, is likely to strengthen the beneficial effects. Faith obviously helps in remaining dedicated and motivated; however, the effect may go deeper than that. It also implies that rituals, which are often decried (not least by Mr Goenka himself), have value that goes far beyond their superficial appearance.

Monday, August 31, 2015

FCC approves robotic lawn mower, rejects worst case analysis

On August 12, 2015 the FCC granted a waiver (pdf) of some Part 15 rules to allow iRobot to market a robotic lawn mower transmitting in the 6240-6740 MHz range (Order in Proceeding 15-30).

The National Radio Astronomy Observatory (NRAO) had expressed concern that the waiver could lead to interference to radio astronomy operations.

The Office of Engineering and Technology noted in its waiver grant that "because the NRAO analysis looked at line-of-sight separation distances, it has greatly overestimated the interference potential of transmitters that are located less than two feet above the ground."

It added, "We find that when taking into account the variability in propagation characteristics due to terrain, low antenna heights and other propagation factors, grant of this waiver is very unlikely to increase the potential for harmful interference."

The glass-half-full reading is that the FCC rejected a worst-case analysis; the glass-half-empty view is that it missed an opportunity to do a probabilistic risk analysis that quantified expressions like "greatly overestimated the interference potential" and "very unlikely to increase the potential for harmful interference."
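
For what it's worth, here is a minimal sketch of what putting a number on "very unlikely" might look like. Every value in it (transmit power, distance, threshold, the propagation assumptions) is invented for illustration; it is emphatically not the analysis from this proceeding, just a reminder that a probabilistic statement is computable in principle:

```python
# Hypothetical Monte Carlo sketch: probability that a low-mounted transmitter
# exceeds an interference threshold at a distant receiver, compared with the
# free-space (pure line-of-sight) worst case. All parameter values are
# invented for illustration; they are not figures from the FCC/NRAO record.
import numpy as np

rng = np.random.default_rng(0)

TX_POWER_DBM = 14.0      # assumed mower EIRP
FREQ_MHZ = 6500.0        # mid-band frequency
DIST_KM = 10.0           # assumed separation from the observatory
THRESHOLD_DBM = -120.0   # assumed harmful-interference level at the receiver
N_TRIALS = 100_000

def free_space_loss_db(d_km, f_mhz):
    """Free-space path loss: the line-of-sight worst case."""
    return 32.45 + 20 * np.log10(d_km) + 20 * np.log10(f_mhz)

def terrestrial_loss_db(d_km, f_mhz, n):
    """Log-distance model with path loss exponent 4 (near-ground clutter)
    plus 8 dB log-normal shadowing -- a stand-in for a real model."""
    median = 32.45 + 40 * np.log10(d_km) + 20 * np.log10(f_mhz)
    return median + rng.normal(0.0, 8.0, size=n)

worst_case_rx = TX_POWER_DBM - free_space_loss_db(DIST_KM, FREQ_MHZ)
rx_dbm = TX_POWER_DBM - terrestrial_loss_db(DIST_KM, FREQ_MHZ, N_TRIALS)
p_exceed = np.mean(rx_dbm > THRESHOLD_DBM)

print(f"Worst-case (free space) received power: {worst_case_rx:.1f} dBm")
print(f"Worst case exceeds threshold: {worst_case_rx > THRESHOLD_DBM}")
print(f"P(received power > threshold) over {N_TRIALS:,} trials: {p_exceed:.2%}")
```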

The cynic's take is that this is to be expected; providing hard numbers would expose the Commission to having its reasoning questioned during subsequent litigation.

Worst case analysis of interference to medical telemetry

In its second order on reconsideration regarding the incentive auction, released on June 19, 2015 (docket 12-268, pdf), the FCC noted that its analysis of interference into wireless medical telemetry systems "is a worst case analysis and in most installations one or more of the parameters we assumed here will provide additional protection" (recon order at para 119).

Even this wasn't good enough for GE Healthcare, which filed a petition on July 28, 2015 asking the FCC to reconsider its reconsideration, saying "Due to the severe and wide-ranging negative consequences of interference to Channel 37 WMTS, the Commission's expressed intent to use a worst-case (i.e. minimum coupling loss) analysis in evaluating separation between Channel 37 WMTS and 600 MHz band mobile base stations is appropriate, but its adopted separation rules are not, in fact, based on a worst-case analysis, as the Commission appears to believe."

The trouble with worst case is that there is no worst case: one can always imagine something worse. It’s not a sufficiently stable concept to be usable. This leads to oxymorons like the “realistic worst-case” that GE Healthcare refers to in its petition.

There’s even a term of art, RWCS, the Reasonable Worst Case Scenario, which has an official definition in the UK: a scenario “designed to exclude theoretically possible scenarios which have so little probability of occurring that planning for them would be likely to lead to disproportionate use of resources” (House of Commons Select Committee Report). (There’s also the term “reasonably foreseeable worst case use scenarios”, used in passing in IEC 60601.)

This is related to the fact that the maximum of a sample drawn from an unbounded distribution grows without limit as the sample gets bigger. It’s well known among statisticians, though apparently not among many spectrum folk, that the longer you sample a parameter with an unbounded distribution (e.g. a propagation path loss with a log-normal fading distribution), the larger the maximum you will find.
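
A few lines of simulation make the point; the only assumption is zero-mean log-normal shadowing with an 8 dB standard deviation, a common textbook value:

```python
# The longer you sample an unbounded distribution, the larger the maximum
# you observe. Here the sampled parameter is log-normal shadow fading,
# i.e. a zero-mean Gaussian in dB with an assumed 8 dB standard deviation.
import numpy as np

rng = np.random.default_rng(1)
SIGMA_DB = 8.0

for n in (10, 100, 1_000, 10_000, 100_000, 1_000_000):
    fades = rng.normal(0.0, SIGMA_DB, size=n)
    print(f"n = {n:>9,}   worst observed fade = {fades.max():5.1f} dB")
```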

Friday, July 31, 2015

Q&A: Risk-assessment, harm claim thresholds and adjudication

In my testimony before the Senate Commerce Committee on Wednesday July 29, 2015 I recommended three spectrum management reforms. A summary and links to the written testimony and video are in an earlier blog post. This post offers some Q&A.

The three reforms were: (1) moving away from worst case interference analysis and using risk-informed methods that consider not only the consequences but also the likelihood of harmful interference; (2) providing more clarity about operators’ interference rights and obligations by specifying harm claim thresholds; and (3) giving any spectrum user the option of taking action directly against any other, either in front of an FCC judge or in a federal Court of Spectrum Claims.
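
To make the second reform a little more concrete, here is a toy sketch of how a harm claim threshold check might work: a claim of harmful interference is entertained only if the interfering signal exceeds a stated level at more than a stated fraction of locations and times. The numbers and the simulated measurements are placeholders I made up, not values from any actual or proposed rule:

```python
# Toy harm claim threshold check. A claim is permitted only if measured
# interference exceeds the stated level at more than the allowed fraction
# of locations/times. All values below are invented for illustration.
import numpy as np

THRESHOLD_DBM = -95.0    # assumed interfering-signal level
MAX_EXCEEDANCE = 0.10    # assumed allowed fraction of locations/times

def claim_permitted(measured_dbm: np.ndarray) -> bool:
    """True if the measurements cross the harm claim threshold."""
    exceedance = np.mean(measured_dbm > THRESHOLD_DBM)
    print(f"exceedance = {exceedance:.1%} (limit {MAX_EXCEEDANCE:.0%})")
    return exceedance > MAX_EXCEEDANCE

# Example: simulated interference measurements at many locations.
rng = np.random.default_rng(2)
measurements = rng.normal(-100.0, 6.0, size=5_000)   # dBm
print("harm claim permitted:", claim_permitted(measurements))
```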

Wednesday, July 29, 2015

Senate Testimony: Risk-assessment, harm claim thresholds and adjudication

I testified today before the Senate Commerce Committee hearing on “Wireless Broadband and the Future of Spectrum Policy.” My written testimony is here; this is the summary I presented during the hearing. I’ve posted some Q&A in a subsequent post. My remarks are recorded in the archived webcast, starting at 58:02; see also a question from Chairman Thune and my reply starting at 2:05:43.

Thursday, February 12, 2015

Risk-informed interference assessment

I've spent the last year or so thinking about ways to complement worst-case interference assessment with a more comprehensive approach that considers many potential interference hazards, not just a possibly implausible nightmare scenario. I have concluded that quantitative risk analysis, used in many regulated industries, is a suitable tool.
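
As a cartoon of the difference, the sketch below compares a worst-case metric with a likelihood-weighted one across a handful of interference hazard modes. The hazard list and all the numbers are invented; the point is only that considering likelihood as well as consequence changes what looks important:

```python
# Invented hazard modes: (name, annual likelihood, consequence score 0-100).
hazard_modes = [
    ("co-channel coupling into the main beam", 0.001, 90),
    ("adjacent-channel interference, typical siting", 0.20, 10),
    ("receiver overload in a rare geometry", 0.01, 60),
]

# Worst-case thinking keys off the largest consequence, regardless of odds;
# a risk-informed metric weights each consequence by its likelihood.
worst_case = max(consequence for _, _, consequence in hazard_modes)
expected_harm = sum(p * consequence for _, p, consequence in hazard_modes)

print(f"worst-case consequence   : {worst_case}")
print(f"likelihood-weighted harm : {expected_harm:.2f}")
```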

Sunday, December 28, 2014

Six weeks of spectrum auction tweets

I created an animated GIF to show how twitter traffic about spectrum auctions changed over the first six weeks of the AWS-3 auction, i.e. November 15 to December 27.



Saturday, September 06, 2014

Beyond the main stage: Teasing apart twitter traffic about net neutrality


For this installment of the NodeXL Gallery Glimpse, I'm teasing apart the members of the social graph around the net neutrality issue.


Sunday, August 31, 2014

5G on Twitter: NodeXL social network analysis

A NodeXL SNApshot is a great way to catch up with who's saying what about a topic on Twitter. This post discusses the SNApshot http://bit.ly/snapshot-26748 that graphs the 1,963 tweets containing the hashtag #5G posted over the period 17 Jul - 29 Aug 2014.
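
NodeXL does the real work here, but for readers who want to see the underlying structure, here is a rough Python/networkx sketch of the same kind of graph: a directed edge from each tweet's author to the accounts it mentions. The tweet records are invented placeholders, not the #5G dataset:

```python
# Rough sketch of the graph behind a Twitter SNApshot: a directed edge from
# a tweet's author to each account it mentions. Tweets here are placeholders.
import networkx as nx

tweets = [
    {"author": "alice", "mentions": ["bob", "carol"]},
    {"author": "bob",   "mentions": ["carol"]},
    {"author": "dave",  "mentions": []},
]

g = nx.DiGraph()
for t in tweets:
    g.add_node(t["author"])
    for target in t["mentions"]:
        g.add_edge(t["author"], target)

# Crude stand-ins for the metrics a SNApshot reports.
print("vertices:", g.number_of_nodes(), "edges:", g.number_of_edges())
print("most-mentioned accounts:",
      sorted(g.in_degree(), key=lambda kv: -kv[1])[:3])
```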

Here's the Gallery Glimpse video:




Wednesday, June 04, 2014

Adjudication versus Enforcement

Mike Marcus (web site) has suggested that enforcement problems can be divided into two categories:
#1. Cases where behavior explicitly violates existing rules, e.g. use of the wrong frequency, or equipment that doesn't comply with rules.
#2. Unanticipated interactions between systems that either lead to service degradation but do not self-evidently violate any rules, or raise complex legal issues of whether there is a violation.
Mike suggests that the second category includes "cellular booster" interference to cellular systems, police radar detector "fuzzbuster" interference to VSATs, the Nextel/public safety intermod problem in 800 MHz, and impairment of 700 MHz cellular due to FM transmitter harmonics (discussed on Mike’s blog).
The fact that the spectrum community informally refers to both categories as enforcement problems while the second is actually a question of adjudication highlights a problem caused by the FCC’s rudimentary judicial function: while it has more than 250 people in the Enforcement Bureau (2014 Budget), it only has one (!) administrative law judge.

It seems to me that (1) being clear about the enforcement/adjudication distinction and (2) actually having an adjudication function separate from both rule making (the legislative function) and enforcement (the executive function) would not only help us think more clearly about spectrum problems but would also lead to quicker resolution, to everyone's benefit.

Discussion

As an administrative agency (caveat: IANAL) the FCC combines the three branches of government under one roof: legislative, judicial and executive. It makes rules (legislative), decides whether they have been broken (judicial), and acts to detect alleged violations and punish them if violations are found (executive).

Mike’s Category #1 (explicit violations of existing rules) is enforcement, defined by the OED as “the act of compelling observance of or compliance with a law, rule, or obligation”: it presupposes that adjudication has already taken place. The examples in Category #2 (unanticipated interactions) are actually questions of adjudication, i.e. “A formal judgment on a disputed matter” per the OED: they're difficult precisely because it's not clear whether there's been a violation, or by whom.

The FCC is very loosey-goosey on this distinction, as has been pointed out over the years; see e.g. Ellen Goodman’s 2004 Telecosm paper, Phil Weiser’s 2009 FCC Reform paper and our recent Hamilton Project paper.

Distinguishing clearly between these two categories could also address a blind spot about the need for enforcement in the Dynamic Spectrum Access (DSA) community. If enforcement is addressed at all by advocates of Spectrum Access Systems (SAS), it’s usually waved away with assurances that the rules in the database will solve all problems. (Jerry Park’s presentation at the January 2014 FCC 3.5 GHz SAS workshop is an exception, but even he focuses on attacks on the database, rather than on how to decide disputes.)

Mike's distinction made me realize that the DSA/SAS community probably equates enforcement with Category #1. It's then plausible to believe that a system that prevents explicit rules violations solves, or more accurately obviates, "enforcement problems." However, the arcane interactions between radio systems in the wild, and the difficulty of assigning responsibility for them, make it important to highlight the Category #2 problems: these unintended issues are not only more likely to cause problems – and cause them unexpectedly – than failures in rule sets, but by their nature they will require judgment (in both a legal sense, and in the sense of requiring assessment of hard-to-compute complexities) to resolve.
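
To make the two categories concrete: a Category #1 check is mechanical, the kind of thing a database can do, as in the toy sketch below (the band limits and requests are invented, and a real SAS is far more involved). Category #2 disputes, by contrast, arise between systems that would all pass such a check.

```python
# Toy Category #1 compliance check: test a transmission request against
# explicit rules in a database. Rules and requests are invented.
RULES = {
    # (low MHz, high MHz): maximum permitted EIRP in dBm (hypothetical)
    (3550, 3650): 30.0,
    (3650, 3700): 47.0,
}

def request_complies(freq_mhz: float, eirp_dbm: float) -> bool:
    """Does the request violate any explicit rule?"""
    for (lo, hi), max_eirp in RULES.items():
        if lo <= freq_mhz < hi:
            return eirp_dbm <= max_eirp
    return False  # outside any authorized band

print(request_complies(3600.0, 24.0))   # True: in band, under the cap
print(request_complies(3600.0, 40.0))   # False: explicit rule violation
# A Category #2 problem -- say, intermodulation between two fully compliant
# systems -- never shows up in a check like this; it needs adjudication.
```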

Sunday, March 02, 2014

RF Mirror Worlds: Supercomputing meets propagation models, 3D terrain data and ubiquitous sensors

Petri Mähönen has observed that wireless researchers haven’t exploited supercomputing as much as one might expect, especially in comparison with other scientific disciplines such as aerospace, meteorology, oceanography, biology, sociology... If they had, we could be exploring thousands or millions of “Test Cities” rather than just the one contemplated in the PCAST Report (pdf, Chapter 6 and Appendix G). The PCAST budget for the first three years of a Test City (Table G.1) runs to $21 million in operating expenses and $15 million in capital expenses – that would buy a lot of computation!
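
A hedged back-of-envelope suggests how much. Assuming a notional commodity price of five cents per core-hour (my number, not anything in the PCAST report), the three-year Test City budget buys on the order of hundreds of millions of core-hours:

```python
# Back-of-envelope: what the PCAST Test City budget ($21M opex + $15M capex
# over three years) would buy in commodity computing, assuming a notional
# price of $0.05 per core-hour. The price is an assumption, not a quote.
BUDGET_USD = 21e6 + 15e6
PRICE_PER_CORE_HOUR = 0.05

core_hours = BUDGET_USD / PRICE_PER_CORE_HOUR
core_years = core_hours / (24 * 365)
print(f"{core_hours:.1e} core-hours, i.e. roughly {core_years:,.0f} core-years")
```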

I suspect (hope!) we’re on the verge of a step change in using software simulation, aka “mirror worlds”, to understand and manage radio systems. The underlying technology has been on the exponential growth curve we’ve all heard about, but hasn’t broken through to high profile visibility. It may soon.

Saturday, February 22, 2014

DoD treats Spectrum as Territory

The U.S. Department of Defense released a spectrum strategy document on Thursday (press release, pdf). I’ll leave discerning what (if anything) is actually new in it to the Pentagon watchers.

I was struck by the implications of the language used: the DoD conceives of spectrum as a place. Given that military success often seems to be framed as controlling or denying territory, this is not an auspicious starting point for spectrum sharing – which is about wireless system coexistence in many intangible dimensions, rather than all-or-nothing control of territory.

Wednesday, October 16, 2013

Unlicensed’s success: physics, not regulation?

Unlicensed allocations have generated a massive, and to many surprising, amount of innovation and value (see the References below). The question is: Why?

Almost all of the value so far has come in the 2.4 GHz ISM band, mostly due to Wi-Fi but also, to a lesser extent, to Bluetooth. There is never a single, simple answer to a Why question about a complicated nexus of technology, politics and user behavior, but my impression is that unlicensed partisans believe it's due pretty much exclusively to the techno-economic characteristics enabled by the rights assignment regime: “openness” (Benkler), “managed commons” (Milgrom, Levin & Eilat), or “rule-based access” (Thanki).

I think it's at least plausible that Wi-Fi's undoubted success has been due to a fortuitous coincidence of band choice, physics and timing as much as to regulation: It turned out that the interference range was small enough that users didn’t really degrade each other’s performance; and the networking needs of their applications could be met by the bandwidth available around them. In other words: the capacity of the channel was larger than the number of people who interfered with each other, multiplied by the data they wanted to move.
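
To put rough numbers on that inequality (all three of them assumed, chosen to be in the right ballpark for early Wi-Fi deployments rather than measured):

```python
# Back-of-envelope for the inequality above:
#   channel capacity  >  (users within interference range) x (demand per user)
# All three numbers are assumptions chosen for illustration, very roughly
# early-Wi-Fi-era values, not measurements.
CHANNEL_CAPACITY_MBPS = 20.0    # usable throughput of one 2.4 GHz channel
USERS_IN_RANGE = 10             # neighbours close enough to interfere
DEMAND_PER_USER_MBPS = 0.5      # typical offered load per user at the time

aggregate_demand = USERS_IN_RANGE * DEMAND_PER_USER_MBPS
print(f"aggregate demand {aggregate_demand:.1f} Mb/s"
      f" vs capacity {CHANNEL_CAPACITY_MBPS:.1f} Mb/s")
print("capacity exceeds demand:", CHANNEL_CAPACITY_MBPS > aggregate_demand)
```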