"Spectrum is the equivalent of our highways," says Christopher Guttman-McCabe, vice president of regulatory affairs for CTIA-The Wireless Association, an industry trade group. "That's how we move our traffic. And the volume of that traffic is increasing so dramatically that we need more lanes. We need more highways." (Joelle Tessler, "Wireless companies want a bigger slice of airwaves", Associated Press, posted to SiliconValley.com 12/28/2009)And its also as self-serving as they come. What the cellular companies need is data capacity. There are many ways to get that that don't require new radio licenses, notably increasing the density of cell towers and improving antenna technology. But those are more expensive than new licenses, hence the claim that they need "the land".
"in this world, there is one awful thing, and that is that everyone has their reasons" --- attrib. to Jean Renoir (details in the Quotes blog.)
Monday, December 28, 2009
Spectrum as Roads
This is about as explicit as the spectrum-as-land metaphor gets:
Sunday, December 27, 2009
A music/governance metaphor
I’m still struggling to find a usable taxonomy for “new methods of governance” for the internet. A conversation with Grisha Krivchenia, a music teacher, prompted this attempt at analogy. Since my knowledge of music and its history is sketchy, any corrective comments would be gratefully received.
Let’s start with a particular musical tradition: harpsichord pieces in the High Baroque. Bach wrote the Goldberg Variations, for example, with a particular instrument and even performer (Goldberg) in mind. The performer has many options, however, regarding tempo and mood. When the same score is played on a different instrument, e.g. the piano, an additional set of choices and opportunities arises.
Same score, different instrument(s)
A score written for one instrument can be played by another one with no change; for example, one can play a flute piece on the oboe. However, figurations that were easy for the intended instrument may be hard for the new one. Some instrument changes require transpositions of notes to a new key, for example playing the flute piece on a clarinet (pitched in C and B-flat, respectively). Even if the notes are the same, the music will be different.
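To make the transposition point concrete, here is a minimal sketch (my own illustration, using MIDI note numbers; the phrase is invented): a part written for a C instrument like the flute has to be notated a major second, i.e. two semitones, higher if a B-flat clarinet is to sound the same pitches.

```python
def transpose(midi_notes, semitones):
    """Shift every note of a melody by a fixed number of semitones."""
    return [note + semitones for note in midi_notes]

# A concert-pitch phrase (C4, D4, E4, G4) as MIDI note numbers
flute_part = [60, 62, 64, 67]

# A B-flat clarinet sounds a major second lower than written, so the written
# part must be transposed up two semitones for the sounding pitches to match.
clarinet_part = transpose(flute_part, +2)
print(clarinet_part)  # [62, 64, 66, 69]
```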
As an example of the music/governance metaphor in action, consider libel. The same laws of defamation apply to web pages just as much as to a paper pamphlet; however, some additional interpretation is required from the judge when applying statute and common law developed for paper to the internet.
A slightly more extensive change comes about when music scored for one ensemble (e.g. strings) is re-arranged for another (woodwinds). Both the individual and blended characters of the instruments differ, and the character of the piece can change quite markedly. A possible analogy is the application of 911 requirements for phone access to emergency services to Voice over IP devices. The desired policy result and the requirements in law are the same, but the implementation may have to be different. For example, “911” is actually a special service code rather than an ordinary phone number, and its implementation in VoIP was debated. Further, 911 calls are delivered to a Public Safety Answering Point (PSAP) determined by the location of the caller – which may not be easy to determine for an internet device.
Once the piano exists, it enables new forms of music. First, performers can radically rethink a piece: Glenn Gould’s Goldbergs, to cite a late example. Second, composers wrote pieces for the piano in ways that were inconceivable in the age of the harpsichord: Liszt and Chopin. An analogy in regulation might be the way in which the Kodak camera prompted the overhaul (or arguably invention) of privacy law. [1] Another one might be the way in which the internet is forcing a rethinking of common carriage rules as they apply to telecommunications carriers. [2]
New compositions, same instruments
However, new approaches to composition can come about without new instruments – the shift to atonal music (i.e. lacking a central key) associated with Berg, Schoenberg and Webern supposedly arose from the “crisis of tonality” in the late nineteenth and early twentieth century. An analogy in communications policy might be the emergence of exclusive-use radio licenses allocated by auctions in wireless regulation: they were prompted by insights from economics (e.g. Coase and the privatization movement more generally) rather than by changes in technology.
New performances
A change in venue also makes a difference. The Wikipedia article on the history of the orchestra suggests that the 18th century shift from civic music making, where the composer had some degree of time and control, to smaller court music making and one-off performances placed a premium on music that was easy to learn, often with little or no rehearsal. The results were changes in musical style, from the counterpoint of the baroque period to the classical style, and an emphasis on new techniques such as notated dynamics and phrasing. I believe that the shift in the stakeholder landscape in telecoms from an insiders’ club of a few large firms and regulators to a global plethora of companies and regulators of all sizes is in the process of changing governance, but we don’t have the luxury of 200 years to discern the key developments.
Music can also change purely as a result of changes in performance practice. An unattributed assertion in Wikipedia states that “changes in performance practice made by prominent musicians often reverberated in the playing of many other musicians.” Other candidates for this phenomenon are the use of bel canto in early 20th century opera, the use of a clear declamatory vocal style in the French operatic tradition, and the dramatic increases in the minimum technical accuracy required of performers of classical music. One can see this effect in governance too, particularly where common law is used; interpretations and precedents are cumulative. An ongoing example is software patents: legal scholar Mark Lemley stated at a Silicon Flatirons conference in March 2009 that over the preceding three years the courts had fixed most of the problems that have been grist for the software patent debate. I presume there are also fashions in jurisprudence, just as there are in music – but here again my knowledge fails me…
Tentative conclusions
The analogy of music to governance is as follows:
- Composer – policy maker (legislator or regulator with quasi-legislative powers, like the FCC)
- Score – law, rule or regulation
- Instrument – technology and social context
- Performer – judge (or quasi-judicial actor, e.g. the FCC)
- Audience – interest groups, stakeholders, citizens, etc.
In terms of new kinds of music/governance, we see
- Changes of instruments (technology) that require only minor changes in the score (law)
- Changes that prompt composers (policy makers) to invent new genres (rules), either as a result of new technologies or the internal development of the genre itself
- Changes brought about by shifts in performance (judicial) practice
The performers (judges) play an important creative role; they can change the import of a score (law) by their interpretation in the context of a new instrument (technology). It may be that judges are most influential when the policy makers have not yet caught up with changes in technology – they are making music on new instruments using the old scores.
This short taxonomy focuses on the upstream part of the performance value chain. New kinds of music arise most visibly from new compositions and/or new instruments, but performance and audience play roles in disseminating and validating them. Likewise, new forms of governance need to be enacted by courts and accepted by stakeholders before taking hold; new technology and new law are only part of the picture.
Update 12/28/2009: See the comments for some great thoughts from Jon Sallet about the role of improvisation in music and governance. His conclusion: "In a world of change and uncertainty, discretion is an important tool; discretion that is applied by professionals (like trained musicians), within guidelines (like the old rule against using augmented fourths) but that calls upon the expertise of the composer and the performer both to work, as it were, in harmony."
Footnotes
[1] Robert E. Mensel, "'Kodakers Lying in Wait': Amateur Photography and the Right of Privacy in New York 1885-1915", American Quarterly, Vol. 43, No. 1 (Mar. 1991), pp. 24-45, PDF available.
[2] James V. DeLong, “Avoiding a Tech Train Wreck”, The American, May/June 2008
Saturday, December 26, 2009
A skeptic’s approach to regulation
I don’t know.
You don’t know either, even if you’re a lawyer or scholar who’s written confident diagnoses of, and persuasive curative prescriptions for, various policy problems.
If you’re a regulator, you know you don’t know.
Decision makers have always operated in a world of complexity, contradiction and confusion: you never have all the information you’d like to make a decision, and the data you do have are often inconsistent. It is not clear what is happening, and it is not clear what to do about it. What’s most striking about the last century is that policy makers seem to have been persuaded by economists that they have more control, and more insight, than they used to.
We have less control over the world than we’d like: we would like to prevent unwanted situations, but can’t; we would like favorable circumstances to continue, but they don’t.
There is a small part of the world where the will has effective control; for the rest, one has to deal with necessity, i.e. circumstances that arise whether you will or no. Science and technology since the Enlightenment have dramatically widened our scope of control; economics has piggy-backed on the success of classical physics to make large claims about its ability to explain and manage society. However, this has had the unfortunate consequence that we no longer feel comfortable accepting necessity. If a situation is avoidable – say, postponing the moment of death through a medical intervention – then it becomes tempting to think that when it comes, someone or something can be held responsible.
As Genevieve Lloyd tells it (and I understand it) in Providence Lost (2009), our culture opted to follow Descartes in his framing of free will: we should do the best we can, and leave the rest to divine Providence, which provides a comforting bound to our responsibilities. In the absence of providence, however, we have no guidance on how to deal with what lies beyond our control. As Lloyd puts it, “the fate of the Cartesian will has been to outlive the model of providence that made it emotionally viable.” She argues that Spinoza’s alternative account of free will, built on the acceptance of necessity, is better suited to our time; there is freedom in how we shape our lives in the face of necessity, and a providential deity is not required.
Our Cartesian heritage can be seen in the response to the financial collapse of recent years: someone or something had to be responsible. If only X had done Y rather than Z… but an equally plausible account is that crises and collapse are inevitable; it was only a matter of time.
I submit that the best response to an uncertain and ever-changing world is to accept it and aim at resilience rather than efficiency. Any diagnosis and prescription should always be provisional; it should be made in the knowledge that it will have to be changed. Using efficiency as the measure of a solution, as neoclassical economics might, is the mark of the neo-Cartesian mind: it assumes that we have enough knowledge of the entire system to find an optimum solution, and that we have enough control to effectuate it. In fact, an optimum probably doesn’t exist; if it does exist, it’s probably unstable; and even if a stable solution exists, we have so little control over the system that we can’t implement it.
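A toy simulation makes the resilience-versus-efficiency point tangible. The sketch below is my own illustrative model, not drawn from any source cited here; the payoff numbers and shock probability are invented. A strategy tuned for the good times can have a worse long-run average than a duller but shock-tolerant one.

```python
import random

def average_outcome(payoff_normal, payoff_shock, shock_prob=0.05, years=50, trials=10_000):
    """Mean cumulative payoff of a strategy over many simulated futures with random shocks."""
    total = 0.0
    for _ in range(trials):
        for _ in range(years):
            shocked = random.random() < shock_prob
            total += payoff_shock if shocked else payoff_normal
    return total / trials

random.seed(1)
efficient = average_outcome(payoff_normal=10.0, payoff_shock=-150.0)  # optimized for calm conditions
resilient = average_outcome(payoff_normal=8.0, payoff_shock=-5.0)     # lower peak, graceful degradation
print(efficient, resilient)  # with these numbers, the "efficient" strategy does worse on average
```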
The best conceptual framework I’ve found for analyzing problems in this way is the complex systems view, and the most helpful instantiation is the approach to managing ecosystems encapsulated in C. S. Holling’s “adaptive cycle” thinking. (See e.g. Ten Conclusions from the Resilience Project). The adaptive cycle consists of four stages: (1) exploitation of new opportunities following a disturbance; (2) conservation, the slow accumulation of capital and system richness; (3) release of accumulation through a crisis event – cf. Schumpeter’s creative destruction; and (4) reorganization, in which the groundwork for the next round is laid.
Two techniques seem to be particularly helpful in applying this approach to governance: simulation and common law. Simulation and modeling exploit the computing power we now have to explore the kinds of outcomes that may be possible given a starting point and alternative strategies; it gives one a feel for how resilient or fragile different proposed solutions may be. Simulation may also help understand outcomes; for example, Ofcom uses modeling of radio signal propagation rather than measurement to determine whether licensees in its Spectrum Usage Rights regime are guilty of harmful interference with other licensees. (See e.g. William Webb (2009), “Licensing Spectrum: A discussion of the different approaches to setting spectrum licensing terms”.)
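As an illustration of modeling rather than measurement, here is a minimal sketch of an interference check. It is my own toy example, not Ofcom's actual SUR methodology; the transmit power, distance, frequency and threshold are invented, and only the standard free-space path loss formula is used, ignoring terrain, antennas and aggregation.

```python
import math

def free_space_path_loss_db(distance_km, freq_mhz):
    """Free-space path loss in dB: 32.44 + 20*log10(d_km) + 20*log10(f_MHz)."""
    return 32.44 + 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz)

def received_power_dbm(tx_power_dbm, distance_km, freq_mhz):
    """Predicted signal level at the victim receiver, with no antenna gains or terrain losses."""
    return tx_power_dbm - free_space_path_loss_db(distance_km, freq_mhz)

def breaches_limit(tx_power_dbm, distance_km, freq_mhz, limit_dbm=-110.0):
    """Would the modeled level at the victim's location exceed a (hypothetical) permitted limit?"""
    return received_power_dbm(tx_power_dbm, distance_km, freq_mhz) > limit_dbm

# Example: a 30 dBm transmitter, 2 km away, at 800 MHz
print(breaches_limit(30.0, 2.0, 800.0))  # modeled level is about -66.5 dBm, so True
```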
A common law approach helps at the other end of the process: Jonathan Sallet has argued persuasively that common-law reasoning is advantageous because it is a good way of creating innovative public policies, and is a sensible method of adapting government oversight to changing technological and economic conditions.
But I could be wrong…
Update 12/28/2009: See the fascinating comments from Rich Thanki, below. He takes two salient lessons from complexity theory: avoid monoculture, and develop rules of thumb. He also provides more of the Keynes quote than the usual quick line about being "slaves of some defunct economist."
Thursday, December 24, 2009
Hard consequences of the soft revolution
What characteristics (if any) of 21st century communications justify a change in methods of governance?
Any change in policy has unintended consequences; some of them will be adverse. One has to think carefully before advocating radical change: the benefits of change or the costs of doing nothing should be substantial. One way of beginning a cost/benefit analysis is to understand the underlying forces.
Many arguments have been given for new internet regulation. Cowhey and Aronson (Transforming Global Information and Communication Markets 2009:17) cite three factors that will force change: the modular mixing and matching of technology building blocks; the need to span traditional policy and jurisdictional divides (aka Convergence); and the need to rely more on non-governmental institutions to coordinate and implement global policy. In my paper “Internet Governance as Forestry”, I cite three characteristics of the internet that require new responses: modularity, decentralized self-organization, and rapid change.
Let’s consider, then, the following candidates for radical, unprecedented and transformational change in the internet economy taken from these two lists: modularity, convergence, the “third sector”, decentralization, and rate of change.
Modularity
I doubt modularity will persist as a characteristic of the internet business. While it is clearly a hallmark of our current stage, it has a long history: the standardization of interchangeable parts is often dated to Eli Whitney’s process for manufacturing muskets for the US government in 1798, but there is evidence for standardization of arrowheads and uniform manufacturing techniques in the Bronze Age, and some anthropologists claim there was standardization of Stone Age tools. However, modular technology does not lead inescapably to a modular industry structure. Standard parts have not rendered pre-internet industries immune to anti-trust problems, and it is unlikely they will do so now. The role of modularity in the relationships between companies waxes and wanes, depending on rather than driving industry consolidation and market power.
Convergence
The good old convergence argument is true enough, but tired. The mixing of broadcasting, telecom and intellectual property regulation brought about by common digital formats will undoubtedly require a huge amount of creative reform of regulation, but I no longer think that the result will be the abolition of regulatory categories based on the commercial and technological status quo.
I would very much like to see such an abolition; I proposed re-organizing the FCC by policy imperatives rather than industry categories in my FCC Reform paper, but I don’t think it’s going to be practical. The human rage to classify [1] will reassert itself. Classifying by policy concern probably won’t work, sad to say, because of how regulation tends to work: take a new problem, fit it into an existing category, and apply the rules of that category. Even if this mechanism yields weird results in times of transition, it’s usually efficient and is likely to persist, even as categories change. We don’t yet have the new categories, but they may well emerge based more on how industry self-organizes than by logic. Judging by today’s behemoths, they might perhaps be networks, cloud services, devices and content (i.e. AT&T, Google/Microsoft, Apple/Dell and Hollywood) replacing broadcasting, telecom, cable and intellectual property (ABC/CBS/NBC, the old AT&T, Comcast and Hollywood).
Decentralization
The internet is no doubt much more decentralized than its forebears, e.g. the telephone network; it is by definition an affiliation of many networks, and a lot of processing is done “at the edges” rather than “in the middle”. There is a linkage between a decentralized architecture and modularity. Modularity allows decentralization, and is amplified by it. If or when either regresses to the mean, the other will tend to do so as well. Since I don’t believe that a high and increasing amount of modularity is a persistent attribute of the 21st century communications industry, I don’t believe that high and increasing decentralization is either. However, the current degree of modularity and decentralization has probably put us into a qualitatively different regime; there has been a phase change, so to speak. The polity has just begun to work through the implications, and this will take a decade or more.
The “third sector”: Non-Governmental Institutions (NGOs), non-profits and civil society
Cowhey and Aronson’s interest in NGOs is based in trade, and the organizations they have in mind (ICANN, W3C, IETF) meet the four-part definition offered by Lester Salamon, a political scientist and scholar of US non-profits at Johns Hopkins: they are organizations, i.e., they have an institutional presence and structure; they are private, i.e., they are institutionally separate from the state; they are fundamentally in control of their own affairs; and membership/support is voluntary. Salamon argues that the prominence of NGOs represents an “associational revolution”. I cannot judge whether this phenomenon is transient or not; however, the large organizations clearly provide an alternative venue for governance. For example, Cowhey and Aronson argue that the IETF’s central role in internet standards came about because the US Government decided to delegate authority to it.
If one relaxes the requirement for formal institutional structure, the rise of private, voluntary engagement in politics facilitated by Web 2.0 represents an impetus and perhaps even a venue for new governance. Currently fashionable examples include http://transparencycorps.org/, http://opengov.ideascale.com/ and http://watchdog.net/; tools that facilitate engagement include http://www.opencongress.org/, http://www.opensecrets.org/lobbyists/ and http://www.govtrack.us/. Citizens’ ability to know about the activities of their legislators, and to petition them, has never been greater; tools for organizing into ad hoc coalitions (most famously the role of http://www.meetup.com/ in the 2004 and 2008 US campaigns) lead to a ferment of groups that may grow into more recognizable institutions. Policy makers will have to invent new ways to track and mollify these groups, at the very least; the Obama Administration appears to be using them to support policy making.
While the decentralized architecture of the internet and the rise of NGOs are different phenomena with different causes, Web 2.0 technologies are beginning to draw them together.
Rate of change
As to whether the rapidity of change is transformative and permanent, I think the answer is No and Yes. The rate of technical and commercial innovation on the internet over the last two decades has been stunning. It has been abetted by modularity, and even more so by the ability of software to morph without having to retool a factory. (Retooling a code base is a non-trivial exercise, though.) However, the internet is growing up and it’s reasonable to expect that the industry and technology will settle into a phase of relative maturity. [2]
On the other hand, while the rate of change may not continue to accelerate, or even continue at its current pace, the political system has to adjust to the stresses that the increase to date has already imposed. William Scheuerman, for example, argues that the “social acceleration of time” has created a profound imbalance between the branches of government in liberal democratic systems like the US. [3] Even if the rate of techno-commercial innovation slows down, the rate at which global markets generate and propagate news will be a challenge for political systems whose time cycles are set in constitutions that change only very slowly and in human physiology, which changes hardly at all. [4]
Back to Hard Intangibles
A change in context that forces a change in governance doesn’t need to be irreversible for the consequences to be profound. Since history is cumulative, a “phase change” in policy making is a change that never really reverts to its prior form, since the context changes with it. However, some changes are more portentous than others. I’ve argued above that the modularity, convergence and decentralization of the internet are temporary, and part of the regular cyclical flow of industry structure. Changes in tempo and the rise of the third sector seem to me to be more momentous. I think both are rooted in the growing intangibility of our societies, which has been accelerated by ICT: complex software running on powerful processors linked by very fast networks.
I think there is a link back to my 2006/2007 obsession with “hard intangibles” (DeepFreeze9 thread). The ability to compose more components than the mind can manage makes programming/debugging very hard, particularly when those components are so easily mutable: it’s easier to change a line of code than to retool an assembly line. The “soft products” of these technologies, themselves complex, composable and mutable, become the inputs for culture and thus policy making: it’s easier to change web artifacts and social networks than to manage a movement using letters and sailing ships.
Footnotes
[1] I first heard the term used by Rohan Bastin, Associate Professor of Anthropology at Deakin University, in a Philosopher’s Zone interview about Claude Levi-Strauss. “The human rage to classify” is also a chapter title in F. Allan Hanson, The Trouble With Culture: How Computers Are Calming The Culture Wars, SUNY Press, 2007.
[2] This prediction contradicts Ray Kurzweil’s contention that technological change accelerates at an exponential rate, and will continue to do so: his “Law of Accelerating Returns” [link, critique]
[3] William E. Scheuerman, Liberal Democracy and the Social Acceleration of Time (2004). Scheuerman defines social acceleration of time as “a long term yet relatively recent historical process consisting of three central elements: technological acceleration (e.g. the heightening of the rate of technological innovation), the acceleration of social change (referring to accelerated patterns of basic change in the workplace, e.g.), and the acceleration of everyday life (e.g., via new means of high-speed communication or transportation).” I’m indebted to Barb Cherry for introducing me to Scheuerman’s ideas; see e.g. her “Institutional Governance for Essential Industries Under Complexity: Providing Resilience Within the Rule of Law”, CommLaw Conspectus 17.1.
[4] Human thinking won’t speed up much, if at all – though tools can make it look as if it does. See for example Edwin Hutchins’ wonderful Cognition in the Wild (1996). Hutchins contends that we need to think in terms of “socially distributed cognition” in a system that comprises people and the tools that were made for them by other people.
Monday, December 21, 2009
Objects of governance: From things to behaviors
In spite of our penchant for abstraction, we think best in concrete terms. That means we prefer to think about things rather than processes, including when it comes to communications regulation. The growing intangibility of our world is making this harder to do, however.
The legal scholar William Boyd introduced me to the concept of an “object of governance”, i.e. the explicit focus or nominal topic of regulatory activity. [1] Boyd is concerned with deforestation as an object of climate governance [2]; a quick web search throws up examples like organized crime, “The East”, the Sahel, and risk. Objects of communications regulation include personally identifiable information (PII), spectrum, phone service, and the internet.
While most of these objects are intangible, they are at least to some extent thing-like; they’re nouns. It becomes more tricky when regulation addresses behavior – that is, verbs. I’ll work through a few examples in communications regulation where the object of governance started off as a thing/noun, and is becoming a behavior/verb:
Privacy: From PII to Use
The current approach to protecting privacy on the web is rooted in the notion of data security: information exists somewhere, and needs to be protected. However, an alternative conception based on appropriate use rather than access restrictions is emerging. [3] [4] The idea is that the traditional Notice & Choice regime is complemented by a use-and-obligations model where organizations disclose the purposes to which they intend to put information, and undertake to limit themselves to those uses.
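Here is a minimal sketch of what a use-and-obligations check might look like in code. It is my own hypothetical illustration, not taken from the cited proposals; the record and the purpose names are invented. Each data item carries the purposes declared when it was collected, and every access must state a purpose that is checked against that declaration.

```python
from dataclasses import dataclass, field

class UseNotPermitted(Exception):
    pass

@dataclass
class PersonalData:
    subject: str
    value: str
    declared_purposes: set = field(default_factory=set)  # uses disclosed at collection time

def use(record: PersonalData, purpose: str) -> str:
    """Release the data only for a purpose the organization committed to up front."""
    if purpose not in record.declared_purposes:
        raise UseNotPermitted(f"{purpose!r} was not among the declared purposes")
    return record.value

phone = PersonalData("alice@example.com", "555-0100", {"billing", "fraud-prevention"})
print(use(phone, "billing"))          # permitted
try:
    use(phone, "ad-targeting")        # not declared, so the obligation blocks it
except UseNotPermitted as e:
    print("blocked:", e)
```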
Wireless regulation: From spectrum to radio operation
Radio regulation has been framed in terms of government management of a “spectrum asset” for many decades. Even though in practice the regulations concerned themselves with the operating parameters of transmitters, the idea that some underlying asset existed has been a useful fiction, particularly as the detailed technology and service choices have been increasingly privatized through auctions of general-use licenses.
However, a new generation of radio technologies has called this approach into question. “Open Spectrum” advocates have argued that dynamic wireless technologies obviate many underlying assumptions of current regulation, and prefer “commons” access over exclusive licenses. [5] Some in the RF engineering community recommend that regulation take into account dynamic adaptation at all layers in the network stack, not just at the radio layer. [6] I have argued that a static, spectrum-as-asset approach is not a given; a more dynamic radio-as-trademark interference metaphor is perfectly workable. [7]
Universal Service: From telephony to internet access
The Universal Service Fund in the US, and its equivalents in other countries, was conceived of as guaranteeing phone service to those who would not otherwise be able to afford it, particularly in rural communities. There is now a great deal of debate about extending the universal service concept to the internet. However, since internet access can come in an unlimited variety of flavors, it is unclear what the goal of the program should be. Phone service is the same everywhere; but what broadband speed is “good enough”? The regulatory debate is moving away from how to fund phone service to how to define baseline access.
Common carriage: From a neutral network to network management
The most recent of these debates concerns the 21st century equivalent of common carriage for the internet. The rallying cry of Network Neutrality had satisfyingly thing-like connotations: there was a network, and it had to have the attribute of neutrality (noun/adjective). Over time it has largely been agreed that network operators should have some discretion in managing the behavior of their network. The question has now become a behavioral one: what degree of network management (verb) is appropriate?
Implications
A shift in the objects of governance from things to behaviors suggests a shift in regulation from ex ante to ex post action, that is, from making detailed rules up-front to stating general principles and enforcing breach after the fact. In Law’s Order [8], economist David D. Friedman compares speed limits (ex ante) with reckless driving (ex post), and observes that ex post punishments are most useful when the behavior is determined by private knowledge that the regulator cannot observe.
"Ex ante punishments can be imposed only on behavior that a traffic cop can observe; so far, at least, that does not include what is going on inside my head. Ex post punishments can be imposed for outcomes that can be observed due to behavior that cannot—when what is going on inside my head results in my running a red light and colliding with another automobile."When an object of governance is thing-like, and changes in the attributes of those things are easily observed – a data breach occurs, some packets don’t cross the network – then ex ante rules are attractive. When governance concerns behavior, particularly behavior that is difficult to observe – the uses to which data is put by a company, whether a particular network management technique discriminates against a competitor – then the regulator has to fall back on ex post enforcement. The difficulties with ex post are well-known, though: from providing sufficient clarity up-front about what would constitute a breach, to the political difficulty of exacting very occasional but very large penalties from powerful players.
Footnotes
[1] Note that this is not the traditional meaning of the term, which used “object” as synonymous with “objective”, e.g. Edmund Burke: “To govern according to the sense and agreement of the interests of the people is a great and glorious object of governance. This object cannot be obtained but through the medium of popular election, and popular election is a mighty evil.”
[2] Boyd, William, “Ways of Seeing in Environmental Law: How Deforestation Became an Object of Climate Governance”, to be published in Ecology Law Quarterly
[3] Daniel J. Weitzner, Harold Abelson, Tim Berners-Lee, Joan Feigenbaum, James Hendler, Gerald J. Sussman (2007) “Information Accountability”, Computer Science and Artificial Intelligence Laboratory Technical Report, MIT-CSAIL-TR-2007-034, June 13, 2007
[4] Business Forum for Consumer Privacy, “A New Approach to Protecting Privacy in the Evolving Digital Economy: A Concept for Discussion”, March 2009
[5] Kevin Werbach (2003), "Radio Revolution: The Coming of Age of Unlicensed Wireless," New America Foundation and Public Knowledge, no date on document, dated 15 Dec 2003 on NAF site
[6] Preston Marshall (2009) “Quantifying Aspects of Cognitive Radio and Dynamic Spectrum Access Performance” (see slides 15, 16)
[7] J. Pierre de Vries (2008), "De-situating spectrum: Rethinking radio policy using non-spatial metaphors", New Frontiers in Dynamic Spectrum Access Networks (DySPAN 2008). http://ssrn.com/abstract=1241342
[8] David D. Friedman, Law's Order: What Economics Has to Do with Law and Why It Matters, Princeton University Press, 2001. See Chapter 7 for a discussion of ex ante/ex post.
Friday, December 18, 2009
Norms, mechanisms and policy imperatives
As I stumble towards a paper about changes in governance required by changing technology (part of the Silicon Flatirons New Models of Governance project) I’ve found Peter Cowhey and Jonathan Aronson’s magisterial new book on the political economy of global communications [1] very useful.
In the Summary and Conclusions, co-written with Don Abelson, they introduce four “principles” for market governance in the light of current conditions, and ten “norms” needed to implement the principles (see Appendix 1 below). They define market governance as “the mixture of formal and informal rules and the expectations about how markets should logically operate.”
When I look at their norms, I see a set of choices for the set-points of a small number of governance mechanisms:
- Subsidy (Norm 2)
- Competition policy (Norms 3, 5)
- Regulatory “touch” (Norms 1, 4, 6)
- Property rights (Norms 8, 9, 10)
For comparison, my own list of policy imperatives is:
- Public Safety. Protecting citizens is a primary responsibility of government.
- Consumer Protection. Policy makers take action when lawmakers conclude that commercial activity needs to be circumscribed in the public interest.
- Culture and Values. In order to protect and express a culture’s values, policy makers seek to limit some kinds of speech and promote others.
- Government Revenue. Money needs to be raised and redistributed by federal, state and local treasuries; this includes taxes, fees, levies, subsidies, and tax breaks.
- Economic Vitality. A healthy market produces goods and services that citizens value.
Now, these two lists are different in kind; Cowhey & Aronson’s norms and implied mechanisms are means, and my policy imperatives are ends. However, the mismatches are revealing.
My list of policy imperatives does not include subsidy, which is implied by Cowhey & Aronson’s Norm 2, “Invest in virtual common capabilities”. In the light of their work, I now realize that this is an omission; distributing government largesse is a permanent policy imperative.
The mechanisms of competition policy and property rights are means to the end of economic vitality, my fifth policy imperative. The mechanism of regulatory “touch” is a means that I address in my paper under the heading of Principles (see Appendix 2, below); as it happens, I concur with their recommendations for light touch regulation.
The difference in emphasis is perhaps most noticeable in the absence of norms/mechanisms that speak to the “soft” policy imperatives. While Cowhey & Aronson’s Norm 7 addresses media content, and thus recognizes some value in “culture and values”, my third policy imperative, it is not implementable in the way the others are; it merely recommends a balance between encouraging trade and protecting cultural values. The “public safety” imperative is completely absent. While one may argue that Imperative 2, “consumer protection”, is to be achieved through competition policy (Norms 3 and 5), Cowhey & Aronson make no explicit mention of consumers.
Footnotes
[1] Cowhey, Peter F. and Jonathan D. Aronson, Transforming Global Information and Communication Markets: The Political Economy of Innovation, MIT Press (February 15, 2009). Softcopy available at http://globalinfoandtelecom.org/book/ (look for the “Download free under Creative Commons license” link)
[2] It is telling that Cowhey and Aronson seem to equate the public interest with consumer welfare, an economic construct. For example, on p. 17 they write: “The main challenge for governance is creating appropriate new spaces for market competition that allow the most important potential for innovation to play out in a manner that enhances consumer welfare (the public interest).”
[3] De Vries, Pierre, “Internet Governance as Forestry: Deriving Policy Principles from Managed Complex Adaptive Systems”, TPRC 2008. Available at SSRN: http://ssrn.com/abstract=1229482
Appendix 1: Four guiding principles and ten norms to help implement them
(Cowhey & Aronson (2009) Table S.1, p. 265
Principles
In the Summary and Conclusions, co-written with Don Abelson, they introduce four “principles” for market governance in the light of current conditions, and ten “norms” needed to implement the principles (see Appendix 1 below). They define market governance as “the mixture of formal and informal rules and the expectations about how markets should logically operate.”
When I look at their norms, I see a set of choices for the set-points of a small number of governance mechanisms:
- Subsidy (Norm 2)
- Competition policy (Norms 3, 5)
- Regulatory “touch” (Norms 1, 4, 6)
- Property rights (Norms 8, 9, 10)
Compare these with the five policy imperatives I work with in my own paper:
- Public Safety. Protecting citizens is a primary responsibility of government.
- Consumer Protection. Policy makers take action when lawmakers conclude that commercial activity needs to be circumscribed in the public interest.
- Culture and Values. In order to protect and express a culture’s values, policy makers seek to limit some kinds of speech and promote others.
- Government Revenue. Money needs to be raised and redistributed by federal, state and local treasuries; this includes taxes, fees, levies, subsidies, and tax breaks.
- Economic Vitality. A healthy market produces goods and services that citizens value.
Now, these two lists are different in kind; Cowhey & Aronson’s norms and implied mechanisms are means, and my policy imperatives are ends. However, the mismatches are revealing.
My list of policy imperatives does not include subsidy, which is implied by Cowhey & Aronson’s Norm 2, “Invest in virtual common capabilities”. In the light of their work, I now realize that this is an omission; distributing government largesse is a permanent policy imperative.
The mechanisms of competition policy and property rights are means to the end of economic vitality, my fifth policy imperative. The mechanism of regulatory “touch” is a means that I address in my paper under the heading of Principles (see Appendix 2, below); as it happens, I concur with their recommendations for light-touch regulation.
The difference in emphasis is perhaps most noticeable in the absence of norms/mechanisms that speak to the “soft” policy imperatives. While Cowhey & Aronson’s Norm 7 addresses media content, and thus recognizes some value in “culture and values”, my third policy imperative, it is not implementable in the way the others are; it merely recommends a balance between encouraging trade and protecting cultural values. The “public safety” imperative is completely absent. And while one might argue that Imperative 2, “consumer protection”, is to be achieved through competition policy (Norms 3 and 5), Cowhey & Aronson make no explicit mention of consumers.
Footnotes
[1] Cowhey, Peter F. and Jonathan D. Aronson, Transforming Global Information and Communication Markets: The Political Economy of Innovation, MIT Press (February 15, 2009). Softcopy available at http://globalinfoandtelecom.org/book/ (look for the “Download free under Creative Commons license” link)
[2] It is telling that Cowhey and Aronson seem to equate the public interest with consumer welfare, an economic construct. For example, on p. 17 they write: “The main challenge for governance is creating appropriate new spaces for market competition that allow the most important potential for innovation to play out in a manner that enhances consumer welfare (the public interest).”
[3] De Vries, Pierre, “Internet Governance as Forestry: Deriving Policy Principles from Managed Complex Adaptive Systems”, TPRC 2008. Available at SSRN: http://ssrn.com/abstract=1229482
Appendix 1: Four guiding principles and ten norms to help implement them
(Cowhey & Aronson (2009), Table S.1, p. 265)
Principles
- Enable transactions among modular ICT building blocks.
- Facilitate interconnection of modular capabilities.
- Facilitate supply chain efficiency, reduce transaction costs.
- Reform domestically to help reorganize global governance.
Norms
- Delegate authority flexibly.
- Invest in virtual common capabilities; be competitively neutral.
- Use competition policy to reinforce competitive supply chains.
- Intervene lightly to promote broadband networks.
- Narrow and reset network competition policy. All networks must accept all traffic from other networks. Narrow scope of rules to assure network neutrality. Separate peering and interconnection for provision of VANs.
- Government should allow experiments with new applications.
- Create rules for globalization of multimedia audiovisual content services that encourage international trade and foster localism, pluralism, and diversity.
- Tip practices toward new markets for digital rights.
- Promote commercial exchanges that enhance property rights for personal data and mechanisms to do so.
- Users own their information and may freely transfer it.
Appendix 2: Four ecosystem management principles
(De Vries (2008), Table 3, p. 26)
- Flexibility: Determine ends, not means.
- Delegation: Most problems should be solved by the market and civil society.
- Big Picture: Take a broad view of the problem and solution space.
- Diversity: Multiple solutions are possible and desirable.
Thursday, December 17, 2009
Polling x Lobbying = ?
Polling and lobbying are powerful factors in government that aren’t usually covered in Civics 101. Both are huge industries, and both shape the way political decisions are made. The current wave of web technology is going to create a hybrid form that will reshape politics.
According to 2002 Census data, the marketing research & public opinion polling industry as a whole had revenues of $10.9 billion; special interests paid Washington lobbyists $3.2 billion in 2008, according to the Center for Responsive Politics. Lobbying is as old as politics, but polling is relatively new (19th century), as is its premise: the importance of mass public opinion in government and diplomacy (18th century). Lobbyists are key players in Washington DC, and there’s a revolving door that moves former federal employees into jobs as lobbyists, and pulls former hired guns into government careers or political appointments. Polling expertise is a key attribute of top political advisors, and polling is something that politicians – and administrations – do incessantly.
The social media technologies of Web 2.0 will create a lobbying/polling hybrid and a new political power center to rival traditional lobbying and polling. Efforts by government to solicit citizen opinion, like the Ideascale site soliciting input on the National Broadband Plan or the Open for Questions site run by the White House, are a way for citizens to engage in little-L lobbying. These channels invite manipulation that will amount to big-L lobbying. In the same way that astroturfing co-opted grassroots lobbying, political operatives will co-opt the forms of Web 2.0 citizen participation. Those who are adept at viral marketing will propel political memes into real-time polling tools in a way that amounts to lobbying.
The amplification of the randomly popular that is pervasive on social rating sites like digg will infuse politics, intensifying the temptations of “poll, then decide”. We’ll also likely see something akin to the hollowing out of the media industry mid-list that The Economist charted in “A world of hits”: In movies and books, both blockbusters and the long tail are doing well; the losers are titles (and retailers) in the not-quite-so-good middle ground. Similarly, blockbuster issues will be laid on for the mass public that doesn’t care about politics (shibboleths like taxes and abortion), and niche lobbying on topics like radio spectrum, prison reform, and privacy will become even more fine-grained. Citizen publics will be important in both: as armies of computer-generated extras in the first case, and as engaged semi-experts in the second. Worthy mid-ground issues like trade, education, and energy policy will get steadily shorter shrift.
One implication is that niche topics like hunger policy shouldn’t strive to move up the charts into the middle ground – they’ll just wither there. Rather, niche players should embrace their residence in the long tail and make the most of Web 2.0 phenomena, like Polling x Lobbying, that give them direct access to the appropriate sliver of the policy making elite.
Monday, December 14, 2009
Constructing spectrum – lessons from the history of economics
“Spectrum” is a powerful construct: most people assume such a thing exists, and this assumption has regulatory consequences. But how did it come into being? The stories that some social scientists tell about the construction of “the economy” by economics provide some insight.
Timothy Mitchell, for example, argues that the economy was created by economists:
The economy is a recent product of socio-technical practice, including the practice of academic economics. Previously, the term “economy” referred to ways of managing resources and exercising power. In the mid-twentieth century, it became an object of power and knowledge. Rival metrological projects brought the economy into being. [1]
In his chapter in Do Economists Make Markets? On the Performativity of Economics, Michel Callon puts it this way: “To claim that economics is performative is to argue that it does things, rather than simply describing (with greater or lesser degrees of accuracy) an external reality that is not affected by economics.” MacKenzie argues in his chapter of the same book that the Black-Scholes-Merton options pricing model did not just help traders price something that already existed; it also shaped the market: because most traders ended up using the model, prices converged to what it predicted. [2]
In the same way, economists who treat spectrum as an asset (see my post "Property rights without assets") are not simply describing an external reality; they are bringing something into being. One of the key tools in this process is metrology: for example, the gathering of GDP data brings into being “the economy” which is reified through numbers like the GDP. In the same way, the program to make an inventory of spectrum buttresses the spectrum-is-real perspective. (More on spectrum inventories in a future post.)
Implications
World views have consequences, and thus stakeholders. Those who have a stake in the existing spectrum-based regime gain from this view; questioning the validity of “spectrum” undermines the security of their rights and privileges. This applies not only to capitalists who own spectrum licenses, but also to progressives who base their claims to government supervision of radios on the public ownership of the supposed “spectrum asset”. On the other hand, if one thinks of radio regulation simply in terms of the operating rights associated with radios, then a much more dynamic regime can be imagined – one that would benefit both political and commercial entrepreneurs. A non-spectrum world view might also be attractive to current “spectrum owners” who are discontented with their rights. [3]
The political and engineering systems that have co-evolved with the spectrum concept have specific characteristics: largely static allocations of rights to operate, defined in terms of fixed frequency ranges. More dynamic approaches don’t fit nicely. For example, Preston Marshall wants to guarantee the right to operate, but not exclusivity over one channel; he proposes to guarantee a licensee (along with others) aggregate access to enough frequencies to deliver a certain amount of service. [4]
This is an approach that focuses on behavior, rather than the exclusive ownership of an asset. As I argued in "Property rights without assets", this is perfectly compatible with a property rights regime, since property rights don’t have to be based on an underlying asset.
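To make this concrete, here is a toy Python sketch of what an aggregate-access guarantee might look like: each licensee is promised a quantity of channels per epoch, but not any particular channel. The pool size, licensee names and guarantee figures are invented for illustration; this is emphatically a sketch of the idea, not Marshall’s actual mechanism.

```python
# Toy sketch of "aggregate access" licensing: each licensee is guaranteed
# a quantity of spectrum per epoch, but not any particular channel.
# Pool size, names and guarantees are invented; this is not Marshall's scheme.
import random

CHANNELS = list(range(20))             # a pool of 20 hypothetical channels
GUARANTEES = {"A": 6, "B": 5, "C": 4}  # channels per epoch each licensee is owed

def allocate(epoch_seed):
    """Return a per-licensee assignment that honours every aggregate guarantee."""
    rng = random.Random(epoch_seed)
    free = CHANNELS[:]
    rng.shuffle(free)                  # which channels you get varies by epoch
    return {lic: [free.pop() for _ in range(need)]
            for lic, need in GUARANTEES.items()}

for epoch in range(3):
    assignment = allocate(epoch)
    assert all(len(chans) >= GUARANTEES[lic] for lic, chans in assignment.items())
    print(epoch, assignment)           # guarantees met, but the channels move around
```

The right being protected is the quantity of access per epoch, not tenure over any fixed slice of “spectrum”.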
The bottom line is that a spectrum-as-asset approach leads one to ignore key elements of the radio system, which in turn leads to inferior rights design. Specifically, receivers have been ignored. If one thinks one’s job is to “carve up spectrum”, then one doesn’t have to worry about receivers. But when radio is considered as a system, receivers determine interference just as much as transmitters do, so one has to take them into account explicitly. By analogy: if you’re deciding a land trespass case, you don’t worry about whether the farmer is grazing Holsteins or Friesians. But if you’re deciding a trademark dispute, everything depends on what happens in the mind of the consumer (analogous to the receiver). [5]
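A back-of-the-envelope calculation makes the receiver point concrete. In the toy Python snippet below, whether an adjacent-channel transmitter causes “harmful interference” turns on the victim receiver’s adjacent-channel selectivity just as much as on the transmitter’s power; all the figures (powers, path loss, selectivity values) are invented for illustration.

```python
# Toy adjacent-channel interference budget; every figure is invented.
import math

def dbm_to_mw(dbm):
    return 10 ** (dbm / 10.0)

TX_EIRP_DBM = 30.0        # adjacent-channel transmitter
PATH_LOSS_DB = 100.0      # propagation loss to the victim receiver
WANTED_DBM = -85.0        # wanted signal at the victim receiver
NOISE_DBM = -100.0        # receiver noise floor
REQUIRED_SINR_DB = 10.0   # SINR the victim needs to decode

for acs_db in (20.0, 40.0, 60.0):     # receiver adjacent-channel selectivity
    leaked_dbm = TX_EIRP_DBM - PATH_LOSS_DB - acs_db   # interference past the filter
    n_plus_i_mw = dbm_to_mw(NOISE_DBM) + dbm_to_mw(leaked_dbm)
    sinr_db = WANTED_DBM - 10 * math.log10(n_plus_i_mw)
    verdict = "OK" if sinr_db >= REQUIRED_SINR_DB else "harmful interference"
    print(f"selectivity {acs_db:4.0f} dB -> SINR {sinr_db:5.1f} dB  ({verdict})")
```

The transmitter is identical in every case; only the receiver changes, and so does the interference verdict.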
Footnotes
[1] Mitchell, Timothy (2008) “Rethinking Economy”, Geoforum, Volume 39, Issue 3, May 2008, pp. 1116-1121.
[2] MacKenzie, Donald A, Fabian Muniesa, Lucia Siu (2007), Do economists make markets? On the Performativity of Economics, Princeton University Press, 2007
[3] There is debate about the origin and extent of government property rights in spectrum; see for example the Introduction of William L. Fishman, “Property Rights, Reliance, and Retroactivity Under the Communications Act of 1934”, Federal Communications Law Journal, Vol. 50, No. 1. Fishman concludes: “It would probably be better, therefore, to say that the government regulates electromagnetic radiation in certain defined frequencies, rather than to say it regulates spectrum.”
[4] See e.g. Section 5.4 in the report “Radio Regulation Summit: Defining Inter-channel Operating Rules”.
[5] For more on the virtues of the radio-as-trademark metaphor, see my blog post “De-situating Spectrum: Non-spatial metaphors for wireless communication”, and paper “De-Situating Spectrum: Rethinking Radio Policy Using Non-Spatial Metaphors”
Thursday, December 10, 2009
Property rights without assets
I’m still nagging away at the implications of the fallacy that spectrum exists. Sorry.
I’ve been struck recently that many if not most definitions of property rights seem to turn on a relationship to an asset. For example, Gary Libecap in Contracting for Property Rights defines them as "the social institutions that define or delimit the range of privileges granted to individuals to specific assets" (1990:1); or Yoram Barzel in The Economic Analysis of Property Rights: "Property rights of individuals over assets consist of the rights, or the powers, to consume, obtain income from, and alienate these assets" (1997:2). Such definitions set out to define rights which assure the owner of an asset that they can derive value from that asset.
However, one can have rights to create value that do not require the existence of an underlying asset – unless, of course, one takes the position that the existence of a property right necessarily implies an asset. [1]
Therefore, let me distinguish between any property right, which is an asset in itself, and a property right to exploit an asset, which entails two assets: the right itself, and the underlying asset. All assets can lead to property rights – perhaps tautologically, in that something might not be counted as an asset if it does not have rights associated with it – but not all property rights require assets.
Examples
It always helps to make things concrete. One property right without an underlying asset is a New York taxi cab medallion: it's a right to operate, but there isn't an underlying asset. The right is tied to a particular place (New York), but that place isn't the asset.
Another common asset-less right is a franchise, that is, an agreement to sell a company's products exclusively in a particular area or to operate a business that carries that company's name.
Perhaps my favorite is a trademark, that is, a word, symbol, or phrase, used to identify a particular manufacturer or seller's products and distinguish them from the products of another. One might use the word “Wired” to brand a magazine, but the word isn’t the asset; when I last counted about a year ago, there were about 27 distinct trademarks using the word "wired" in the US.
Notice that permission for an agent to behave in a particular way is the essence of all these rights – and of rights that require assets, too. Therefore, I’d contend that behavior is the key to property rights, and assets are optional.
There are of course many property rights to assets, from owning a pencil to the right to extract oil in a particular region. Note that the underlying assets don't have to be tangible: an algorithm over which one has a patent is a perfectly viable intangible asset (perhaps made so exactly by the property right).
Implications
This distinction between property rights that do and do not require underlying assets matters: if one assumes an underlying asset where there is none, one is liable to over-assign rights.
For example, if trademark regulation assumed that the word being used was the asset, then it might give the owner of the trademark the right to all possible (commercial) uses of the word. There would be only one “Wired” trademark in the US, let’s say owned by Condé Nast; the companies who wanted to use the word to sell cologne, art supplies, energy drinks, stationery, electronic door chimes or automobile wheels would be out of luck. This would be a loss because an entrepreneur could apply the letters w-i-r-e-d to some new product that couldn’t be confused with a magazine without seeking (and probably failing to get) Condé Nast’s permission.
Similar reasoning applies to radio regulation. The existence of radio licenses doesn’t mean that there is an underlying asset, “spectrum”. [2]
If one regards a radio channel as an asset, then (Anglo-American) regulators have shown a proclivity to grant an expansive array of rights. Following the norm of technology and service neutrality, they have defined operating rights so broadly that they pretty much preclude all other operations that radiate energy in that channel, regardless of whether those operations would actually harm the licensee, in order to allow the licensee to operate in any conceivable way. [3] Such a broad definition forecloses new entry by potentially useful but non-interfering services.
A broad definition also forecloses future arrangements of radio operating rights that are not tied to a channel-based world view. Bands and channels, as regulatory constructs, are in large part a consequence of the two-stage superheterodyne radio design that first filters a broad range of frequencies at the "RF stage" and then, after down-conversion, picks out a narrow range at the "IF stage". This is an old-fashioned approach that is increasingly obsolete [4] - but it is enshrined in regulation.
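For readers who haven’t met the superheterodyne idea: down-conversion is just multiplication by a local oscillator, which shifts the wanted channel down to a fixed intermediate frequency (IF) where a narrow filter can pick it out. The little numpy sketch below illustrates the principle with made-up frequencies; a real receiver also needs a broad RF-stage filter to reject the image frequency.

```python
# Minimal numpy sketch of the superheterodyne idea: mix the incoming signal
# with a local oscillator, then select one channel with a narrow filter at a
# fixed intermediate frequency (IF). All frequencies are invented.
import numpy as np

FS = 1_000_000.0                       # sample rate, Hz
t = np.arange(0, 0.01, 1 / FS)         # 10 ms of signal
F_WANTED, F_NEIGHBOUR = 200_000.0, 215_000.0
rf = np.cos(2 * np.pi * F_WANTED * t) + np.cos(2 * np.pi * F_NEIGHBOUR * t)

F_IF = 50_000.0                        # fixed IF
f_lo = F_WANTED - F_IF                 # tune the LO so the wanted channel lands at IF
mixed = rf * np.cos(2 * np.pi * f_lo * t)   # "heterodyning" is just multiplication

spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(mixed), 1 / FS)

def energy(lo_hz, hi_hz):
    return float(spectrum[(freqs >= lo_hz) & (freqs <= hi_hz)].sum())

# The wanted channel now sits in the IF passband; the neighbour lands outside it.
print("energy in the IF passband (45-55 kHz):", round(energy(45_000, 55_000), 1))
print("energy where the neighbour landed (60-70 kHz):", round(energy(60_000, 70_000), 1))
# A real front end also needs a broad RF-stage filter to reject the image at f_lo - F_IF.
```

The regulatory point is that the channel boundary is an artifact of where that IF filter sits in a particular receiver architecture, not of anything in nature.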
References
Barzel, Yoram, Economic Analysis of Property Rights, Cambridge University Press 1989, second edition 1997
Libecap, Gary D., Contracting for Property Rights, Cambridge University Press 1990
Footnotes
[1] A view of property rights that does not require the existence of underlying assets is not identical to the "bundle of rights" approach taught in law school property classes; there it's taken as a given that there's an underlying asset - paradigmatically, real estate - and the bundle explains how it can be simultaneously “owned" by multiple parties.
[2] The emergence of the spectrum concept suggests that this is, indeed, the conclusion that has been drawn. Perhaps the reasoning that a property right must entail an asset is one of the reasons why “spectrum” has become such an entrenched concept.
[3] I’m ignoring allowed inter-channel interference; for a discussion of that case, see my report on the meeting held at Silicon Flatirons, “Defining Inter-Channel Operating Rules”
[4] See e.g. Soni & Newman 2009, "Direct conversion receiver designs enable multi-standard/multi-band operation", RF Designline
Monday, December 07, 2009
Alfred Kahn, SURs, and new approaches to radio regulation
I have at last finished writing up a Silicon Flatirons meeting on rules for inter-channel radio interference (web page, PDF). Reflecting on the event, I was struck again by the contrast Phil Weiser noted last year [1] between the success of airline deregulation, and the halting progress in doing the same for “spectrum”.
The meeting showed there was broad support for taking receivers into account more explicitly when drafting rules, for example by regulating resulting signal levels rather than the customary approach of specifying rules for individual transmitters. This approach focuses on the results of transmission – which includes interference, the bone of contention in most radio regulation debates – rather than the transmission itself.
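As a crude illustration of the difference (with invented numbers throughout): a transmitter-centric rule caps each transmitter’s emitted power, while a result-centric rule caps the signal level actually delivered to a test point, regardless of who produced it. In the Python sketch below, every transmitter complies with its individual EIRP cap, yet the aggregate level at the test point can still exceed a results-based limit.

```python
# Toy contrast between a transmitter-centric rule (per-transmitter EIRP cap)
# and a result-centric rule (cap on the aggregate level at a test point).
# Powers, distances and limits are invented for illustration only.
import math

def received_dbm(eirp_dbm, distance_m, freq_mhz=700.0):
    """Free-space received power at an isotropic test point."""
    fspl_db = 20 * math.log10(distance_m) + 20 * math.log10(freq_mhz) - 27.55
    return eirp_dbm - fspl_db

transmitters = [          # (EIRP in dBm, distance to the test point in metres)
    (50.0, 2_000.0),
    (50.0, 3_000.0),
    (50.0, 5_000.0),
]

PER_TX_EIRP_CAP_DBM = 50.0    # transmitter-centric rule
AGGREGATE_CAP_DBM = -60.0     # result-centric rule at the test point

print("every transmitter within its own EIRP cap:",
      all(eirp <= PER_TX_EIRP_CAP_DBM for eirp, _ in transmitters))

total_mw = sum(10 ** (received_dbm(eirp, d) / 10) for eirp, d in transmitters)
aggregate_dbm = 10 * math.log10(total_mw)
status = "compliant" if aggregate_dbm <= AGGREGATE_CAP_DBM else "exceeds the aggregate cap"
print(f"aggregate level at the test point: {aggregate_dbm:.1f} dBm "
      f"(cap {AGGREGATE_CAP_DBM:.0f} dBm) -> {status}")
```

None of this is how SURs are actually specified, of course; it is only meant to show why rules about resulting signal levels and rules about individual transmitters can come apart.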
Ofcom, the UK communications regulator, took an interference-based approach to licensing by creating Spectrum Usage Rights, also known as SURs [2]. However, SURs were roundly rejected by the cellular operators, and Ofcom chose not to impose them on the mobile industry, on the premise that the goal of SURs was to improve certainty for the license holders’ benefit, not the regulator’s.
While there are many other reasons for the cellcos to reject SURs (the problems SURs address, like uncertainty about likely uses and technologies, or disparate uses in adjacent channels, are largely absent in cellular bands), it is clear that Ofcom deferred to the interests of incumbents – potentially at the cost of consumers or new entrants. One of the conclusions of a 2007 report for the European Commission on radio interference regulatory models [3] came to mind:
“Technology and service-neutral licensing (as would be supported by interference-based licensing techniques) offers significant benefit for end-users but not necessarily for spectrum owners and network providers.”
In an essay in honor of Alfred Kahn’s 90th birthday, Phil Weiser observed that airline regulation (where Kahn, the “Father of Airline Deregulation,” made his name) and spectrum regulation share some basic characteristics: both regimes emerged from an effort to protect established interests; both limited output by restricting the use of the resource in question; and in both cases, early academic criticism calling for regulatory reform went unheeded. In making the case for Kahn as a political entrepreneur, Weiser argues that he “pursued the objective of eroding the airline industry’s commitment to the legacy regulatory regime by both undermining the manner in which it protected established incumbents and bolstering the strength of those interests that would benefit from deregulation.”
The radio incumbents Weiser had in mind were the broadcasters and not the cellular companies – but it’s not too much of a stretch to attribute at least some of the resistance to new methods of radio regulation to the New Incumbents.
REFERENCES
[1] Phil Weiser (2009), “Alfred Kahn as a Case Study of a Political Entrepreneur: An Essay in Honor of His 90th Birthday.” Journal Network Economics, 2009. Abstract at SSRN. The paper was first delivered at a conference at Silicon Flatirons in Boulder on September 5, 2008.
[2] See e.g. William Webb (2009), “Licensing Spectrum: A discussion of the different approaches to setting spectrum licensing terms” (PDF); and Ofcom (2008), “Spectrum Usage Rights: A Guide Describing SURs” (PDF)
[3] Eurostrategies and LS telcom, “Study on radio interference regulatory models in the European Community” 29 November 2007 (PDF)