What characteristics (if any) of 21st century communications justify a change in methods of governance?
Any change in policy has unintended consequences; some of them will be adverse. One has to think carefully before advocating radical change: the benefits of change or the costs of doing nothing should be substantial. One way of beginning a cost/benefit analysis is to understand the underlying forces.
Many arguments have been given for new internet regulation. Cowhey and Aronson (Transforming Global Information and Communication Markets 2009:17) cite three factors that will force change: the modular mixing and matching of technology building blocks; the need to span traditional policy and jurisdictional divides (aka Convergence); and the need to rely more on non-governmental institutions to coordinate and implement global policy. In my paper “Internet Governance as Forestry”, I cite three characteristics of the internet that require new responses: modularity, decentralized self-organization, and rapid change.
Let’s consider, then, the following candidates for radical, unprecedented and transformational change in the internet economy taken from these two lists: modularity, convergence, the “third sector”, decentralization, and rate of change.
I doubt modularity will persist as a characteristic of the internet business. While it is clearly a hallmark of our current stage, it has a long history: the standardization of interchangeable parts dates back to Eli Whitney’s process for manufacturing muskets for the US government in 1798, but there is evidence for standardization of arrowheads and uniform manufacturing techniques in the Bronze Age, and some anthropologists claim there was standardization of Stone Age tools. However, modular technology does not lead inescapably to a modular industry structure. Standard parts did not render pre-internet industries immune to anti-trust problems, and it is unlikely they will do so now. The role of modularity in the relationships between companies waxes and wanes, depending on rather than driving industry consolidation and market power.
The good old convergence argument is true enough, but tired. The mixing of broadcasting, telecom and intellectual property regulation brought about by common digital formats will undoubtedly require a huge amount of creative reform of regulation, but I no longer think that the result will be the abolition of regulatory categories based on the commercial and technological status quo.
I would very much like to see such an abolition; I proposed reorganizing the FCC by policy imperatives rather than industry categories in my FCC Reform paper, but I don’t think it’s going to be practical. The human rage to classify will reassert itself. Classifying by policy concern probably won’t work, sad to say, because of how regulation tends to work: take a new problem, fit it into an existing category, and apply the rules of that category. Even if this mechanism yields weird results in times of transition, it’s usually efficient and is likely to persist, even as categories change. We don’t yet have the new categories, but they may well emerge based more on how industry self-organizes than by logic. Judging by today’s behemoths, they might perhaps be networks, cloud services, devices and content (i.e. AT&T, Google/Microsoft, Apple/Dell and Hollywood) replacing broadcasting, telecom, cable and intellectual property (ABC/CBS/NBC, the old AT&T, Comcast and Hollywood).
The internet is no doubt much more decentralized than its forebears, e.g. the telephone network; it is by definition an affiliation of many networks, and a lot of processing is done “at the edges” rather than “in the middle”. There is a linkage between a decentralized architecture and modularity: modularity allows decentralization, and is amplified by it. If or when either regresses to the mean, the other will tend to do so as well. Since I don’t believe that a high and increasing amount of modularity is a persistent attribute of the 21st century communications industry, I don’t believe that high and increasing decentralization is either. However, the current degree of modularity and decentralization has probably put us into a qualitatively different regime; there has been a phase change, so to speak. The polity has just begun to work through the implications, and this will take a decade or more.
The “third sector”: Non-Governmental Organizations (NGOs), non-profits and civil society
Cowhey and Aronson’s interest in NGOs is based in trade, and the organizations they have in mind (ICANN, W3C, IETF) meet the four-part definition offered by Lester Salamon, a political scientist and scholar of US non-profits at Johns Hopkins: they are organizations, i.e., they have an institutional presence and structure; they are private, i.e., they are institutionally separate from the state; they are fundamentally in control of their own affairs; and membership/support is voluntary. Salamon argues that the prominence of NGOs represents an “associational revolution”. I cannot judge whether this phenomenon is transient or not; however, the large organizations clearly provide an alternative venue for governance. For example, Cowhey and Aronson argue that the IETF’s central role in internet standards came about because the US Government decided to delegate authority to it.
If one relaxes the requirement for formal institutional structure, the rise of private, voluntary engagement in politics facilitated by Web 2.0 represents an impetus and perhaps even a venue for new governance. Currently fashionable examples include http://transparencycorps.org/, http://opengov.ideascale.com/ and http://watchdog.net/; tools that facilitate engagement include http://www.opencongress.org/, http://www.opensecrets.org/lobbyists/ and http://www.govtrack.us/. Citizens’ ability to know about their legislators’ activities and to petition them has never been greater; tools for organizing into ad hoc coalitions (most famously the role of http://www.meetup.com/ in the 2004 and 2008 US campaigns) lead to a ferment of groups that may grow into more recognizable institutions. Policy makers will have to invent new ways to track and mollify these groups, at the very least; the Obama Administration appears to be using them to support policy making.
While the decentralized architecture of the internet and the rise of NGOs are different phenomena with different causes, Web 2.0 technologies are beginning to draw them together.
Rate of change
As to whether the rapidity of change is transformative and permanent, I think the answer is No and Yes. The rate of technical and commercial innovation on the internet over the last two decades has been stunning. It has been abetted by modularity, and even more so by the ability of software to morph without having to retool a factory. (Retooling a code base is a non-trivial exercise, though.) However, the internet is growing up, and it’s reasonable to expect that the industry and technology will settle into a phase of relative maturity.
On the other hand, while the rate of change may not continue to accelerate, or even continue at its current pace, the political system has to adjust to the stresses that the increase to date has already imposed. William Scheuerman, for example, argues that the “social acceleration of time” has created a profound imbalance between the branches of government in liberal democratic systems like the US. Even if the rate of techno-commercial innovation slows down, the rate at which global markets generate and propagate news will be a challenge for political systems whose time cycles are set in constitutions that change only very slowly, and in human physiology, which changes hardly at all.
Back to Hard Intangibles
A change in context that forces a change in governance doesn’t need to be irreversible for the consequences to be profound. Since history is cumulative, a “phase change” in policy making never really reverts to its prior form, since the context changes with it. However, some changes are more portentous than others. I’ve argued above that the modularity, convergence and decentralization of the internet are temporary, part of the regular ebb and flow of industry structure. Changes in tempo and the rise of the third sector seem to me to be more momentous. I think both are rooted in the growing intangibility of our societies, which has been accelerated by ICT: complex software running on powerful processors linked by very fast networks.
I think there is a link back to my 2006/2007 obsession with “hard intangibles” (DeepFreeze9 thread). The ability to compose more components than the mind can manage makes programming/debugging very hard, particularly when those components are so easily mutable: it’s easier to change a line of code than to retool an assembly line. The “soft products” of these technologies, themselves complex, composable and mutable, become the inputs for culture and thus policy making: it’s easier to change web artifacts and social networks than to manage a movement using letters and sailing ships.
 I first heard the term used by Rohan Bastin, Associate Professor of Anthropology at Deakin University, in a Philosopher’s Zone interview about Claude Levi-Strauss. “The human rage to classify” is also a chapter title in F. Allan Hanson, The Trouble With Culture : How Computers Are Calming The Culture Wars, SUNY Press 2007
 This prediction contradicts Ray Kurzweil’s contention that technological change accelerates at an exponential rate, and will continue to do so: his “Law of Accelerating Returns” [link, critique]
 William E. Scheuerman, Liberal Democracy and the Social Acceleration of Time (2004). Scheuerman defines social acceleration of time as “a long term yet relatively recent historical process consisting of three central elements: technological acceleration (e.g. the heightening of the rate of technological innovation), the acceleration of social change (referring to accelerated patterns of basic change in the workplace, e.g.), and the acceleration of everyday life (e.g., via new means of high-speed communication or transportation).” I’m indebted to Barb Cherry for introducing me to Scheuerman’s ideas; see e.g. her “Institutional Governance for Essential Industries Under Complexity: Providing Resilience Within the Rule of Law” CommLaw Conspectus 17.1
 Human thinking won’t speed up much, if at all – though tools can make it look as if it does. See for example Edwin Hutchins’ wonderful Cognition in the Wild (1996). Hutchins contends that we need to think in terms of “socially distributed cognition” in a system that comprises people and the tools that were made for them by other people.