Monday, May 21, 2007

Talk on Hard Intangibles

My recent lecture on Hard Intangibles at the Annenberg Center for Communication at USC (3 May 2007, abstract) has now been posted in a choice of formats:

Audio: MP3
Video: QuickTime or Windows Media

Tuesday, May 15, 2007

A modular net

I concluded in 2005 that modules were a better way to think about the Internet than layers. A book draft by Peter Cowhey, Jonathan Aronson and John Richards has stimulated me to revisit my thinking, since they point out that modularization is a key characteristic of the new evolving Internet.

The problem with Layers

“Layers” were an analytical response to the breakdown of telecoms silos brought about by convergence (Werbach 2002, Whitt 2004). Since the “vertical” silos were merging, a “horizontal” approach seemed a more useful abstraction: it would remain relatively constant even while technology and business model miscegenation raged between the silos.

Like any model, though, Layers has its limitations. First, layers are premised on a technological abstraction that was honored more in the breach than in the observance: after learning about layers in Networking 101, engineers spend the rest of their careers using cross-layer violations to improve system performance.

Second, there isn’t a single “up direction” for stacking layers. The metaphor of layered stuff presumes gravity, which provides a direction for stacking. Even assuming one can discern an up direction in the network stack (e.g. by referring to successive encapsulation: the network stack is more like Russian Dolls than Legos), there are other important dimensions orthogonal to the network stack. For example, the supply chain from raw material to final product (from up-stream to down-stream, in the lingo) represents an important dimension. We all instinctively use vertical and horizontal as categories, but metaphors of the network stack and supply chains work at cross purposes: upstream in the network business is at the bottom of the stack, and downstream is at the top.

The upstream/downstream dimension (i.e. suppliers/customers) can be contrasted with entrants/substitutes in Michael Porter’s five forces analysis of industry dynamics. The interactions between these forces, together with competition within an industry, providers of complementary products, and governments, rule out a simple layers model for an industry in general.

Geography and hierarchy are two more dimensions:

  • geography: local, middle mile, long-distance; also related are the IXPs where ISPs exchange traffic
  • hierarchy: local, regional, national and international switches
Geography and hierarchy are closely related. However, since one can provide non-local communication without hierarchy, e.g. by using mesh architectures, they’re not identical.

The silo and layer models are attractive because they’re one-dimensional: a single parameter serves to distinguish between categories. In the silo model, the parameter is end-user service (broadcast TV, telephony, satellite communications, etc.); in the layer model, it’s the degree of network abstraction between the physical transmission of data and the end user experience.

Both these parameters (end user experience, and technical architecture) are relevant, so neither silos nor layers are sufficient on their own. Near to the physical layer, a packet is a packet is a packet, and end-user experiences can be more easily ignored; but from the user perspective, watching video is different from voice communications. Layers are helpful low in the stack, but the particular public interest mandates applied to TV are different from those on telephony, and so service silos can’t be ignored. The argument is at its most complex in the middle, where the two blur into each other.

Modules and Modularity

It’s easy to invoke modules and get knowing nods, particularly among techies, but coming up with a comprehensive definition is tricky. To start concretely, here are some examples of things I count as modules:

  • a network connection of whatever kind, e.g. Wi-Fi access to a base station, a wired Ethernet connection, a 3G cellular data service
  • directories of all kinds, from the DNS to sites that organize links to other resources, like alluc.org’s pointers to videos
  • a web browser, particularly one which runs on a variety of operating systems, in which an endless variety of services can be run
  • an IM client, particularly one which runs against a variety of IM back-ends, e.g. Trillian
  • plug-ins like web page hit counters, Netvibes widgets, and MySpace templates
  • local and long-distance phone service provision
  • voice over IP
  • A Google AdSense plug-in on a web page

I don’t count these as modules:

  • Cable or satellite TV service (I buy it in a take-it-or-leave-it lump)
  • plain ol’ telephone service (before the AT&T break-up into local and long-distance, and before modems came along)

I’m unsure about some cases. For example, before the advent of plug-ins and RSS, portals like AOL, Yahoo and MSN allowed some on-site personalization, but were pretty much as you found them. You couldn’t mix in third party components to change them, which disqualifies them from modulehood in my book.

With these as a basis, I’d define a module as an interchangeable part of a larger collection of components that delivers an ICT user experience. “Modularity” is the design philosophy which builds functionality out of partial, separable and substitutable components, the modules. The key attributes of modules are:

  • partial – the module is not sufficient on its own to provide a complete user experience; it’s a sub-set of the entire thing. The end user needs to assemble two or more modules to create the result they seek, like combining a local and long-distance phone service provider in the US
  • separable – a module is self-contained and detachable, e.g. a web hit counter or other web plug-in can be removed without affecting functionality of the rest of the page
  • substitutable – a module can be replaced by another, equivalent one from another supplier, like replacing one web browser with another.

Substitutability requires some public disclosure of the interface between modules. This leads to the Farrell & Weiser (2003) definition: “Modularity means organizing complements (products that work with one another) to interoperate through public, nondiscriminatory, and well-understood interfaces.” Note, though, that substitutability is not a sufficient condition; it presumes that an architecture of separable and partial pieces already exists.
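To make partial, separable and substitutable concrete, here is a minimal Python sketch; the class and function names are invented for illustration and aren't drawn from any real portal or browser API. Each reader implements the same public interface, so one can be swapped for the other (substitutable); neither is a complete experience on its own (partial); and the portal keeps working whichever one it is handed (separable).

```python
from abc import ABC, abstractmethod

# The public, well-understood interface that the Farrell & Weiser definition
# calls for. Any module implementing it can stand in for any other.
class FeedReader(ABC):
    @abstractmethod
    def fetch_headlines(self, feed_url):
        ...

# Two substitutable modules from (hypothetical) different suppliers.
class ReaderA(FeedReader):
    def fetch_headlines(self, feed_url):
        return ["headline via reader A: " + feed_url]

class ReaderB(FeedReader):
    def fetch_headlines(self, feed_url):
        return ["headline via reader B: " + feed_url]

# The larger collection of components: a portal assembled from partial,
# separable modules. It depends only on the interface, not on a supplier.
def render_portal(reader, feeds):
    headlines = []
    for feed in feeds:
        headlines.extend(reader.fetch_headlines(feed))
    return "\n".join(headlines)

# Swapping one module for the other leaves the rest of the portal untouched.
print(render_portal(ReaderA(), ["http://example.org/rss"]))
print(render_portal(ReaderB(), ["http://example.org/rss"]))
```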

Some user-facing modules have enough heft to qualify as applications, though in a modular world they build on some modules, and host others. MySpace or Netvibes are apps, but they plug into a browser, and their functions are extended by other plug-in modules.

Governance

Silos and layers provide straightforward ways to define markets, which can then be used to decide antitrust questions and figure out which groups of players should bear public interest mandates. They were designed to work this way. Modules don’t have this property.

Antitrust remedies are premised on well-defined markets within which companies compete. It’s hard to use modules to define markets, since players can mix and match the modules they use to offer a service, potentially working around a provider with market power. This is good news, of course; antitrust remedies wouldn’t be needed if it were impossible, in a modular world, to put together a platform that forms a bottleneck in the supply chain. However, one should Never Say Never; the debate over the proposed Google/DoubleClick acquisition shows that bottlenecks could arise in the Web 2.0 world, too.

There’s also the question of public interest regulation. Universal Service Fund obligations are a well-worn topic, and won’t go away. However, arguably the harder questions revolve around public safety (CALEA, 911, pedophiles), content (obscenity, cultural protection), and access beyond connectivity (e.g. access for the disabled). Who should be responsible for delivering on these mandates, and how? Both silos and layers made it easy: pick a slice, and impose a mandate on the companies in that segment. What does one do when not-quite-equivalent end user experiences can be assembled from widely different sets of modules?

My working hypothesis is that regulation should apply to the capabilities that are exposed, not the means by which they’re delivered (e.g. if something’s functionally equivalent to telephony, 911 applies regardless of how it’s done or by whom). Matters are complicated, though, because the context in which a capability is delivered makes a difference (e.g. a voice chat on X-Box Live during a game is different from a voice communications module embedded in an employee’s work desktop).

Modules further complicate matters when an end-user builds up an experience by using modules from different providers. For example, imagine a visually disabled person builds a portal on Netvibes with newsfeeds from various web sites, an IM plug-in from one player, and a voice module from another – who’s responsible for delivering accessibility functions?

Odds 'n' Ends

If all modules are one-way pluggable, that is, they form chains without loops, then one can recover a layered categorization. For example, a CNET news feed plugs into my Netvibes page, which plugs into my browser; but the CNET feed doesn’t itself host a Netvibes portal. (I feel in my bones that there must be module loops, but I haven’t come up with any yet.)
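As a sanity check on that claim, here is a small Python sketch; the module names and plug-in relationships are invented for illustration. If the “plugs into” relation has no loops, a topological sort recovers a layered ordering, and it fails exactly when a module loop exists.

```python
from graphlib import TopologicalSorter  # Python standard library, 3.9+

# Hypothetical one-way plug-in relationships: each module maps to the
# module(s) it plugs into, i.e. the things that sit "below" it.
plugs_into = {
    "cnet_feed": {"netvibes_page"},
    "im_widget": {"netvibes_page"},
    "netvibes_page": {"browser"},
    "browser": {"operating_system"},
    "operating_system": set(),
}

# With no loops the graph is a DAG, so a layered ordering can be recovered;
# a module loop would raise graphlib.CycleError instead.
layers = list(TopologicalSorter(plugs_into).static_order())
print(layers)
# e.g. ['operating_system', 'browser', 'netvibes_page', 'cnet_feed', 'im_widget']
```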

Modules relate to interconnection, defined as the connections between networks. Interconnection requires “horizontal pluggability” between modules, that is, pluggability among similar modules. The various network transport providers are at the same level of the network stack (i.e. the in- and out-connections use the same protocols) but may be at different geographical and hierarchical levels (e.g. local and long-distance). By contrast, the plug-ins for competing RSS viewers like Netvibes and Google Reader are neither interchangeable across platforms, nor do they directly connect to each other.

Lecture on Hard Intangibles

The Annenberg Center for Communication at USC has posted my lecture on Hard Intangibles (abstract), presented on 3 May 2007, in a choice of formats:

Audio: MP3
Video: QuickTime or Windows Media

Sunday, May 13, 2007

The Perils of Plumbing Parables

No, this story is not about Senator Stevens’ tubes. Jonah Lehrer in The Frontal Cortex links to a Slate story by Darshak Sanghavi on the perils of plumbing analogies when thinking about heart attacks.

“It turns out there's a right and wrong place for the plumbing analogy. It's right for people who have heart attacks that involve a sudden, total blockage of a coronary artery. That's why procedures to unclog arteries with expandable stents and balloons ("angioplasty") save lives in emergencies and need to be used more in that setting. But the plumbing analogy fails when applied to stable, partial blockages that don't lead to sudden heart attacks. And yet doctors can't let go of the plumbing talk, and they keep unclogging partial blockages. That's why the vast majority of angioplasties are done for the wrong reasons—that is, for prevention, not acute treatment.”

Ironically, Sanghavi quotes one of his sources invoking a metaphor to explain why preventative angioplasties don’t work: “The trigger isn't bad plumbing—but something more akin to a land mine. People at risk of heart attacks have largely invisible cholesterol plaques throughout their arteries, which act, he says, like unpredictable "little bombs that blow up suddenly and cause a sudden and devastating blockage" in previously healthy-appearing areas.”

More proof, if it were needed, that both lay people and professional decision makers use metaphors to make sense of complex topics, and that models can lead to bad decisions.

Saturday, May 12, 2007

Fingercerting: an alternative to DRM or collective licensing

There was good news for Audible Magic yesterday when MySpace announced that it would use their software to filter out uploads that infringe copyright. Recognizing media clips using fingerprinting (more) has become fashionable as content owners begin to sue hosters.

Fingerprinting, combined with digital certificates, offers a way around the drawbacks of two currently favored ways to govern digital media use. I will focus here on video, since I recently attended a workshop on the future of video copyright at the USC Annenberg Center.

The core of the digital copyright problem is reconciling two valid interests:

Interest 1: Creators’ need to be compensated in order to cover costs and encourage more creation.

Interest 2: Consumers’ ability to make copies of copyrighted material under limited circumstances (loosely, “fair use”).
Here are two fashionable approaches to solving this problem. Each is biased toward addressing one of these interests while ignoring the other.

Solution 1: DRM

In this model, content is locked by DRM under terms specified by the creator/distributor. The consumer can only get access by observing these rules; circumvention is prohibited (in the US) by the anti-circumvention provisions of the DMCA.

A major difficulty arises in the intersection of DRM with fair use, since the criteria for fair use cannot be encoded in machine-executable form. Thus, Interest 2 above is generally not respected. Other difficulties include the vulnerability to a single hack that puts a piece of content in the clear, particularly if it is hosted off-shore; the anti-trust consequences of Content/CE/IT standardization; and the usability problems DRM creates for consumers.

Solution 2: Collective Licensing

In this model, ISPs would pay a monthly license fee on behalf of each subscriber. This would then be distributed among rights holders a la BMI/ASCAP (cf. EFF’s proposal for music).

A difficulty arises because content creators lose the ability to negotiate their compensation with consumers, thus undermining Interest 1. Owners would be compensated on the basis of some rigid formula determined by the collecting agency. Other difficulties include deriving a formula, since video isn’t as homogeneous as music; anti-trust issues in a collection monopoly; and charging users on enterprise rather than consumer networks. Option 2 also implicitly assumes that DRM is outlawed; if it were allowed to remain, then content creators could get two bites of the apple.
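For concreteness, here is the simplest version of such a formula, as a Python sketch with invented numbers: a blanket levy pooled across subscribers and split among rights holders in proportion to measured usage. Everything hard about the real problem (what counts as a view, how usage is sampled, how video differs from music) is hidden inside the measurement step.

```python
# Toy illustration of a collective-licensing split; all figures are invented.
monthly_fee = 5.00            # hypothetical per-subscriber levy, in dollars
subscribers = 1_000_000
pool = monthly_fee * subscribers

measured_usage = {            # hypothetical monitoring/sampling data
    "studio_a": 600_000_000,
    "studio_b": 300_000_000,
    "independent_creators": 100_000_000,
}
total = sum(measured_usage.values())

# Each rights holder gets a share of the pool proportional to measured usage.
for owner, usage in measured_usage.items():
    payout = pool * usage / total
    print(f"{owner}: ${payout:,.2f}")
```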

Another way: Fingerprinting + Certificates = FingerCerting

Option 1, the DRM approach, puts the control of content on the user’s device; however, the control is draconian and makes accepted uses like sharing around a user’s personal domain or fair use clumsy at best. Option 2, collective licensing, removes content control by levying a blanket license fee on all broadband subscribers through their ISP, but at the cost of creating an inflexible collecting monopoly and outlawing DRM.

In the “FingerCert” approach, fingerprinting is used to identify content, and an accompanying digital certificate (or “cert”) indicates that the owner has approved its transmission. If the content is registered as copyrighted but not accompanied by a valid digital certificate, an intermediary (ISP or hoster) is obliged to block it. There can still be a negotiation between an owner and a purchaser, but DRM isn’t required, only attaching a cert to indicate a contract. Once the media has been delivered, the cert can evaporate. If the media is provided without encryption, the end user can make copies for fair use without having to worry about arcane and unexpected restrictions.
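Here is a minimal sketch, in Python, of the check an intermediary might run; the fingerprint registry and certificate validation are stubs with invented names, standing in for whatever fingerprinting service and certificate infrastructure would actually be used.

```python
# Sketch of the FingerCert decision at an intermediary (ISP or hoster).
# The registry and certificate check are placeholders, not real services.

REGISTRY = {"fp-1234": "work-5678"}   # fingerprint -> registered work ID (made up)

def lookup_registered_work(fingerprint):
    """Return the work ID if the fingerprint matches a registered
    copyrighted work, else None."""
    return REGISTRY.get(fingerprint)

def cert_is_valid(cert, work_id):
    """Check that a cert exists and covers this work; a real implementation
    would also verify the rights holder's signature."""
    return cert is not None and cert.get("work_id") == work_id

def should_block(fingerprint, cert):
    work_id = lookup_registered_work(fingerprint)
    if work_id is None:
        return False                         # not registered as copyrighted: pass
    return not cert_is_valid(cert, work_id)  # registered but no valid cert: block

# A registered work travelling with an owner-issued cert gets through...
print(should_block("fp-1234", {"work_id": "work-5678"}))  # False
# ...while the same work without a cert is blocked.
print(should_block("fp-1234", None))                      # True
```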

The big problem with digital media is not personal copies; it’s large-scale illegal distribution. Content owners could use light-weight DRM as a “bump in the road” to mark their rights, but heavyweight (and futile) restrictions intended to prevent even a single hack won’t be necessary. This means a good experience for the vast majority of users who are happy to pay for content, but who would be deterred from buying if DRM were rigorous enough to persuade content executives that their assets were protected against all possible infringement. If you’re only willing to sell sandwiches wrapped in bank vaults, you won’t sell many sandwiches. FingerCerting prevents large-scale distribution by stopping the flow across the Internet, not in someone’s house or between friends’ iPods; it addresses thepiratebay.org and AllofMP3.com, not somebody making a mash-up for their friends. The gates don’t have to be in many places – just the major intersections, like big content sites, or perhaps just at the major IXPs.

FingerCerting gives content owners a way to control the distribution of their content (protecting Interest 1), while allowing them to do so without harsh DRM that undermines fair use copying (protecting Interest 2).

What Fingercerting Isn’t

FingerCerting doesn’t require watermarking, that is, embedding (often hiding) a copyright notice in a file. Fingerprinting sets out to recognize the file from its visible characteristics. Watermarking, just like fingerprinting, has to keep working even when videos are manipulated, e.g. by cropping or transcoding. My uneducated guess is that fingerprinting is more robust in these cases than watermarking, since it’s not trying to hide the indicia.

FingerCerting doesn’t require DRM, but neither does it preclude it. It creates an environment where DRM isn’t essential to preventing mass abuse of copyright, and hopefully takes the sting out of the argument over this technology.

Challenges

Any solution to a complex problem will have weaknesses. Here are some I can think of regarding FingerCerting:

Will you need a standard for fingerprints? Audible Magic has a mechanism to register media and recognize clips; so do other companies like Philips. Cert standards exist, but one can imagine different content owners using different solutions. The complexity may be too great for intermediaries if they have to support more than a small number of mechanisms.

Packet inspection technologies to do stream identification are available. Attaching certs to streams is a different issue; I can imagine solutions, but I haven’t stumbled across any yet. Pointers, please.

False negatives – not recognizing an illegal file or stream – will occur, but that’s OK; large scale distribution can be stopped since intermediaries will have multiple shots at catching streams. The bigger problem is false positives, that is, when an intermediary mistakenly blocks legitimate content. This will annoy users, and presents a wonderful opening for denial of service attacks.

Content hosters/routers will have to be motivated, by litigation or legislation, to implement such a scheme. Current US law provides a disincentive to implementing fingerprinting: Google/YouTube would rather not know that it’s hosting infringing content, because that increases its liability under the DMCA. I presume some legislation or regulation would be required to set up the incentives for a fingercerting process; I don’t know if it will be more or less onerous than that required for DRM (cf. the DMCA) or for collective licensing.