Saturday, July 02, 2005

The social contract of new technology

Donald Bruce has used the example of genetically modified food (GM food, for short) to illustrate how the notion of a social contract can be used to build a shared vision for risky new technologies [1]. I will simplify his model and show how it can be used to assess the risk profile of new information and communication technologies.

Dr Bruce argues that the conflict about GM food in Europe is rooted in a lack of trust between the biotech industry and its customers. Many consumers don’t see the benefits of GM food, are concerned about the long-term impacts of “meddling with nature”, and suspect that industry is pursuing its own interests over theirs. Industry and its allies believe that consumers are uninformed and irrational, and are slowing down the introduction of a technology with widespread social benefits.

Bruce argues that a dozen parameters, listed in [1], influence whether society will embrace the benefits of a new technology, and accept its risks and disruptions. I’ve simplified this list into three themes: Why do it? Who’s in control? How do I feel about it?

The first theme (Why do it?) concerns the value proposition: what benefits are being offered at what risk?

Benefit: does it offer realistic, tangible benefits to the consumer?

Risk: how often can a bad consequence be expected, what’s the likely magnitude, and is it noticeable or insidious? [2]

The second theme (Who’s in control?) addresses questions of power: if consumers don’t feel in control of the technology, they are more likely to resist it.

Choice: is it a voluntary risk assumed by consumers, or one imposed by another party?

Trust: if imposed by someone else, how much do we trust those who are in control? [3]

The third theme (How do I feel about it?) concerns attitudes and reputation:

Values: does the technology uphold or challenge basic values?

Familiarity: is the technology familiar and understood, and socially embedded?

Precedent: if it is unfamiliar, has something like it gone wrong before or proved reliable?

Profile: has it been given a positive or negative image in the media?

All these considerations are more or less subjective. Different people weigh risks and benefits in different ways; some are more comfortable ceding control to companies than governments, or vice versa; different people have different basic values; and something that’s familiar to me may be alien to you. The adoption of a technology is not simply, or even largely, a technical matter [4]. If the makers of technology ignore this, their initiatives are at risk.

Technologists and business leaders often live in a very different world from their customers, and stopping to listen to the unwashed masses is hard for both geeks and execs. The news media are a useful resource but are often discounted, discredited or ignored since they are seen as biased bearers of bad tidings; in fact, they may simply be representing the interests of a broader cross section of society.

The informatics industry is fortunate in that it hasn’t experienced the melt-down of confidence that GM food has suffered in Europe. Hence, one doesn’t need a social contract analysis to figure out how to build a positive shared vision of technology, as Donald Bruce has done for biotech. However, there are deep similarities. The positive self-description of biotech noted by Bruce is similar to informatics’ self-image: discovery, innovation, enhancement, efficiency, prosperity, and growth. And as he says of biotech, "Underlying all, and largely taken for granted, is an Enlightenment vision of rational human progress through technology."

Still, many new information and communication technologies are at risk of social conflict. This approach offers a useful checklist for assessing those risks. I’ll give two examples of how this tool could be used. I leave as an exercise its application to more contentious topics like uniform software pricing in rich and poor countries, software patents, and the misuse of email and browsers to commit identity theft.

Preventing piracy through digital rights management technology (DRM): three thumbs down

Why Do It? The benefits to consumers of DRM are not tangible; it presents itself as an inconvenience. Creators assert that the flow of content will dry up without rights control technologies to protect their investment, but this loss to consumers won’t be immediately visible. The risk of losing copying rights that have become customary with analog technologies is much more easily grasped.

Who’s in Control? The owners of content are clearly calling the shots, though the providers of the underlying tools are also implicated when consumers confront this technology. The customer has little choice but to accept DRM when it is imposed, and finds it infuriating when, say, some CDs don’t play in their PCs. Consumers are unlikely to feel they have much in common with corporate giants like Time Warner and Microsoft, and trust will be low.

How do I feel about it? While the technology upholds traditional values like not stealing, a new set of values is emerging that finds nothing wrong in freely sharing digital media. The technology is unfamiliar and hard to use, and the precedent of the failure of copy-protecting dongles once used with computer software is not encouraging. The public profile of the technology is still up for grabs; the mainstream media have yet to define an image either positive or negative.

Voice over IP (VoIP): three thumbs up

Why Do It? The benefits are immediate and tangible: phone calls cost less. A notable risk is that a call to the fire brigade or ambulance won’t go through. This is a low-frequency, high-magnitude risk, and is thus getting a lot of coverage in the press. However, it’s an understandable and mitigable risk for most people. VoIP’s threat to the social revenue base built on legacy taxes is a long-term and esoteric risk; few consumers understand this impact, and most are likely to discount it. The main risks associated with the Internet, identity theft and harm to children, are not obviously associated with VoIP.

Who’s in Control? The consumer is in charge of deploying this technology for their own use. Risks like failed emergency calls are taken on voluntarily (though one can argue about education, notice and choice). Customers have to trust their Internet service providers, but not in any unusual way. As an Internet technology, VoIP also partakes of the halo of citizen empowerment that the web has acquired.

How do I feel about it? The auguries are good on this score, too. The technology builds on the commonly held belief that the Internet empowers individuals, and offers cheap and useful new products. The technology is familiar, since it resembles traditional telephony; for those who have some on-line experience, Internet Voice services like Skype resemble the known technology of Instant Messaging. There is no widely held precedent of something like this having led to disastrous consequences, and the media profile is mixed to positive.
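The two assessments above can be read as a checklist exercise. As a toy illustration (my own sketch, not something from Bruce's paper), the three themes and the "thumbs" verdicts for DRM and VoIP might be encoded like this; the +1/-1 scores are simply my reading of the text:

```python
# A toy encoding of the three-theme checklist applied to the two examples
# in this post. Scores of +1 (favorable) and -1 (unfavorable) per theme
# are my own reading, not a formal methodology.

assessments = {
    "DRM":  {"Why do it?": -1, "Who's in control?": -1, "How do I feel about it?": -1},
    "VoIP": {"Why do it?": +1, "Who's in control?": +1, "How do I feel about it?": +1},
}

def thumbs(scores):
    """Summarize a per-theme assessment as thumbs up/down."""
    up = sum(1 for s in scores.values() if s > 0)
    down = sum(1 for s in scores.values() if s < 0)
    return f"{up} up, {down} down"

for tech, scores in assessments.items():
    print(tech, "->", thumbs(scores))
# DRM -> 0 up, 3 down
# VoIP -> 3 up, 0 down
```

The point of the exercise is not the arithmetic but the discipline: forcing an explicit judgment on each theme before declaring a technology socially acceptable.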
----- ----- -----

[1] Donald M Bruce, A Social Contract for Biotechnology - Shared Visions for Risky Technologies? I found this a very useful and thought-provoking document. I do get a little uneasy, though, whenever someone ascribes opinions and motives to "people" or "the ordinary public"; there’s a narrow line between being an advocate and being patronizing.

I was alerted to the fascinating work done by Dr Bruce at the Society, Religion and Technology Project of the Church of Scotland by an opinion column that he wrote in the New Scientist of 11 June 2005, Nanotechnology: making the world better?

[2] The psychology of risk aversion plays an important role here. When facing choices with comparable returns, people tend to choose the less risky alternative, even though traditional economic calculation would suggest that the choices are interchangeable. When making decisions under uncertainty, people tend to take decisions which minimize loss, even if that isn’t the economically rational behavior. Consequently, there is a greater aversion to high-consequence risks, even if their likelihood is small.
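A minimal numeric sketch of this (my own illustration, not from the cited sources): a sure payoff and a gamble can have identical expected values, yet a risk-averse agent, modeled here with a concave utility function such as the square root, values the gamble less.

```python
import math

def expected_value(outcomes):
    """Probability-weighted average payoff: sum of p * x."""
    return sum(p * x for p, x in outcomes)

def expected_utility(outcomes, u):
    """Probability-weighted average utility: sum of p * u(x)."""
    return sum(p * u(x) for p, x in outcomes)

# Two choices with the same expected value ($50):
sure_thing = [(1.0, 50)]             # $50 for certain
gamble     = [(0.5, 100), (0.5, 0)]  # 50/50 shot at $100 or nothing

u = math.sqrt  # concave utility: each extra dollar is worth a little less

print(expected_value(sure_thing), expected_value(gamble))  # 50.0 50.0
print(expected_utility(sure_thing, u))  # about 7.07
print(expected_utility(gamble, u))      # 5.0 -- the gamble is worth less
```

Traditional expected-value reasoning calls the two choices interchangeable; the concave utility captures why most people take the sure thing.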

[3] We trust another party if we believe that they will act in our best interests in a future, often unforeseen, circumstance. Trust is in large part a matter of a shared vision: how much do we share their values, motivations and goals? Vision is often expressed as a projection of the consequences of a set of perceptions about current situations and trends. This projection is driven by the values held by the visionary; if the values are not aligned, then the vision will not be persuasive. It’s a three-stage process in which perceptions are modulated by values to produce a vision: perception -> values -> vision.

[4] I blogged at some length on this topic last week, under the heading Technology isn't Destiny.
