Monday, February 19, 2007
Rushfield tells the story of the "Bride Has Massive Hair Wig Out" viral video which, it turned out, was a promo by hair products company Sunsilk; and the story of Digg's admission that its ranking is being undermined by people trying to game the system. Deception and manipulation weren't introduced by the web, but they have been accelerated by it. The video looked hand-made, but was an ad; the scale of the Internet vaulted it onto the TV talk shows in a matter of weeks. The mutability of digital media made it impossible to tell whether it was produced professionally or by amateurs. The Digg story hinges on the site's algorithm; founder Kevin Rose has reassured us that everything is under control, and that we can trust his company's proprietary and hidden ranking algorithm.
Sunday, February 18, 2007
According to Alexander, planned cities are “trees,” mathematically speaking, and natural cities are “semi-lattices.” Semi-lattices contain overlapping units; “trees” and planned cities do not. For example, the center of Paolo Soleri’s Mesa City (illustrated in the article) is divided into a university and a residential quarter, which is itself divided into a number of villages, each again subdivided further and surrounded by groups of still smaller dwelling units. Semi-lattices are more complex than trees:
“We may see just how much more complex a semi-lattice can be than a tree in the following fact: a tree based on 20 elements can contain at most 19 further subsets of the 20, while a semi-lattice based on the same 20 elements can contain more than 1,000,000 different subsets.”

Alexander’s claim is not entirely convincing – for example, he never shows that natural cities are semi-lattices – but I’m persuaded that the organic old cities we love are more complex than ones that spring fully formed from the head of an architect.
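Alexander’s arithmetic is easy to check: a strict hierarchy over 20 elements can add at most 19 further groupings, while a semi-lattice may in principle treat any non-empty subset as a unit, and there are 2^20 − 1 of those. A quick sketch:

```python
# Checking Alexander's counting claim for n = 20 elements.
n = 20

# A tree (strict hierarchy) over n leaves contains at most n - 1
# further groupings: each internal node merges existing units.
tree_max = n - 1

# A semi-lattice allows overlapping units, so in principle any
# non-empty subset of the n elements could be a unit.
semilattice_max = 2**n - 1

print(tree_max)         # 19
print(semilattice_max)  # 1048575 -- "more than 1,000,000"
```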
There are often disconnects between what we can recognize as good, and what we can make ourselves. Most people can appreciate skillful violin playing, but can’t do it themselves. Natural cities demonstrate that we can, collectively and over time, produce semi-lattice artifacts. (I’ve claimed that social processes are a way of dealing with problems that are too big to fit into individual brains.) Natural cities just feel right, but when even the greatest minds design a city, they resort to nested hierarchies: a tree structure. Perhaps humans can’t, through conscious intellectual effort, make a persuasive structure with a semi-lattice’s overlapping intricacy.
The cognitive capacity limitations of individual brains may mean that we can’t keep track of enough units concurrently to generate such complexity. Given the limitations on single brains, we have to use simpler rules, like trees, with less satisfactory but more controlled results. This may be why hierarchies are so deeply embedded in software:
“A city may not be a tree, as Alexander said, but nearly every computer program today really is a tree – a hierarchical structure of lines of code. You find trees everywhere in the software world – from the ubiquitous folder trees found in the left-hand pane of so many program’s [sic] user interfaces to the deep organization of file systems and databases to the very system by which developers manage the code they write.” [Scott Rosenberg, Dreaming in Code, p. 185.]

Simple rules can sometimes yield complex results, which is why fractals fascinate us. But while the Mandelbrot set might be pretty, it doesn’t speak to the heart in the way that a great city does. We can see that our minds and machines are inadequate, but we can’t as a matter of individual effort do what is required, which is (if Alexander’s right) to conjure semi-lattices.
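The Mandelbrot set makes that gap between simple rules and complex results concrete: the generating rule is a single line, even though the intricacy it produces never quite adds up to a city. A minimal sketch of the membership test:

```python
# The Mandelbrot rule: iterate z -> z^2 + c and see whether z escapes.
# One line of arithmetic yields endlessly intricate structure.
def in_mandelbrot(c, max_iter=100):
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:   # once |z| exceeds 2 it diverges for good
            return False
    return True

print(in_mandelbrot(0))   # True: the origin never escapes
print(in_mandelbrot(1))   # False: 0 -> 1 -> 2 -> 5 ... escapes quickly
```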
Complexity isn’t always attractive. Simplicity is appealing when it is important to understand something, that is, when we need to fit something inside our brain. Richard Stiennon made a powerful case that Windows is harder to secure than Linux by showing two pictures. The economy (and tree-like structure) of the Linux call diagram argues strongly that it is a more intelligible, and thus more easily securable, system.
There seems to be a connection with the P =? NP problem which, loosely stated, asks whether it is always easier to demonstrate that a given solution is correct than to find it. Many (most?) computer scientists think that P≠NP. The intuitive rightness of this unproven conjecture resonates with the fact that so often we can appreciate a thing of beauty, but can’t make it ourselves.
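Subset sum, an NP-complete problem, is a standard way to make the checking-versus-finding asymmetry concrete: verifying a proposed answer takes one pass, while finding one naively takes exponential search. A hedged sketch:

```python
# "Checking is easier than finding," illustrated with subset sum.
from itertools import combinations

def verify(nums, subset, target):
    # Polynomial-time check of a proposed certificate.
    return sum(subset) == target and all(x in nums for x in subset)

def find(nums, target):
    # Brute-force search: exponential in len(nums).
    for r in range(1, len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return list(combo)
    return None

nums = [3, 34, 4, 12, 5, 2]
cert = find(nums, 9)          # e.g. [4, 5]
print(verify(nums, cert, 9))  # True
```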
Tuesday, February 13, 2007
If I just added them, the list would grow to seven – which may be George Miller’s Magical Number, but is too many for me to remember:
- Brittleness (rigidity, fragility)
- Abundance (intangibility, copyability, non-rivalry)
- Persistence <> Mutability
- Findability <> Opacity
- Scale <> Brittleness
A few loose ideas that I’m still mulling, and that my dear readers may have opinions on:
Abundance is an important attribute of digital media. Economics is often defined as “the study of choice under scarcity.” If scarcity exists in digital systems (human attention, perhaps) it’s of a relatively novel kind in evolutionary terms. As I remarked in Why are virtual worlds so real?, synthetic worlds suppress much of the abundance of digital environments in order to be intelligible and enjoyable.
I’m not proposing that any of these attributes are brand new in human experience. I do claim, though, that individually and collectively they represent a phase transition in the situations we have to cope with. Each individual attribute gains its power from an amplification beyond what we’re used to:
Another weakness in the analysis so far: it’s essentially descriptive, and to some extent explanatory. I have yet to demonstrate that it has predictive power. If I were a social scientist I’d try to extract normative prescriptions, but I’m still too much of a geek for that.
Persistent artifacts aren’t novel. The pyramids have been around a while and will probably outlast most buildings in Second Life; and even intangibles like conversations last in people’s memories. However, the precision of recall and the volume of digital items that can be recorded is sufficiently novel to drive a phase change.
On brittleness: We’re used to things breaking, but the closest we’ve come to the brittleness of software until now has been legal documents, where a misplaced comma can be very significant. Even then, the legal “application” is “executed” in a court of law, where human concepts like intent and reasonableness can provide some resilience.
On scale: “The moment programs grow beyond smallness, their brittleness becomes the most prominent feature, and software engineering becomes Sisyphean.” – Jaron Lanier, Why Gordian Software has convinced me to believe in the reality of cats and apples.
Information overload can be seen as the consequence of combined excesses in Abundance, Findability, Persistence and Scale.
Monday, February 12, 2007
Programming thousands of cores is a hard research question. The shopping list of open tasks in a Berkeley white paper on this topic indicates just how far we have to go. I've observed some work on parallel computing, and I suspect that the cognitive limitations that have lurked below the surface in programming so far (e.g. limits on the number of independent variables humans can think about simultaneously) will rise up with a vengeance in parallel programming. We simply may not be able to figure it out.
Programming tools are surely part of the picture, but tools have typically automated and accelerated activities that humans have already mastered. It's an open question whether we can conceive of 1000-core parallel processing sufficiently well to create tools.
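A tiny illustration of why our intuitions fail here, using Python threads as a stand-in for the many-core problem: a line of obviously-correct-looking sequential code stops being correct the moment it runs concurrently.

```python
# Two threads increment a shared counter without synchronization.
# The read-modify-write in "counter += 1" is not atomic in general.
import threading

counter = 0

def bump(n):
    global counter
    for _ in range(n):
        counter += 1  # load, add, store: another thread can interleave

threads = [threading.Thread(target=bump, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# On many runtimes the total comes out below 400000; even where it
# happens to come out right, nothing in the code guarantees it.
print(counter)
```

The fix (a lock) is one line, but knowing *where* the locks go requires holding every possible interleaving in mind at once – exactly the kind of concurrent bookkeeping individual brains are bad at.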
Hardware proposes, software disposes. . . . Chip performance has grown exponentially, while software innovation has been roughly linear. Many-core chips will make matters qualitatively as well as quantitatively worse.
This mismatch between hardware and software is a weakness in the singularitarian argument that exploding compute power will soon trigger a phase transition in humanity and culture. All those CPU cycles are useless if they cannot be programmed efficiently, and our brains may be the limiting factor. We may need a transhuman mind to get through the barrier to transhumanity. Anyone got a time machine?
“How many networks can one person join? How many different identities can one person sanely manage? How many different tagging or photo-uploading or friending protocols can one person deal with?”
In his reply, O’Reilly remarks on the opacity of the network:
“When one of the big communications vendors (email, IM or phone) gets this right, simply by instrumenting our communications so that the social network becomes visible (and under the control of the user), it seems to me that they could blow away a lot of the existing social network froth.”
Udell was focusing on getting social networks to critical mass, and O’Reilly was pointing the way to more usable applications. However, any solution will also have to deal with the attributes of digital artifacts that make them hard to deal with given our innate cognitive endowment. Scale and opacity are two; others include persistence, findability, and mutability.
Sunday, February 11, 2007
To the activists, music (or other digital content) is a substance. When they hand over money, they feel they have bought this substance. This “stuff” is turned into entertainment through rendering, and they thus feel they have the right to render it anywhere, and anyhow.
Rights holders think they’ve sold a permission: the right for a customer to enjoy specific content in a particular way. The physical item, if there is one, is mostly a token that the customer has the right to play that music in a particular way. Customers, at least if they’re activists, aren’t buying what the rights holders are selling.
Nobody had to think about this much until now, because the music “substance” and the performance “permission” were inextricably tied to a physical medium. With intangible media, though, the consequences of the models diverge:
- To the activists, DRM is a hindrance to their right to enjoy the music-stuff they bought in any way they choose.
- To rights holders, Fair Use is a diminution of the permissions they own.
Given these dissonant views among the experts, how do “ordinary consumers” think about the entertainment they buy? Do they see purchased music as a substance, as a set of permissions, some paradoxical mixture, or something else altogether? I doubt anyone has bothered to ask. The partisans don’t want facts to cloud the issue, and, as Peter Cowhey has explained to me, scholarship on public opinion and policy doesn’t care about the “what do they know” questions. Scholars are more interested in the heuristics that people use to decide complex issues – whose opinion can you trust on an issue you don’t know much about? – and in how the framing of an issue can determine outcomes. I think rights holders over-estimate the power of the permissions frame, which doesn’t resonate with the public’s assumptions about media; and activists over-estimate the degree to which the public experiences the DRM frame as a hindrance. I look forward to seeing the scholarship on this question. My hunch is that consumer thinking is closer to the activists’ view. (Aside: I’d be fascinated to see how many people who have an opinion about the term “DMCA” can explain why it’s a good or bad thing. I suspect even the experts will struggle.)
The dirt/digital differences may exacerbate the problem because digital media are so unlike the physical stuff humans have evolved to cope with. Perfectly copyable, persistent media are counter-intuitive, which accounts for rights holders’ terror of the “release once, gone forever” consequences of non-DRM content. (Not that DRM helps, since it will always be broken...) Opacity also plays a role: Digital media are hard to understand because they’re non-rival and non-excludable. DRM is deeply inscrutable, and even though the license terms sound simple, it’s often hard to understand why, or predict whether, media plays in some places and not in others. DRM tries to convert a non-rival, non-excludable information good into an excludable one. I’ve always found it hard to think about non-rival, non-excludable goods; perhaps I’m not alone.
The problem for rights holders is that there are few things in our everyday experience that fit their model of music as a rights-controlled good. It’s not that people can’t deal with the restrictions that come from “technological boundaries.” Brad Gillespie has pointed out to me that most people didn’t have a problem moving from LPs to 8-track to cassettes to CDs. He observes that they can deal with restrictions based on physics – a single CD can’t play on two different players at once, and an LP won’t play in a CD player – but they don’t accept “virtual restrictions.” The problem is that rights holders have over-estimated the persuasiveness of the permissions frame. They’ve also under-played analogies that might help them, like “buying digital music is like buying a book of concert tickets.”
Of course, as Brad Gillespie notes, legal restrictions like “don’t steal” are not imposed by physics. One thus has to confront the question of why humans obey laws. Rights holders have tried to appeal to consumer self-interest in ads where studio craftspeople say that they wouldn’t have a job making movies if the pirates won. The problem is that there’s an over-supply of creators. That’s always been true (Q: "What does an English major say after graduation?" A: "You want fries with that?"), but finding an audience used to be hard. The distribution problem is now being solved by YouTube et al. The catch is that the Fat Rump still makes most of the money for Hollywood, and that the oversupply is in the Long Tail. Consumers like some Rump as part of a fully balanced media diet, but rights owners have trouble distinguishing between rump and tail.
Monday, February 05, 2007
Machines these days aren’t decipherable any more; you need to read a manual to operate a toaster oven, and men past a certain age bemoan the fact that they can’t work on their cars any more. Digital tools work in ways profoundly different from what we’ve come to expect from living in the dirt world. They are alien, not just opaque. Technology-mediated interactions are swamping our inherent cognitive capacities.
I’m working on a list of ways in which the digital world is different. It started off being descriptive, but I think it has some explanatory power. It may even eventually help predict gotchas in yet-to-be-seen systems. Here are the top five: persistence, findability, opacity, mutability, and scale.
danah boyd’s work on online social networks provided a very helpful basis for this list. My first four items were inspired by hers. Here’s danah in a WireTap interview with Kate Sheppard:
There are four functions that are sort of the key architecture of online publics and key structures of mediated environments that are generally not part of the offline world. And those are persistence, searchability, replicability, and invisible audiences. Persistence -- what you say sticks around. Searchability -- my mother would have loved the ability to sort of magically scream into the ether to figure out where I was when I'd gone off to hang out with my friends. She couldn’t, thank God. But today when kids are hanging out online because they've written [themselves] into being online, they become very searchable. Replicability -- you have a conversation with your friends, and this can be copied and pasted into your Live Journal and you get into a tiff. That creates an amazing amount of "uh ohs" when you add it to persistence. And finally, invisible audiences. In unmediated environment, you can look around and have an understanding of who can possibly overhear you. You adjust what you're saying to the reactions of those people. You figure out what is appropriate to say, you understand the social context. But when we're dealing with mediated environments, we have no way of gauging who might hear or see us, not only because we can't tell whose presence is lurking at the moment, but because of persistence and searchability.
Things that used to be ephemeral have become enduring. Conversations used to linger only in memory, but now they’re recorded. Letters might have survived in shoeboxes, but email is archived indefinitely. Digital artifacts persist because they are easily captured and copied, and because storage is cheap. Thinking out loud in a late-night email or blog post can come back to haunt us.
Like many other items on the list, it’s tricky to discern whether we’re seeing a difference in degree, or in kind. Humanity has had a long and fretful relationship with the recorded word. Plato – didn’t those Greeks think of everything first? – seems to have preferred the spoken to the written word. In the Phaedrus he has Socrates say, “He who thinks, then, that he has left behind him any art in writing, and he who receives it in the belief that anything in writing will be clear and certain, would be an utterly simple person ... if he thinks written words are of any use except to remind him who knows the matter about which they are written”.
We’ve become habituated to the unnatural persistences technology gives us. Still, a residue of magic remains, just as we are occasionally disturbed by seeing ourselves in a mirror. Our old brains are not used to such uncanny permanence.
As danah points out, digital search tools can make the previously hidden suddenly available. Again, we get used to it; I’m finding it increasingly difficult to write off-line, when I can’t check assumptions or follow promising leads instantaneously. Information about the world, and about people, is at our fingertips.
Even though we all take Google for granted, I suspect that we don’t have an accurate mental model of the consequences for material that relates to us. People’s surprise at how easy it is to make a profile about them from online information is a staple of identity theft news reports. It may be related to our tendency to over-value near-term events, and down-play the distant future (hyperbolic discounting).
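Hyperbolic discounting has a standard one-parameter form, value = amount / (1 + k·delay); the point is how steeply value falls off at first and how slowly thereafter. A sketch (k = 0.1 is an arbitrary illustrative choice):

```python
# Hyperbolic discounting: value = amount / (1 + k * delay).
# k here is purely illustrative, not an empirical estimate.
def hyperbolic_value(amount, delay, k=0.1):
    return amount / (1 + k * delay)

# The same $100 shrinks fast at first, then ever more slowly:
for d in (0, 1, 10, 100):
    print(d, round(hyperbolic_value(100, d), 1))
# 0 100.0 / 1 90.9 / 10 50.0 / 100 9.1
```

The steep early drop-off is why the distant consequences of today’s blog post feel so weightless when we hit “publish.”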
The five distinctions I’m working through complement and contradict each other. Here’s the first example: Findability makes persistence meaningful. Not only do people leave a permanent spoor; these shreds and scintillas can be traced – though the stories they tell may easily be misconstrued.
I can seeeeeee you! – not. Asymmetric visibility is everywhere in digital spaces. In its social form, it’s danah’s “invisible audiences.” Anybody can watch you write yourself into being on the web, but you often can’t be sure who’s out there. Even if you do know, they may expect mutually incompatible things from you (parents vs. peers). As danah’s pointed out: until the web, only celebrities had to figure out how to deal with the problems of persistence, findability and invisible audiences.
Not only can’t we see the audience, we can’t see the system’s workings, either. We’ve been dealing with inscrutable intelligent agents for aeons, of course: other people, and the politics we’re immersed in. However, computer systems often don’t behave like humans, in part for the reasons I’m listing here. When they do try to emulate humans, it’s a thin veneer at best, and usually an embarrassing kludge.
Every computer user can list features that they find baffling; I, for example, struggle to understand how various browsers implement saved RSS feeds. Often users don’t know that they don’t know. Many people with a non-technical background don’t understand what cookies do, and what information is being saved. When they’re exposed to cookie management tools for the first time, they’re shocked to discover how prevalent cookies are. Younger people understand that they pose a threat to personal privacy, but are resigned to it. The power of cookies shows how the various differences augment each other: cookies are not only persistent, findable traces, but are also hidden from many users.
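For readers who have never looked inside one: a cookie is just a small named value the browser returns with every request, which is exactly what makes it a persistent, findable trace. Python’s standard library can parse one (the names below are invented for illustration):

```python
# Parse a Cookie header the way a server would.
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie.load("session_id=abc123; tracker=xyz")  # hypothetical names

print(cookie["session_id"].value)  # abc123
print(cookie["tracker"].value)     # xyz
```

Nothing mysterious is inside, yet because the exchange happens silently on every request, the mechanism stays opaque to most of the people it tracks.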
Mysterious mechanisms are understood as magic. I hoped to find an extensive literature on superstitions about computers and software, but didn’t find any articles on a first pass. Such studies would be a fertile source of information about mental models, and could help predict how users absorb new technologies.
When the store layout changed in my local supermarket, everybody was disconcerted; people were still asking employees for directions months later. Magazine layouts change more often, and web designs more often still. In these cases one still has to do the redesign work and risk confusing one’s customers, but at least one doesn’t have to move all those atoms about. Digital artifacts are easier to change. This gives designers a great deal of flexibility, but at the price of destabilizing markers that are useful for navigation.
Change is also effectively invisible in the digital world. Flux, replication, multi-media, cross-over, impermanence and blending are rife on the web – but there are no palimpsests. The new blue pixel is as crisp and bright as the old green one. One can reveal a history, as wikipedia does, or track changes using databases, but it takes additional effort and/or money. Scholars using digital media are disconcerted by the lack of stable reference. If the content cited can change invisibly, how do you know what you’re referring to? With paper references you could, for example, refer to the “renowned 11th edition of Encyclopædia Britannica” and be sure about the target content. I often link to Wikipedia, but there’s no guarantee that a link will in future provide the information I’m expecting. Uncertainty is the price we pay for currency.
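Revealing a history is mechanically simple – it is the habit of keeping versions that costs effort and money. A sketch of the kind of comparison a wiki history page performs, using Python’s standard difflib (the sample text is invented):

```python
# Make an invisible digital change visible again by diffing two versions.
import difflib

old = "The renowned 11th edition\nof the Britannica\n"
new = "The renowned 15th edition\nof the Britannica\n"

for line in difflib.unified_diff(old.splitlines(), new.splitlines(),
                                 fromfile="v1", tofile="v2", lineterm=""):
    print(line)
# lines prefixed "-" were removed, "+" were added, " " are unchanged
```

The catch, of course, is that a diff requires both versions; once the old green pixel is overwritten without a copy, no tool can recover it.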
Digital media can also be easily copied and morphed from one place to another. danah’s replicability attribute (see above), where someone can cut and paste a conversation out of IM into their LiveJournal, is an aspect of mutability. Mash-up, remix and plagiarism become easier as mutability increases. They have always been part of culture, but as the barriers to copying come down, their prevalence goes up. Materials don’t have the persistence they used to have in the Stone Age; can our neolithic instincts keep up?
Mutability is a curious dual of the persistence attribute. Digital artifacts are more persistent because they’re so easy to copy; embarrassing videos are replicated across the web faster than they can be taken down. At the same time, though, changes are just as easy to propagate. As the clones proliferate, they can mutate.
Information overload is not just hype; we deal with hugely more factual and social information than people only a few centuries ago. Dominique Foray points out in The Economics of Knowledge that the great tenth-century thinker Gerbert d’Aurillac had a library containing no more than twenty books, which was quite a lot in those days. A Google search for his name yields about 24,800 hits. . . .
Human society evolved in the context of small groups, and the number of people we can have genuinely social relationships with seems to be limited to around a hundred people. This wasn’t a problem until very recently in evolutionary time: “In the early 1500s one could hike through the woods for days without encountering a settlement of any size. Between 80 and 90 percent of the population ... lived in villages of fewer than a hundred people, fifteen or twenty miles apart, surrounded by endless woodlands. They slept in their small, cramped hamlets, which afforded little privacy ...”.
The limitation is probably built in. Dunbar has shown that “mating system[s] and/or mean group size are a function of relative neocortex volumes in primates. This relationship suggests that the size of the neocortex imposes some kind of constraint on the number of social relationships that an individual can keep track of within its social group”.
Email and online social networks enable us to interact with thousands of people. That’s not new in one sense: humans have been able to coexist with thousands of strangers without either killing them or running away since the invention of cities in the neolithic. However, digital socialization provides an aura of intimacy which triggers behaviors optimized for face-to-face interaction. The number of relationships we’re expected to participate in grows, but our means of dealing with them doesn’t. This can be dangerous, as persistence, findability, opacity and scale conspire to confound expectations: one is speaking to a large, invisible audience, and one’s words don’t go away. Microsoft executive Jim Allchin had learned the hard way during the 1998 anti-trust trial that emailed words, like “We need to slaughter Novell before they get stronger”, could come back to haunt him. Yet, in 2004, he wrote this to what seemed at the time to be a small circle of colleagues: “I would buy a Mac today if I was not working at Microsoft.”
Intractable scale also characterizes software engineering. It’s a truism among software engineers that very large interlocked code bases are beyond the grasp of any individual mind. I’ll be coming back to this topic in future; for now, I’ll just note that limitations on working memory apply to software engineering (see Problems Revisited), and software’s intangibility combined with ballooning memory and processing capacity encourage developers to combine more and more functionality into a single package.
Computing tools can help humans work at previously impossible scales, but many of them feel uneasy about that. Computer-assisted proofs, such as that for the four-color theorem, have generated controversy among mathematicians. Lengthy computer-assisted proofs that could not be replicated by humans somehow didn’t feel “real” to the skeptics. Perhaps proponents of computer-assisted proofs just need to wait, as Thomas Kuhn suggested, until “its opponents eventually die, and a new generation grows up that is familiar with it” – or perhaps the concern now raised will continue to linger, just as Plato’s unease about the written word still finds echoes today.
----- Notes -----
 The list is still evolving. Here are some aspects of digital life which don’t fit yet: the rigidity and fragility of computer systems; the tension between personalization and the one-size-fits-all design of most software; the non-rival and non-excludable nature of digital media.
 Plato, Phaedrus 275c. See also William Chase Greene, The Spoken and the Written Word, Harvard Studies in Classical Philology, Vol. 60, 1951 (1951), pp. 23-59.
 Vicki Ha, Kori Inkpen, Farah Al Shaar, Lina Hdeib, “Work-in-progress: An examination of user perception and misconception of internet cookies,” CHI ’06 Extended Abstracts on Human Factors in Computing Systems, April 2006
 Bob Stein, personal communication, 19 Jan 2007
 William Manchester, A World Lit Only by Fire, Back Bay Books, 1992, pp. 50-2. Cited by delanceyplace.com 01/25/07-life in the 1500s
 R. I. M. Dunbar, “Neocortex size and group size in primates: a test of the hypothesis,” Journal of Human Evolution, Volume 28, Issue 3, March 1995, Pages 287-296
Friday, February 02, 2007
The CogNexus Institute defines a wicked problem as one for which each attempt to create a solution changes the understanding of the problem. Wicked problems seem to be inextricably linked to social context (Jeffrey Conklin, Nancy Roberts, Jerry Talley); solving them calls for a sociological approach, even in technical settings like software development.
The root of wickedness lies in difficulties understanding such problems. No one person’s brain (processing + experience) is big enough to encompass the entire conundrum. Social animals succeed by dividing up problems among many brains – the wisdom of crowds. Each brain can take in and process its small part, but the interactions between the parts are now externalized as politics and conflict.
If many of the systems problems facing software developers are wicked problems (DeGrace & Stahl), and if wicked problems are a social solution to human cognitive constraints, then the root cause of many pernicious software problems is cognitive – not social.
Thursday, February 01, 2007
Dawkins observes, “We find real matter comforting only because we’ve evolved to survive in ‘Middle World,’ where matter is a useful fiction.” Middle World is “the range of sizes and speeds which we’ve evolved to feel intuitively comfortable with.” He argues that the world is much broader, and he extends the point to sensory modalities that other animals rely on: sound for bats, and smell for dogs. What we see is a model of the world; the model is constructed to be useful to the body for which it is created. “The nature of the model is governed by how it is to be used.” He talks about “our brain’s evolutionary apprenticeship in Middle World.”
The challenge we face is that the uses to which we put our models of the world are changing radically, since we’re changing our world to contain not just solid matter, but increasing numbers of interacting abstractions. Our apprenticeship has not prepared us for this new world. We have to make crutches for our brains to succeed in it.