Wednesday, March 29, 2006

Network Integrity

Like so much else in Washington DC, the debate about the future regulation of consumer Internet access is being driven by extremists. It is time to stake out the middle ground.

Network integrity, the fact that any node can reach any other node, is the key Internet attribute that can and should be protected. The lack of effective competition in wireline Internet access justifies the imposition of non-blocking, transparency and inter-connection requirements on consumer Internet access providers. If providers can demonstrate that there is effective competition in a particular market, then these requirements can be waived.

Mandating pure network neutrality, that is, prohibiting network operators from doing any traffic shaping whatsoever, is unrealistic (shaping is already widespread) and risky (it would slow the deployment of competitive broadband networks). The fact that some advocates for network neutrality were willing to accept the Annenberg Center Principles suggests that they’re willing to tolerate prioritized Internet traffic, as long as all sites remain available to consumers in at least some way. In other words: prioritization is acceptable, as long as there’s no blocking.

Trying to specify the parameters of the network in any detail (implicit in the Annenberg Center Principles and "basic tier" mandates) won't work. This approach presumes an efficient regulatory process that does not exist, and will lead to large amounts of regulation with uncertain and unintended consequences. There is good reason why light-touch, technology-neutral regulation is generally accepted as an important principle.

However, supporting light touch regulation does not mean that one should shy away from the fact that high speed consumer broadband access is today at best a duopoly, and will remain so for the foreseeable future. Both cable and telco industry players have a history of using incumbency to minimize competition (as any good company should), and one can assume they will seek to do so again in this area. While large companies like Google, eBay and Amazon can fend for themselves, emerging innovations deserve some protection.

The key property of the Internet that can and should be protected is integrity, that is, the ability of all end-points to connect to each other.

The Network Integrity proposal reasons as follows:

  • Effective competition is lacking in wireline consumer broadband access.

  • The lack of effective competition justifies the imposition of non-blocking, transparency and inter-connection requirements on wireline Internet access providers (see Network Integrity Conditions below).

  • If providers can demonstrate that effective competition exists in a particular market – they bear the burden of proof – then the Network Integrity Conditions on them can be waived.

The Network Integrity Conditions:

  1. Consumers should have access to their choice of legal content, including the freedom to run applications of their choice and attach any devices they choose to the connection in their homes. Blocking of sites, applications or data types is not allowed. Such freedom may be constrained by service plan limitations, or to prevent harm to the provider’s network.

  2. The service provider is free to enter into arrangements with 3rd parties to improve content delivery, but may not degrade the service of specific 3rd parties.

  3. Service providers shall interconnect directly or indirectly with the facilities and equipment of other broadband networks.

  4. Consumers should receive meaningful and understandable information regarding their service plans, including limits placed on usage, how the Internet service provider prioritizes or otherwise controls content that reaches them, the disclosure of contracts with 3rd parties that affect the delivery of content, and how their personal information will be used.

Monday, March 27, 2006

Complexity and Hard Intangibles

Both Kyril Faenov and Julian Bleecker have recently pointed me towards complexity as a way to explore Hard Intangibles.

Julian pointed me to Delix & Dum’s roadmap for research into complexity. They define the field this way:

“Broadly speaking, complex systems consist of a large number of heterogeneous highly interacting components (parts, agents, humans etc.). These interactions result in highly non-linear behavior and these systems often evolve, adapt, and exhibit learning behaviors.”

This description points toward two areas where I suspect our intuition is likely to fail: software, and exponential growth.

  1. Large pieces of software consist of many interacting components, and failure modes are often non-linear. (See my Software project management: an Innately Hard Problem.)
  2. Kurzweil argues that we are unable to grasp the implications of exponential growth because our intuitions lead us to linear extrapolations. Exponential growth is monotonic but non-linear – in fact one of the simplest cases of non-linear behavior (see the sketch below).
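As a throwaway numeric illustration of Kurzweil’s point (my own toy example, not his): take a quantity that doubles every two years and compare it with the straight-line trend an intuitive extrapolation from its first few years would give you.

```python
# Toy comparison of exponential growth (doubling every 2 years) with the
# linear extrapolation our intuition tends to make from the early data points.

def exponential(year, doubling_time=2.0):
    """Value of a quantity that starts at 1 and doubles every doubling_time years."""
    return 2 ** (year / doubling_time)

# Fit a naive straight line through the values at years 0 and 4,
# then extrapolate both curves out to year 30.
slope = (exponential(4) - exponential(0)) / 4.0

for year in (0, 4, 10, 20, 30):
    actual = exponential(year)
    linear_guess = exponential(0) + slope * year
    print(f"year {year:2d}: actual {actual:8.0f}   linear guess {linear_guess:6.1f}")
```

By year 30 the linear guess is off by a factor of more than a thousand (about 24 versus roughly 33,000), which is the kind of error Kurzweil has in mind.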

Kyril referred me to Ko Kuwabara’s Linux: A Bazaar at the Edge of Chaos, which treats the Open Source development process as a complex adaptive system. He analyses the development of Linux as an evolutionary process, and claims to see self-organization at work.

Two questions come to mind:

  1. Is complexity “human-hard”, that is, is it hard for humans to think about?
  2. If so, why?

I’ve argued that non-linear equations are certainly “plain ol’ hard”, that is, difficult to solve whether one is human or not, because their solutions are in general exquisitely sensitive to initial conditions, leading to lousy predictions.
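To make the sensitivity concrete, here’s a toy sketch (mine, not from any of the work cited above): the logistic map, one of the simplest non-linear systems, run from two starting points that differ only in the sixth decimal place. Within a few dozen iterations the two trajectories bear no resemblance to each other.

```python
# Toy illustration of sensitivity to initial conditions: the logistic map
# x_{n+1} = r * x_n * (1 - x_n), with r = 4.0 (the chaotic regime).

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.300000)   # one starting point
b = logistic_trajectory(0.300001)   # differs in the sixth decimal place

for n in (0, 10, 20, 30, 40, 50):
    print(f"step {n:2d}: {a[n]:.6f} vs {b[n]:.6f}   (difference {abs(a[n] - b[n]):.6f})")
```

By around step 30 the two runs have diverged completely, which is why long-range prediction of such systems is hopeless even when the governing equation is known exactly.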

I suspect complex systems are hard for humans to grasp, not least because the study of such systems is a relatively young research field; if it were an easy topic, it would’ve been addressed much earlier. As a practical matter, the difficulty most of us have thinking deeply and broadly about social problems suggests that even something we’ve evolved to be good at (inter-personal dynamics) becomes difficult at large scale.

The difficulty comes in part from the large number of variables in play in complex systems. It has been known for some time (cf. George Miller’s 1956 paper The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information) that we have an effective channel capacity of about 2.5 bits across a range of cognitive and perceptual tasks. Thinking fluently about complex systems would require us to hold a much larger number of concepts in short-term memory.

A further complication is that these systems require us to think about processes rather than things. According to Carey and Markman [1],

“Adults represent the world in terms of enduring solid physical objects. People, rocks, and tables are all conceptualized in the same ways with respect to certain core principles: all physical objects are subject to the constraints that one of them cannot be two places at the same time, and that two of them cannot coincide in space and time. Objects do not come into and out of existence, and they occupy unique points in space and time. These aspects of our representation of the world are certainly deeply entrenched in the adult conceptual system.”

The interactions among large numbers of components seem quite dissimilar from the concrete mechanisms to which the foundations of our conceptual system are tailored.

----------

[1] Susan Carey and Ellen M Markman, Cognitive Development, Ch. 5 p. 203 in the survey volume Cognitive Science, Bly and Rumelhart (eds.)

Friday, March 24, 2006

Media business beyond DRM

Blair Westlake alerted me to Bill Thompson’s column How to right the copyright wrongs, which might represent a groundswell of opinion around copyright. Thompson argues that DRM technology is doomed to fail, and he decries “[... the] belief that rigorous enforcement of technological restrictions, backed up by the ruthless application of draconian laws that allow the replacement of copyright with contract law and criminalise activities which used to be considered legal - or acceptable even when not clearly legal - will enhance the market, keep customers coming back for more and ensure the future success of the ‘content industries’.”

He goes on, “My opposition to DRM is not an opposition to copyright, or a claim that copyright is dead. But current attempts to use technology to enforce restrictions on use, restrictions that often go beyond those copyright law would demand, are unacceptable.”

As is columnists’ wont and privilege, he does not go beyond outrage to indicate workable alternatives.

Three recent experiences of mine point towards a solution:

  • Altruism. The content industry (like many businesses) is working on a flawed model of human nature. I’ve just come from a retreat center which is entirely run by volunteers; room, board and teaching are all provided free. Human motivations are like the wings of a bird: selfishness and altruism. Both are needed to make our way through life. Neoclassical economics assumes just selfishness, and a decade of behavioral economics has shown how poorly such models predict what humans actually do. Some on the Left would have us believe, on the other hand, that All We Need is Love; that’s incorrect, too.

  • Anticipation. There was an article in the weekend WSJ [1] about the applications of positive psychology (a relatively new field that studies what makes people happy) to business. Brian Knutson, a Stanford professor of psychology and neuroscience, argues that “anticipation is totally underestimated.” He studied the brain activity of people playing video games in which the anticipation of winning and actually taking possession of one’s loot were separated [2].

  • Analog. When I asked Matt Corwine how he made a living – he composes techno music, plays at dance parties, licenses audio clips to Japanese ad companies, etc. – he replied, “I sell everything that isn’t digital.” Another of his memorable quotes was, “Smart people treat recordings as a marketing expense.” Large music companies are beginning to tout themselves to bands as merchandizers: the CDs, yes, but also the tour and the t-shirts and the ring-tones.

Here are some implications that point towards a viable non-DRM media business model:

  • Assume Altruism. Ask people nicely to give you money in return for your work, and you may be pleasantly surprised by their response – especially if you’re a neoclassical economist. Behavioral economics is now so established that companies can easily find a reputable academic to produce some numbers to keep their investors on-side.

  • Advertise into Anticipation. Product placement aside, it’s hard to make people watch ads while they’re in the middle of a media experience. Accept this, and make money from the moments before the media is consumed. (There may also be an after-glow which is exploitable.)

  • The Analog Halo. Money can be made where there is scarcity, and digital media are intrinsically non-scarce because they can be so easily copied. Hence, make money everywhere else. The whole premise of the “analog hole” – that revenues will leak out of analog media not protected by DRM – is wrong; in business terms, there’s a Digital Hole and an Analog Halo.

----------

[1] Jeffrey Zaslow, Happiness Inc., Wall Street Journal March 18-19, 2006, p. P1. Subscription required.

[2] See eg Brian Knutson and Richard Peterson, Neurally reconstructing expected utility, Games and Economic Behavior 52, 305-315

Wednesday, March 22, 2006

The Cathedral’s fallen down again

It is said that Medieval architects learned to build cathedrals by adding blocks until the structure fell down, and then trying it again a little differently. At some point, though, limestone and flying buttresses reached their physical limits – one can build a Gothic cathedral only so high, no matter the design.

Microsoft announced yesterday that the consumer version of Vista, its new operating system, will be late. The server version is still on schedule. This slip comes on top of the massive delays already incurred by the OS upgrade formerly code-named Longhorn.

The consumer wing of the Windows cathedral has fallen down, to no-one’s great surprise. The Windows code base was known to be at the limits of engineerability even when Windows XP shipped. The delay in Longhorn was the first admission that it might be inching over the cliff.

Many mundane reasons are given for the delay: security, usability, backward compatibility, etc. The underlying reason is the business decision, which goes back at least to the integration of IE into the OS (a decision that’s recently been reversed), that an integrated Microsoft code hairball was the best way to lock in customers and prevent the break-up of the company.

I must now retract the physical analogy which opened this post. While cute, it’s wildly inappropriate. A large software system and a big building have very little in common, structurally or in engineering terms. Software doesn’t suffer from gravity, its pieces can’t easily be isolated, and its complexity is vastly greater.

And yet… using concrete metaphors to think about abstractions is perhaps the deepest reason (physical metaphor ;-) why Vista has slipped. Neither ordinary folk nor the engineers building Vista can avoid such thinking, since human cognition is grounded in the physical world. Unfortunately, the Windows operating system has become simply too vast for the unaided brain to think about, and tools to help us do so have not yet been perfected.

Tuesday, March 21, 2006

Problems with “basic access broadband” mandates

The Annenberg Center Principles for Network Neutrality, released yesterday, hinge on the notion of “basic access broadband”, a guaranteed minimum level of neutral Internet connectivity.

The Principles are a worthy contribution to the debate about the regulation of US broadband. (Disclosure: I was a participant in the workshop where these ideas were discussed.) They flesh out the middle ground between the positions of net neutrality purists and free market purists, bringing together some shared values (win/win for operators and consumers, light-touch regulation, transparency, and competitive entry) in a way that marks out common ground.

However, I’ve realized that the key innovation in the Principles – basic access broadband – is attractive in principle but unworkable in practice.

Here’s a summary of the problems, which I’ve discussed in more detail here:

  • One can’t set a meaningful “national minimum absolute speed”
  • One can’t set a meaningful “percentage of bandwidth”
  • Basic Access Broadband as a tier leads to price control, which the FCC can’t deliver
  • Basic Access Broadband as a floor will not prevent bias in favor of some content providers
  • A mandated minimum precludes socially beneficial offers, like very cheap but non-neutral broadband access
  • A plethora of parameters will have to be specified, leading to endless politicking and litigation
  • Basic Access Broadband doesn’t prevent bias in favor of rich players, but will exact a high price in unintended consequences and regulatory capture

In short, while Basic Access Broadband sounds simple, its implementation will be dangerously complex. In the words of Yogi Berra: "In theory, there is no difference between theory and practice. In practice, there is."

Sunday, March 19, 2006

You might be getting old if

With a nod to Jeff Foxworthy: You might be getting old if...

your favorite radio channel is called “Classic Something-or-other”

the text on web sites seems to get smaller every month

you believe wisdom is a much more important virtue than quickness

your body breaks and doesn’t heal

you’re getting better at giving advice, and at taking it

a nap seems like a great way to spend a Saturday afternoon

Thursday, March 16, 2006

Magic for the imagination

Christopher Ireland had a fascinating reaction to my recent post asking Why are Virtual Worlds so real? She writes (emphasis added):

“On the topic of games and magic, I just bought the latest version of Civilization (IV). This is a game I've played since it was created. […]

“Part of why it can hold my attention so long is its magic elements. In my case, the magic is a combination of perspective and narrative. The perspective magic is being able to see and impact an entire world. In no other situation can I have that vast of a view and that complex of an interaction. The narrative magic is a little harder to explain but it's probably the bulk of the appeal. As the player, I’m trying to see if my philosophical beliefs can "win," and in the game I have the ability to do "what ifs" that are not possible anywhere else. I can try out numerous different scenarios and approaches, all held together by the narrative of cultural growth and expansion. The fact that i can try this over and over in any number of different permutations with full resolution of cultural elements is magic to me. […]

“I think what makes some games seem "magic" to me (and maybe many others) is that they require an integration of that which the software directly creates and that which the software allows me to create. If it was just a product of the software, i would feel more like a spectator and the magic would not be any different from what i see on TV or in a movie.”

Christopher’s distinction complements the taxonomy I was exploring in the earlier post. I was trying to match cognition, physical reality, and the kinds of games that make sense to users. Her examples fit in my Category 2 (magic we can imagine, and that is found in games), but go well beyond the simple physical cases I listed. I think she’s describing two ways in which the software extends our imaginative reach:

  1. Perspective magic allows one to have a “sensory” view which one can’t attain physically.
  2. Narrative magic allows an “emotional” view which one couldn’t have had otherwise.

Her two cases align with the distinction philosophers and cognitive scientists make between “sensory experiences” (seeing a flower or hearing a piano sonata), and “propositional attitudes” (psychological states like belief or desire). Perspective magic enables a new kind of sensory experience, and narrative magic allows new propositional attitudes.

Monday, March 13, 2006

The 20th Century Blip

Matt Corwine, a musician and writer, said to me the other day that the last hundred years will probably be seen as a blip in the music industry. The current decentralization of production and distribution is returning us to a time when making and listening to music is often a personal affair, much like it was before the big media companies came to power in the early 20th Century.

The news business may also be emerging from a bubble. “Authoritative news” was an artifact of the post-War social consensus. The Big Three US TV networks had their heyday in the Fifties and Sixties, and the period since then has also been a time of consolidation in print and radio. Yellow journalism and flaming polemic are the media’s default mode, and we’re returning to it. It’s not the end of the world; given the choice, I’d take Fox News over Fifties Conformism any day.

I shared this insight with Peter Rinearson, who then asked a great question: Were the last fifty years an aberration in other ways, too? We kicked some ideas around, and came up with this list:

  1. A centralized music industry
  2. Authoritative news
  3. Unquestioned support for intellectual property rights
  4. Peace and stability
  5. Rapid innovation
  6. Being overwhelmed by technological change
  7. No global epidemics

Not all of these bubbles were the same length – and my confidence that they’re temporary phenomena varies.

Intellectual property rights

The rise in the enforcement of intellectual property rights is relatively recent. The Economist’s Oct 2005 survey of patents and technology reports that immediately after America's declaration of independence, its government made it official policy to steal inventions from Europe, expediting the country's rise as an industrial power in the 19th century. Poor countries are increasingly combative about the intellectual property demands being made on them by rich countries, and the sharing of digital media is forcing a reexamination of the social contract embodied in copyright law. We could end up back in a world where intellectual property rights are only a secondary method for protecting investments. As Corwine said to me, “[I’m] selling things that can’t be digitized.”

Peace and stability

If one discounts the US wars in Korea and Vietnam, and other terrible local conflicts, the last fifty years have been a remarkably peaceful time. In the preceding half-century there were two world wars, and the 19th and early 20th centuries saw many clashes of empire: the Napoleonic Wars (1799–1815), the Crimean War (1853–56), the Franco-Prussian War (1870–71) and the Russo-Japanese War (1904–05).

Rapid Innovation

Innovation has been accelerating since the 1850s. Some scholars claim that the innovation of the computer age isn’t in the same league as that of earlier periods (eg Robert Gordon - PDF), or that innovation per capita is slowing (eg Jonathan Heubner - article). I’d guess that innovation will continue to accelerate since new innovation feeds on old in a cumulative way. However, society’s ability to absorb innovation has human limits (see my post The long wobble from idea to implementation), and it’s possible that we may start bumping up against them sooner rather than later.

Response to Change

All this innovation has led to constant change; people have felt overwhelmed by new technology for the last 150 years. I think we’ve now got used to it; change is so institutionalized that a slow-down in innovation is probably a greater social and economic risk than a continuation of the current rate. And if innovation flattens out, just such a wrench is in the offing.

Global Epidemics

We’ve been lucky with pandemics – the last big one was the flu of 1918, which killed 50-100 million people. (About 20 million people died of AIDS in the first 20 years of that epidemic.) But the rise of rapid international movement of both food and people means that infectious diseases are much harder to localize than they used to be. On the other hand, biochemistry is now a very advanced science, and our understanding of diseases and ability to respond is unprecedented. Your position on this issue boils down to this: Who can innovate more rapidly, viruses under evolutionary pressure or scientists? In other words: Are You Willing To Bet Against The Bug?

Friday, March 10, 2006

You’re as likely to lose an email as the checked bag on your next flight

More email is lost than I thought. The loss rate is at least 0.7%, or 7 messages in 1,000 [1]. If you send ten messages a day, that means about 25 will go astray each year.

For comparison, airlines operate with 3 to 10 lost baggage reports per 1,000 passengers on major airlines [2].

As for the old-fashioned way of doing it: the British Royal Mail has admitted that more than 14 million letters and parcels were lost, stolen, damaged or tampered with in 2005, out of 22 billion items handled [3]. That’s about 0.06% – more than ten times better than email!
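For the record, here’s the back-of-envelope arithmetic behind those comparisons (a quick sketch using the numbers cited in the footnotes):

```python
# Back-of-envelope comparison of the loss rates cited above.

email_loss_rate = 0.007                  # ~0.7% of messages silently lost [1]
messages_per_day = 10
lost_email_per_year = email_loss_rate * messages_per_day * 365
print(f"Email:      ~{lost_email_per_year:.0f} messages lost per year")

bag_report_rates = (0.003, 0.010)        # 3-10 reports per 1,000 passengers [2]
print(f"Baggage:    {bag_report_rates[0]:.1%} to {bag_report_rates[1]:.1%} of passengers file a report")

royal_mail_lost = 14e6                   # items lost, stolen, damaged or tampered with in 2005 [3]
royal_mail_total = 22e9                  # items handled in 2005
royal_mail_rate = royal_mail_lost / royal_mail_total
print(f"Royal Mail: {royal_mail_rate:.2%} of items; email is ~{email_loss_rate / royal_mail_rate:.0f}x worse")
```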

--------------------------

[1] Research cited in Sharad Agarwal, Venkata N. Padmanabhan and Dilip A. Joseph, SureMail: Notification Overlay for Email Reliability: “Afergan and Beverly [4] report that some mail servers exhibit a silent loss rate of over 5% while Lang and Moors [8, 10] report an overall silent loss rate of 0.69%.”

[2] CNNMoney.com, Grumbling grows among airline passengers, 17 Feb 2006, citing US DoT statistics. The loss rate in terms of bags checked will be different, but it’s hard to know how: increasing numbers of people don’t check luggage, which reduces the bags-per-passenger ratio; but those who do check luggage usually check multiple pieces.

[3] BBC News, Royal Mail fined for missing post, 10 Feb 2006

Tuesday, March 07, 2006

Why are virtual worlds so real?

I’ve become interested in virtual worlds for the way they can cast light on the limits of our ability to function in information spaces. [1]

I’m struck by the rather limited kinds of magic that one finds in spaces where, technically, anything should be possible. Understanding why some kinds of magic don’t “work” should help us map the limits of our physical intuition when applied to non-physical worlds – like information systems.

Scarcity

In his seminal 2001 paper, Ted Castronova compares the current crop of virtual worlds (VWs) with their predecessors, the failed first generation “avatar spaces” like Alpha World. He observes:

Their failure helps identify the source of the success of VWs, because there really is only one major difference between these avatar spaces and VWs: Scarcity. Nothing was scarce in the avatar space. A user could create as many avatars as desired; all avatars had equal abilities; the user could build without limit, as long as the desire to write code persisted. The activities of one avatar posed no real obstacle and imposed no significant cost on any other avatar's activities. And, somewhat shockingly, scarcity is what makes the VW so fun.

It’s hard work to add scarcity to a synthetic world, since the “natural state” of software is perfect, free, infinite replication. Here’s a Second Life Herald report on a mania for hand-made tulips raging in that world:

In a world sated with freebies and copies of just about everything bulging the avatar inventories, the exquisite prim-crafted tulips of Desmond Shang have sparked a Tulipmania worthy of the old meaning of the term for creating scarcity and paying high prices in secondary markets.

Shang is reported to have said that, “concern that they might be duplicated somehow did prove to be a huge headache.”

Three kinds of magic

If we use the term magic to describe things which don’t occur in the physical world [2], we can define three kinds:

  1. Magic we can’t imagine
  2. Magic we can imagine, and that is found in games
  3. Magic we can imagine, but that is not found in games, or that is found but doesn’t seem to lead to “interesting” game play

I can’t give examples of the first category, by definition. It’s a squishy category since some people are more imaginative than others. However, I believe that this is a non-empty category, and one that might shrink over time as our imagination is trained through cultural development.

Category 2 (magic found in games) includes the following:

  • Accelerating existing effects – healing, teleportation
  • Humans doing what natural forces do – lightning bolts, water spouts
  • Changing the nature of materials – strengthening armor, transmutation of elements, summoning something from nothing
  • Non-physical effects, eg invisibility

(Finding/creating an authoritative taxonomy of workable game magic is a ToDo of mine; any pointers gratefully received.)

Category 3 (magic we can imagine, but that is not popular in games) includes features which are technically easy to implement, but which game players (and by assumption, humans in general) find hard to deal with:

  • Abundance – scarcity seems necessary to make worlds fun. Exhaustion and constraints apply to magic in category 2: one can make fire appear, but is then exhausted for a period.
  • Living in three dimensions – not just flying from point to point, but living in a way not constrained by moving in a plane
  • Richly linked stories – hypertext narratives were all the rage a decade ago, but have proven sterile as a genre; humans need a beginning, middle and end

This third category provides a tool for understanding the limits of cognition applied to knowledge-based systems.

Relevance to Software

Software seems to be Category 3 Magic: we can (obviously) imagine it, but it goes against the grain of our intuitions.

  • Abundance is a given with software, and with digital goods in general: they’re perfectly copiable at zero marginal cost
  • Software is a high-dimensional artifact. As Allen Brown pointed out after I gave him my “hypercube” example: “The mathematical abstraction of problems with any sort of dynamics seems to result in a state space with four or more dimensions. [The] problem of concurrency is exactly such a problem: (at least) two agents execute two pieces of code on two time lines. This results in four dimensions.”
  • A program executes in a richly linked way as soon as it can call sub-programs. One can’t describe its functioning as a story that would be intelligible to a human being.

We begin to see why large software projects so often go wrong: they’re hard for humans to think about. (See my earlier posts Software project management: an Innately Hard Problem and Hard Intangibles.) Since software is a good model for complex, real-time interlinked processes in general, one might expect that these, too, will provide a challenge for unaided human thought.
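To put a rough number on Allen’s concurrency point, here’s a small sketch (my own toy, not his formulation): count the ways a scheduler can interleave two agents’ step sequences while preserving each agent’s own ordering.

```python
from math import comb

# Two agents each execute a fixed sequence of steps; a scheduler may interleave
# them in any order that preserves each agent's internal ordering. The number of
# distinct interleavings of an m-step and an n-step sequence is C(m + n, m).

def interleavings(m, n):
    """Number of ways to interleave two sequences of lengths m and n."""
    return comb(m + n, m)

for steps in (2, 5, 10, 20):
    print(f"{steps} steps each: {interleavings(steps, steps):,} possible executions")
```

Twenty steps apiece already allows over a hundred billion distinct executions – none of which maps onto a single story a human could follow.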

----------

[1] The rest of boomer society has also suddenly become interested – see all the coverage of Castronova’s work, and Beck & Wade’s book Got Game.

[2] Some resources listing kinds of magic:
http://www.frontiernet.net/~jamesstarlight/Eightmagics.html
http://www.santharia.com/magic/magic.htm
http://en.wikipedia.org/wiki/Magical
http://www.eqmages.com/index2.php3?page=spells

Sunday, March 05, 2006

The long wobble from idea to implementation

It can take more than fifty years for an idea to jump from toyland to academia, and 10-15 years for an engineering insight to sink in among practitioners, according to a New Scientist story on robotics [1].

Until recently, robot designers crammed more and more servos and sensors into their robots’ legs in an attempt to direct joint movement. While this works after a fashion, it turns out that a simple tottering motion is the best way to get robots to walk. In 1938 American inventor John Wilson patented a waddling toy, the Wilson Walkie, that could walk itself down a gentle slope.

In the early 1980s Thomas McMahon at Harvard started working on this insight in his lab. Tad McGeer took up this work, publishing a series of papers in the 1990s showing that a passive machine, with or without knees, could walk stably downhill with a human-like gait. It has taken 10 to 15 years for the work to sink in. In the last year, three separate teams unveiled two-legged robots that exploit the Wilson Walkie’s principle of motion.

Max Planck observed – in a remark Thomas Kuhn made much of – that "a new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it." The cultural friction on the adoption of new ideas puts an upper limit on the absolute innovation rate as long as humans are in the loop, and would dampen the approach to the innovation singularity.

-----------

[1] Robot special: Walk this way, New Scientist, 04 February 2006 (subscription required for full access)

Friday, March 03, 2006

Singing for their supper


I recently saw Gregory Colbert’s show “ashes and snow” in Santa Monica. The art cognoscenti look down on Colbert; his work is too accessible and too popular.

This condescension made me think about the very different attitudes artists and scientists have towards popularizing their work. Scientists make a point of trying to be understood. Many of the most prominent (Hawking, Greene, Pinker, Gould) write books accessible to the lay public. Of course, scientists need more money – telescopes and biotech machinery don’t come cheap. But I sense that scientists want to share their passion by making it accessible, whereas artists don’t.

Visual artists – the fine arts, that is, not movies – don’t deign to explain their work. A charitable explanation is that they think their work appeals so directly to the emotions that it doesn’t need to be explained. Perhaps they don’t feel that writing about art helps. Apparently a famous dancer (choreographer? couldn’t find the quote online) once said this after being asked about the meaning of a piece: “If I could explain it, I wouldn’t have to dance it.”

For my money, they just can’t be bothered to talk to the unwashed masses. Fine Art is a much more insular creative occupation than science. One only has to satisfy a minuscule audience of curators and collectors to succeed commercially, and no-one but oneself if one is a purist. Scientists have to pass peer review and extract grants from bureaucracies.

Artists always bemoan the lack of government support for the arts. The National Endowment for the Arts budget is about $120 million/year. The National Science Foundation budget is more than $5 billion. Purely on the basis of these numbers, one might say that the sciences are deemed to be fifty times more socially useful than the arts. (Curiously, elephants weigh about fifty times as much as people.) If artists did a better job at popularizing their work, perhaps the gap would not be quite so big.