Tuesday, February 07, 2006

Hard Intangibles

Imagine a square drawn on a plane surface; now imagine a cube, which is a square in three dimensions. How many edges does the square have? The cube? Now imagine a hypercube, a square in four dimensions – how many edges does it have?

Answering the first two questions is easy, since we can visualize squares and cubes; answering the third is harder, since we can’t imagine four dimensions. Of course, calculating the number of edges for an n-dimensional hypercube is trivial if you know a little maths (it’s n·2^(n-1)).
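For the curious, the formula is easy to check by brute force. A sketch in Python: treat the vertices of the n-cube as n-bit numbers, and count the pairs that differ in exactly one bit (i.e. one coordinate), which is what it means for two vertices to share an edge.

```python
from itertools import combinations

def hypercube_edges(n):
    """Closed-form count of edges in an n-dimensional hypercube."""
    return n * 2 ** (n - 1)

def count_edges(n):
    """Brute-force check: vertices are n-bit numbers; an edge joins
    two vertices that differ in exactly one bit (one coordinate)."""
    vertices = range(2 ** n)
    return sum(1 for a, b in combinations(vertices, 2)
               if bin(a ^ b).count("1") == 1)

# The formula and the brute-force count agree for small n.
for n in range(1, 6):
    assert hypercube_edges(n) == count_edges(n)

print(hypercube_edges(2), hypercube_edges(3), hypercube_edges(4))  # 4 12 32
```

So the square has 4 edges, the cube 12, and the four-dimensional hypercube 32 — trivial for the formula, even though no one can picture the last one.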

I divide hard problems into two categories: those that are hard for human intuition to handle (called human-hard problems) and those that are hard for reasons intrinsic to the problem (called plain ol’ hard problems). Figuring the number of edges for an n-dimensional hypercube is human-hard for n bigger than 3, but isn’t plain ol’ hard.

Human-hard Problems

I define human-hard problems as ones that we find hard to solve by intuition (which is partly innate and partly learned).

The entire field of behavioral economics is devoted to ways in which people are not “rational agents” in economic terms. We find it hard to arrive at the solutions that seem obvious from the technical perspective of classical economics. Here’s an example from Mullainathan & Thaler’s entry on Behavioral Economics for the International Encyclopedia of the Social and Behavioral Sciences:

An example involving loss aversion and mental accounting is Camerer et al’s (1997) study of New York City taxi cab drivers. These cab drivers pay a fixed fee to rent their cabs for twelve hours and then keep all their revenues. They must decide how long to drive each day. A maximizing strategy is to work longer hours on good days (days with high earnings per hour such as rainy days or days with a big convention in town) and to quit early on bad days. However, suppose cabbies set a target earnings level for each day, and treat shortfalls relative to that target as a loss. Then, they will end up quitting early on good days and working longer on bad days, precisely the opposite of the rational strategy. This is exactly what Camerer et al find in their empirical work.

This mismatch between the behavior of humans and the mythical Homo economicus wouldn’t matter if economics were only a model. However, society has used the models of classical economics to create artifacts like financial markets and lotteries where the mathematics, not human experience, is the defining principle. Human biases like anchoring, risk aversion, confirmation bias, the false consensus effect, and the illusion of control have real-world consequences in such worlds. Acting rationally in places premised on classical economics is human-hard.

To take another example of human-hard problems, Kurzweil has argued eloquently (and, in this case, persuasively) that we are unable to grasp the implications of exponential growth because our intuitions lead us to linear extrapolations of technological progress.
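A toy illustration of the point, a sketch assuming a quantity that doubles each period (the numbers are illustrative only): if you observe the first two periods and extrapolate a straight line — which is roughly what our intuitions do — the forecast falls behind catastrophically.

```python
def actual(t):
    """A quantity that doubles every period (exponential growth)."""
    return 2 ** t

# Linear extrapolation: fit a straight line through the first two
# observations and project it forward.
slope = actual(1) - actual(0)  # = 1

def linear_forecast(t):
    return actual(0) + slope * t

for t in (5, 10, 20):
    print(f"t={t}: linear forecast {linear_forecast(t)}, actual {actual(t)}")
```

By period 20 the linear forecast is 21 against an actual value of 1,048,576 — off by five orders of magnitude, which is the intuition gap Kurzweil is pointing at.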

Humans also struggle to deal with very small and very large numbers, from grasping the relative sizes of atoms and nuclei, to reasoning about the budget deficit. Dealing with such numbers is not a “plain ol’ hard” problem; we have mathematical notation and tools for dealing with them trivially. However, the problem arises when we try to reason about their meaning using metaphors and concepts operating at human scale.

More difficulties arise when small and large numbers are multiplied together, and when non-numerical considerations matter, eg in risk-magnitude assessments. There are notable divergences between expert and public assessments of risk because the public expands the concept of risk to include various non-damage attributes. The very same risk — as experts see these things — would be understood quite differently by the lay public depending on how it weights considerations like familiar/unfamiliar, chronic/acute, or immediate/delayed, which are not usually included in quantitative risk assessments.

I suspect that thinking about software and other digital goods is a human-hard problem. It’s clear that people behave as if email communications and blog posts are as private and ephemeral as those on paper (see my Dad, how dare you read my xanga?). They treat knowledge as if it were a physical thing, leading to bad business judgments (see my Extending software patents: Those who live by the sword, die by the sword). I’ll have much more to say on this topic; stay tuned.

Plain ol’ hard Problems

Plain ol’ hard problems are ones that are difficult to solve whether you’re human or not.

One example is solving non-linear equations. When outcomes are exquisitely sensitive to the initial conditions, predictions will be lousy (cf chaos theory). That’s why weather forecasts don’t go much further out than three days. Throwing more computing power at the problem doesn’t extend the time scale, though it can give you finer physical resolution (eg hyper-local weather forecasts for a sports venue vs. a whole city). The halting problem in computer science is another example: there is no general method for deciding whether an arbitrary computer program will terminate in a finite time.

Gray zones

For now I’m going to skate blithely over the complication that the only problems we can think about are human-conceived problems. The maths we might use to show that a human-hard problem is technically trivial is a human artifact itself. Further, a plain ol’ hard problem may not be intrinsically hard; it may only be hard because the maths we’ve been able to come up with can’t handle it. Factoring large numbers (the products of two large primes) is very hard to do as far as we know, and is the basis of many cryptographic systems. However, some mathematical genius may suddenly stumble on a way to do it easily; we just don’t know.
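A sketch of why factoring is believed hard: the obvious method, trial division, has to search for a divisor up to the square root of the number, and that search grows exponentially in the number’s bit length (the small numbers below are purely illustrative):

```python
def factor_semiprime(n):
    """Naive trial division: finds the smallest factor of n.
    The loop runs up to sqrt(n), which is exponential in the bit
    length of n -- hopeless for the 1000+ bit numbers used in
    cryptography, and no known classical algorithm does much better."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1  # n itself is prime

print(factor_semiprime(101 * 103))  # (101, 103)
```

For a toy semiprime like 101 × 103 the search is instant; double the bit length a few times and it becomes infeasible — yet no one has proved it must be, which is exactly the “we just don’t know” above.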

Human-hard problems are difficult to pin down because they’re dependent on formulation. People can analyze complex scenarios very easily when they’re described in terms of people and social power, but they do poorly when they’re abstracted. This is as one would expect – we probably have built-in mechanisms for “calculating” social relationships.

I wouldn’t put dry stone walling or cooking in either of the above two categories. These activities are hard, but learnable. The key distinction for my purposes is that they’re both physical activities; within the limit of variation of hand-eye coordination, most people can learn to do them. By contrast, activities that are mostly intellectual like math and software development show striking divergences between the few really good developers, and the rest. In civil engineering, according to my in-house coastal engineer, good people are better than average people, but by a factor of 2, not a factor of 100s or 1,000s as in software.


Human beings mostly think about abstract notions in concrete terms, using ideas and modes of reasoning grounded in their senses, muscles, and experience. Our thinking is embodied, that is, shaped by the structure of our brains, our bodies, and everyday interactions in the world. We have developed tools like mathematics for thinking about abstract things like hypercubes, but our intuitions about them tend to be concrete. As the knowledge economy fills our world with more abstractions, I would expect the number of human-hard problems to multiply.