Tuesday, January 30, 2007

Problems Revisited

In Kinds of Hard (ctd.) I speculated that one could tease apart problems by looking at their analytical, social and cognitive content.

Analytical problems can be clearly formulated, if not necessarily easily solved. Scientific problems like “what is the age of the universe” and “how does evolution work” are prototypical. The vagaries of human nature are not explicitly involved, though since science is done by people, a human perspective is unavoidable.

Social problems are characterized by conflicts between people. They occur when there are many stakeholders, and arguments ensue not only over what’s an acceptable solution, but also over what counts as “the problem.” An example: what should America do regarding immigration?

In cognitive problems, individual human frailty is the determining factor [1]. There are two aspects: capacity and bias.

The scale of some problems is beyond the brain’s processing capacity. Humans have a limited working memory and a limited capacity to understand interactions between many variables. For example, we can handle at most four-way interactions between independent variables, e.g. discerning patterns in graphs of consumer preferences for cakes that are {fresh vs. frozen}, {chocolate vs. carrot}, {iced vs. plain} and {rich vs. low fat} [2]. This capacity limit is not a memory limit - all relevant information remains in view - but a limit on how many relationships we can discern simultaneously. As another example, consider 3D block rotation, a common feature of IQ tests: the subject has to decide, by looking at 2D projections, whether one shape is the same as another, though rotated. I haven’t found studies on this yet, but I expect that humans would find 4D block rotation impossible, that is, deciding from 3D projections whether two shapes are rotations of the same 4D shape.
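To make the scale concrete, here is a minimal Python sketch (the factor names follow the cake example above; the code itself is my illustration, not from the Halford study) that enumerates the conditions a four-way design generates:

```python
from itertools import product

# The four binary factors from the cake example.
factors = {
    "storage": ["fresh", "frozen"],
    "flavour": ["chocolate", "carrot"],
    "topping": ["iced", "plain"],
    "fat":     ["rich", "low fat"],
}

# A four-way interaction means the effect of any one factor can depend
# jointly on the other three: 2**4 = 16 distinct conditions to compare.
conditions = list(product(*factors.values()))
print(len(conditions))  # 16
for c in conditions[:3]:
    print(c)
```

Sixteen conditions is trivial for a computer, but, per Halford et al., already at the edge of what a person can relate in a single graph.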

The second aspect is cognitive bias. Our in-born heuristics often yield results that differ from analytical solutions, or that differ depending on how choices are presented. Loss aversion is evident in this example: imagine that your city is preparing for an outbreak of a disease which is expected to kill 600 people. Say the choice is between two vaccination schedules: Program A, under which 400 people will die, and Program B, under which no one dies with probability 1/3 and all 600 die with probability 2/3. Most people choose Program B, though the two situations are identical in quantitative terms (0 x 1/3 + 600 x 2/3 = 400); a certain loss of life is more loathsome than a probable one. Caveat: some researchers argue that the choice of “correct” solution, and the way in which the question is framed, make a big difference in outcomes; they argue that many alleged biases are largely unsupported [3].
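The arithmetic can be spelled out; a two-line check (my restatement of the numbers above) confirms that the programs are identical in expectation:

```python
deaths_A = 400                       # Program A: 400 die for certain
deaths_B = (1/3) * 0 + (2/3) * 600   # Program B: expected deaths
print(deaths_A, deaths_B)            # 400 400.0 -- equivalent in expectation
```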

Problems usually show more than one aspect. It gets interesting when there are overlaps between the facets:

Analytical x Cognitive

The Monty Hall problem asks whether, given a certain set-up, contestants should stick with a prior choice or change their minds. Probability theory (and simulation) indicates that they should switch: doing so wins two-thirds of the time. For most people, that feels like the wrong answer – even for some of those who’ve worked through the math. This is an analytical problem, since it is well-formulated and can be solved using probability theory. However, given that so many people find the correct solution counter-intuitive, it’s also a cognitive problem.
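Since simulation is mentioned, here is a minimal Python sketch of the game (my own illustration, not part of the original argument); switching wins roughly two-thirds of the time:

```python
import random

def monty_hall(trials=100_000):
    """Simulate the Monty Hall game and compare stick vs. switch."""
    stick_wins = switch_wins = 0
    for _ in range(trials):
        doors = [0, 1, 2]
        car = random.choice(doors)
        pick = random.choice(doors)
        # The host opens a door that hides a goat and isn't the pick.
        host = random.choice([d for d in doors if d != pick and d != car])
        # Switching means taking the one remaining closed door.
        switched = next(d for d in doors if d != pick and d != host)
        stick_wins += (pick == car)
        switch_wins += (switched == car)
    print(f"stick:  {stick_wins / trials:.3f}")   # ~1/3
    print(f"switch: {switch_wins / trials:.3f}")  # ~2/3

monty_hall()
```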

Analytical x Social

Scientific paradigm shifts are good examples of the interplay between analytical and social problems. On the one hand, science engages with problems that are well-defined, with recognizable solutions; in other words, with analytical problems. On the other, the salient questions at any given moment are influenced by intellectual fashion and history, and the data that are used to test hypotheses are shaped by assumptions: “It is the theory that decides what we can observe” [4]. The seventeenth-century shift away from Aristotelian physics was an argument over whether natural phenomena should be explained in terms of mechanical interactions, or teleology. This social tussle framed the analytical questions.

Social x Cognitive

Decisions about large community investments – energy policy, for example – are politically fraught processes that are inherently social. There are conflicts over what needs to be done, and how to do it. Cognitive factors can play a role when risk assessment is involved. Lay people inexperienced with probabilities often come to conclusions about risk that differ from those of experts. For example, they overestimate the frequency of low-probability but dramatic hazards like nuclear power plant accidents, and underestimate high-probability hazards that are less memorable, like everyday causes of death. In this case, discrepancies between analytical and subjective risk estimates can influence the social dynamics of a debate.

Analytical x Cognitive x Social

My research is headed towards the challenge of software development, where I suspect all three problem traits intersect and amplify each other. The development of large code bases clearly poses severe social problems, since there are many stakeholders with conflicting needs and views. Specifying a new product is an iterative and indeterminate process, and what counts as a solution – the product to be shipped – is the subject of intense conflict as feature triage intensifies towards the end. The analytical problems of choosing the best algorithm and producing efficient code are easily hidden by the foam of debate, but are nonetheless crucial to success.

Cognitive problems are perhaps the most obscure of all, and appear in both aspects mentioned above: capacity and bias. Biases abound: teams become possessive about their features (endowment effect), executives see only information that confirms their preconceptions (confirmation bias), and managers rely too heavily on one piece of information when making decisions (anchoring). At least some of these biases, like anchoring, are related to the overwhelming scale of the problem. Large code bases are beyond the grasp of any human, and so there is a limited capacity to reason about them – say, to trade off one design choice against another, or to imagine the consequences of a feature change.

Computer science studies the analytical problems, and theorists of industrial organization focus on social problems. The cognitive problems of producing software are less well understood – stay tuned.

--------------------------------

[1] The suitability of the word “cognitive” is debatable. Since all problems involve human thought, they’re all cognitive in some sense. However, the complications of cognitive capacity and bias strike me as different in kind from social and analytical constraints, so I’ll use the term “cognitive problems” until I find a better one.

[2] Halford, G. S., Baker, R., McCredden, J. E., & Bain, J. D. (2005). How many variables can humans process? Psychological Science, 16(1), 70-76.

[3] Todorov, A. (1997). Another look at reasoning experiments: Rationality, normative models and conversational factors. Journal for the Theory of Social Behaviour, 27(4), 387–417. See also “Judgment Under Uncertainty,” UCSB Center for Evolutionary Psychology (Cosmides & Tooby?).

[4] Albert Einstein, quoted in J. Bernstein, “The Secret of the Old Ones, II,” New Yorker, March 17, 1973; cited in http://chem.tufts.edu/AnswersInScience/What-Is-Science.htm
