Tuesday, January 30, 2007

Problems Revisited

In Kinds of Hard (ctd.) I speculated that one could tease apart problems by looking at their analytical, social and cognitive content.

Analytical problems can be clearly formulated, if not necessarily easily solved. Scientific problems like “what is the age of the universe” and “how does evolution work” are prototypical. The vagaries of human nature are not explicitly involved, though since science is a human activity, a human perspective is unavoidable.

Social problems are characterized by conflicts between people. They occur when there are many stakeholders, and arguments ensue not only over what’s an acceptable solution, but also over what counts as “the problem.” An example: what should America do regarding immigration?

In cognitive problems, individual human frailty is the determining factor [1]. There are two aspects: capacity and bias.

The scale of some problems is beyond the brain’s processing capacity. Humans have a limited working memory, and a limited capacity to understand interactions between many variables. For example, we can handle at most four-way interactions between independent variables, e.g. discerning patterns in graphs of consumer preferences for cakes that are {fresh vs. frozen}, {chocolate vs. carrot}, {iced vs. plain} and {rich vs. low fat} [2]. This capacity limit is not a memory limit - all relevant information can be seen continuously - but a limit on how many relationships we can discern simultaneously. As another example, consider 3D block rotation, which is a common feature of IQ tests: the subject has to decide by looking at 2D projections whether one shape is the same as another, though rotated. I haven’t found studies on this yet, but I expect that humans would find it impossible to do 4D block rotation, that is, to decide from 3D projections whether two 4D shapes are the same, though rotated.
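
To get a feel for the size of that four-way task, here’s a quick Python sketch; the factor names come from the cake example, the rest is my own illustration:

    # Four two-level factors yield 2^4 = 16 distinct cakes. Discerning the full
    # four-way interaction means seeing how a three-way pattern of preferences
    # shifts across the levels of the fourth factor - all at once.
    from itertools import product

    factors = {
        "storage":  ("fresh", "frozen"),
        "flavour":  ("chocolate", "carrot"),
        "icing":    ("iced", "plain"),
        "richness": ("rich", "low fat"),
    }

    conditions = list(product(*factors.values()))
    print(len(conditions))  # 16
    print(conditions[0])    # ('fresh', 'chocolate', 'iced', 'rich')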

The second aspect is cognitive bias. Our in-born heuristics often yield results that are different from analytical solutions, or that are different depending on how choices are presented. Our loss aversion is evident in this example: Imagine that your city is preparing for an outbreak of a disease which is expected to kill 600 people. Let’s say the choice is between two vaccination schedules, Program A, which will allow 400 people to die, and Program B, under which no one dies with probability 1/3 and all 600 die with probability 2/3. Most people will choose option B, though the two situations are identical in quantitative terms (0 x 1/3 + 600 x 2/3 = 400); the certain loss of life is more loathsome than probable losses. Caveat: Some researchers argue that the choice of “correct” solution, and the way in which the question is framed makes a big difference in outcomes; they argue that many alleged biases are largely unsupported [3].
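
For the record, a quick Python check of that arithmetic (the numbers are straight from the example above; exact fractions avoid rounding noise):

    from fractions import Fraction

    # Program A: 400 people die for certain.
    expected_a = 400
    # Program B: nobody dies with probability 1/3; all 600 die with probability 2/3.
    expected_b = 0 * Fraction(1, 3) + 600 * Fraction(2, 3)
    print(expected_a, expected_b)  # 400 400 - identical in expectation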

Problems usually show more than one aspect. It gets interesting when there are overlaps between the facets:

Analytical x Cognitive

The Monty Hall problem asks whether, given a certain set-up, contestants should stick with a prior choice, or change their minds. Probability theory (and simulation) indicates that they should change their choice. For most people, that feels like the wrong answer – even for some of those who’ve worked through the math. This is an analytical problem, since it is well-formulated and can be solved using probability theory. However, given that so many people find the correct solution counter-intuitive, it’s also a cognitive problem.
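
For the skeptical, here’s a minimal Monte Carlo sketch of the game in Python (the trial count is arbitrary; the rules follow the standard statement of the problem):

    import random

    def play(switch, trials=100000):
        wins = 0
        for _ in range(trials):
            car = random.randrange(3)    # door hiding the car
            pick = random.randrange(3)   # contestant's first choice
            # The host opens a door that is neither the pick nor the car.
            # (When the pick IS the car, which goat door he opens doesn't
            # affect the stick/switch win rates.)
            opened = next(d for d in range(3) if d != pick and d != car)
            if switch:
                pick = next(d for d in range(3) if d != pick and d != opened)
            wins += (pick == car)
        return wins / trials

    print(play(switch=False))  # ~0.333: sticking wins a third of the time
    print(play(switch=True))   # ~0.667: switching doubles the odds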

Analytical x Social

Scientific paradigm shifts are good examples of the interplay between analytical and social problems. On the one hand, science engages with problems that are well-defined, with recognizable solutions; in other words, with analytical problems. On the other, the salient questions at any given moment are influenced by intellectual fashion and history, and the data that are used to test hypotheses are shaped by assumptions: “It is the theory that decides what we can observe” [4]. The seventeenth-century shift away from Aristotelian physics was an argument over whether natural phenomena should be explained in terms of mechanical interactions, or teleology. This social tussle framed the analytical questions.

Social x Cognitive

Decisions about large community investments – energy policy, for example – are politically fraught processes that are inherently social. There are conflicts over what needs to be done, and how to do it. Cognitive factors can play a role when risk assessment is involved. Lay people inexperienced with probabilities often reach different conclusions about risks than experts do. For example, they overestimate the frequency of low-probability but dramatic hazards like nuclear power plant accidents, and underestimate high-probability hazards that are less memorable, like everyday causes of death. In this case, discrepancies between analytical and subjective risk estimates can influence the social dynamics of a debate.

Analytical x Cognitive x Social

My research is headed towards the challenge of software development, where I suspect all three problem traits intersect and amplify each other. The development of large code bases clearly poses severe social problems, since there are many stakeholders with conflicting needs and views. Specifying a new product is an iterative and indeterminate process, and what counts as a solution – the product to be shipped – is the subject of intense conflict as feature triage intensifies towards the end. The analytical problems of choosing the best algorithm and producing efficient code are easily hidden by the foam of debate, but are nonetheless crucial to success.

Cognitive problems are perhaps the most obscure of all, and appear in both aspects mentioned above. Biases abound: teams become possessive about their features (endowment effect), executives only see information that confirms their preconceptions (confirmation bias), and managers rely over-much on one piece of information when making decisions (anchoring). At least some of the biases, like anchoring, are related to the overwhelming scale of the problem. Large code bases are beyond the grasp of any human, and therefore there is a limited capacity to reason about them – say, to trade off one design choice against another, or to imagine the consequences of a feature change.

Computer science studies the analytical problems, and theorists of industrial organization focus on social problems. The cognitive problems of producing software are less well understood – stay tuned.

--------------------------------

[1] The suitability of the word “cognitive” is debatable. Since all problems involve human thought, they’re all cognitive in some sense. However, the complications of cognitive capacity and bias strike me as different in kind from social and analytical constraints, so I’ll use the term “cognitive problems” until I find a better one.

[2] Halford, G. S., Baker, R., McCredden, J. E., & Bain, J. D. (2005). How many variables can humans process? Psychological Science, 16, 70-76.

[3] See Alexander Todorov (1997), “Another Look at Reasoning Experiments: Rationality, Normative Models and Conversational Factors,” Journal for the Theory of Social Behaviour 27 (4), 387–417. See also Judgment Under Uncertainty, UCSB Center for Evolutionary Psychology (Cosmides & Tooby?)

[4] Albert Einstein, from J. Bernstein, "The Secret of the Old Ones, II." New Yorker, March 17, 1973, cited in http://chem.tufts.edu/AnswersInScience/What-Is-Science.htm

Monday, January 29, 2007

The data dimension

The network-centric way of looking at the web has been bugging me for a while, but I’ve been struggling to come up with an alternative. I suspect the answer is a data-centric view. The network is already pretty pervasive, so “ubiquity” isn’t news. Moore’s Law is driving sophisticated packet processing into the core of the network, so “smarts” isn’t quite it, either. What is most important about the web’s current development is that reach and processing are being leveraged by complex new content: the data dimension.

I’ve argued that one can see the IT developments of the last 25 years as three waves of commoditization:

80’s: Commodity Computing
90’s: Commodity Communications
00’s: Commodity Data

The current view of networking is packet- and processor-centric. A packet-centric view buys into the end-to-end worldview, where the network is dumb, and all the interesting stuff happens outside it, on the edge. This is of course not the case; cf. in-transit virus scanning, large data caches (Akamai), in-network codec transcoding, and deep packet inspection. The processor-centric view adds this edge and core computing to the picture. It assumes that packets flow largely unchanged across lines of communication between nodes. What matters are the lines and the nodes, and not what flows over and between them.

This approach under-values the importance of massive, distributed, evolving data sets. I’m not saying that processors and packets are unimportant. The three decades of commoditization I list are cumulative: one needs commodity computing and commodity communications to get the real value of commodity data. But the novelty is in the data, and that’s where the hardest new questions of policy and governance will arise.

I haven’t thought through all the implications of the data-centric view. But to get past the old mental image of glowing tubes connecting pulsing processors [1], imagine yourself as a data file:

You start life, let’s say, on Alice’s computer. You’re small and lonely, and when you look out at the world, all you see is Alice’s computer.

Suddenly you feel a little bigger, and you can see another computer: Bob’s. Alice has sent a copy of you to Bob.

In fact, if you were paying attention, there was a blink when the world seemed to flash by: it temporarily got bigger – the flicker of many router processors as you passed over the network from Alice to Bob – and you were briefly bigger, too, as copies of you were cached along the way.

After a while you find yourself in a much bigger world; Bob copied you up to YouTube, and computers flicker in and out of existence around you as YouTube users download and watch you. You don’t feel much bigger, because nobody’s keeping a copy of you. However, occasionally you grow and start morphing, as someone takes your original self and mashes it into a new version.

Then your world changes again. You can now see many PCs; Charlie got hold of you and posted a BitTorrent tracker pointing at you.

More parts of you start morphing and growing as people incorporate you into other data files. You still feel as if these new existences are part of you, but they’ve also started to change on their own. Each of them opens up windows onto new host machines.

If you can’t resist thinking in terms of nodes and links (and who can, in this age of network theory), imagine that the nodes are data sets, and the links are semantic and genealogical connections. The meaning is in these nodes and links – not those boring old fiber links and CPUs.

------------------

[1] Since I’m pre-occupied with mental models, I can’t resist pointing out how pervasive the “communications is stuff flowing in pipes” metaphor is. It’s not just poor Sen. Stevens; the term “data flow” occurs 1.2 million times on the web, according to Google. I also don’t think it’s a coincidence that a dominant, though increasingly discredited, folk theory of language meaning is the conduit metaphor.

Sunday, January 21, 2007

Too complex for humans

Bob Stein has described some of the digital media he’s working on as “too complex for humans” (personal communication, 19 Jan 07). This wonderful expression captures the essence of what I’m struggling to understand in the hard intangibles project.

Bob speculated that two of the pioneering experiments being done by his Institute for the Future of the Book may fall in this category: GAM3R 7H30RY, a networked book by McKenzie Wark, and Operation Iraqi Quagmire, an annotated version of the Iraq Study Group Report. These “high dimensional” artifacts may become so complex that they are beyond our ability to grasp.

One can certainly think of each content type in these linked conversations as a dimension (e.g. base text, comments, commenters, time), and we clearly can’t think intuitively about spaces with more than three spatial dimensions. I suspect that there’s even more going on than high dimensionality, though: hyper-linking creates texts with topologies that are qualitatively different from most of the books we’ve known to date.

Topology refers to the interconnectedness of structures, regardless of their exact shape and size. It’s the difference between a cup and a glass: the one has a handle you can hook your finger through, and the other doesn’t. In topological terms, any glass can be transformed into any other by squishing and stretching, but it can’t be turned into a cup without tearing. Topologically, a glass is equivalent to a plate, or a knife, or a paper clip. As for cups, there’s the old joke, “What is a topologist? Someone who cannot distinguish between a doughnut and a coffee cup.” (Renteln and Dundes 2005, cited in MathWorld.)

A traditional linear text is like a glass or a plate: one can condense it without losing its narrative essence. While some cases like Reader’s Digest or CliffsNotes are straightforward, others are anything but trivial. For example, Tribonian’s 6th Century achievement of boiling down a vast and contradictory corpus of centuries of Roman law into a usable legal code stood for centuries after his death, and was a pillar of the Byzantine Empire. (For a wonderful survey of the socio-political context, cf. Lars Brownworth’s first lecture (MP3) on the Emperor Justinian in his series 12 Byzantine Rulers.)

Sometimes the précis of a large body of law is presented as a commentary, as in the acclaimed work of the 11th Century Jewish scholar Rashi. While much of his work was interpretation, and could be seen as hyperlinked commentary, his explication of the Talmud and Tanach made them understandable to lay people, and is used to this day.

Tribonian’s work condensed a large text into a shorter one. Rashi provided links from canonical texts to his explanations, which could be used in place of the original. Topologically both scholars were doing similar things. Condensing a text is just squeezing down its substance. If one imagines links from a text to a commentary as a nodule growing from a plane, then replacing source text with a comment is squishing the nodule into the plane without changing the topology.

Hyper-linked conversations, for example where a text comments on an earlier comment to the text itself, are different in kind. A series of links that eventually loops back to its starting point is like the handle on a cup; it cannot be removed without tearing the structure of the resulting text. A loop-back topology in texts is closely tied to change over time. An evolving text that takes references to earlier versions of itself into account leads to loops. A notable characteristic of digital texts is that they can change rapidly. For example, a Wikipedia entry may change minute by minute, but one has to consult the “history” tab to see the changes. Bob Stein has observed that many people are disturbed by the lack of a fixed reference in digital texts. He’s wondering whether we may just have to give up a notion of permanence in these media (personal communication, 18 Jan 07).
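
One can make the loop-back idea concrete with a toy sketch in Python (the graph representation and the node names are mine, purely for illustration). Treat a text-and-commentary web as a directed graph of links; a “handle” is then a directed cycle:

    # Each key is a text; its value is the list of texts it links to.
    def has_cycle(links):
        visiting, done = set(), set()

        def dfs(node):
            if node in visiting:   # the path has looped back: a handle
                return True
            if node in done:
                return False
            visiting.add(node)
            if any(dfs(n) for n in links.get(node, ())):
                return True
            visiting.discard(node)
            done.add(node)
            return False

        return any(dfs(n) for n in links)

    # Rashi-style commentary: links run one way, so the topology stays flat.
    print(has_cycle({"talmud": ["commentary"], "commentary": []}))  # False

    # A text that responds to a comment on itself closes a loop.
    print(has_cycle({"text": ["comment"], "comment": ["text"]}))    # True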


While large volatile loop-back texts may be too complex for humans, one shouldn’t ignore the consequences of scale, even in topologically simple texts where each comment builds sequentially on earlier ones. James Elkins discusses “monstrous artworks”: works that have attracted so much attention that they have effectively outgrown the discipline of art history (Why Are Our Pictures Puzzles?, 1999). Their literature can no longer be mastered by a single scholar, or judiciously discussed in a single volume, or taught in a year-long seminar. Examples include the Brancacci Chapel, the Mona Lisa, and Velázquez’s Las Meninas. Bob Stein has wondered what the 50-page Communist Manifesto would look like today if it were published as a website with comments, and how one might represent the gigantic hyper-text of the Manifesto plus all the commentaries that link to it, and among themselves.

The loop-back topology I’m describing is not new. It’s inherent in any conversation, and traditional linear texts contain complex forward and backward references that can make them difficult to grasp. What may be new, though, is the scale of the artifact (there is no limit on the number of pages in a digital text), the explicitness of the links (which make a false promise of intelligibility), and the instability of reference when a digital text is constantly and invisibly updated.

Sunday, January 14, 2007

Four Decades of Free

Peter Rinearson has pointed out two major milestones in computing: the mid-1980s when computing became free, and the mid-1990s when communications became free. (Not exactly free, of course, but so cheap that they were no longer limited to the elite.) He set me thinking about what has happened since 1995, and what may be coming down the pike.

Arguably, the mid-2000’s were marked by data “becoming free.” Google’s web search is predicated on massive data stores, which would not be possible without very cheap data storage. The Bloggers, MySpaces and, especially, YouTubes of the web depend on being able to host data without charging users.

1980’s: Free Computing
1990’s: Free Communications
2000’s: Free Data

So far I’ve taken “free” to be “free as in beer, not free as in speech”. But both of the Rinearson milestones also had non-commercially liberating consequences: the Open Source movement, for example, was fuelled by cheap, fast global communications between people using powerful home computers.

The argument over digital copyright follows from Free Data: filling up the gigabytes of storage on an iPod is much easier when you don’t have to pay for all the songs. Similarly, the DIY culture of making your own video is enabled by YouTube offering free data hosting.

Prognosticating, I expect that the 2010s will be marked by Free Stuff: the ability to have any material thing you want produced instantly and at negligible cost. A teen will design their cell phone the way they create their MySpace page. (The phone might even be sponsored by MySpace...) Nanotechnology will at last emerge from the hype cycle. We’ll manipulate both inorganic and genetic material at will: mecha-nano and bio-nano.

2010’s: Free Stuff

The dark cloud looming behind Free Everything is its environmental cost. Materialism is part of human nature, and everybody on the planet naturally wants the surfeit of things the Developed World has pioneered. According to New Scientist, if the 5 billion-plus people in developing nations matched the consumption patterns of the 1.2 billion in the industrialized world, at least two more Earths would be needed to support everyone (Sidebar "Apocalypse Soon?" vol. 191, # 2571, 30 Sep 2006, p. 50).

The “dark under-belly of Moore’s Law” is that electrical consumption has increased rapidly, along with processing power. There is talk of skyrocketing energy bills in data centers. Free Stuff could make matters worse – but could also help if it’s designed from the start to be sustainable. Demanufacturing will be essential, as will the can-do attitude of groups like worldchanging.

Tuesday, January 09, 2007

Hard Scientific Problems

Much discussion of problems concerns what I defined as social and cognitive problems, e.g. wicked problems and bounded rationality. Writers like Nancy Roberts and Jerry Talley seem to dismiss analytical problems by categorizing them as “simple problems” or “basic problems” (see Kinds of Hard (ctd.)).

To show the depth and range of analytical problems, I mined New Scientist's special anniversary feature in which “Brilliant minds forecast the next 50 years.” Some contributors explicitly posed questions, though most talked about hoped-for breakthroughs. In cases where I was able to do so, I have recast breakthroughs as questions.

The resulting list is great for staring at, and mulling over. A few questions were posed by multiple people: the nature of consciousness, life beyond the earth, details of the big bang, and dark matter. Looks like there’s most consensus among astrophysicists ... Counting the different categories of question shows a very operational/instrumental approach to science: Existence: 7; How: 19; What: 20; Why: 2. The most common preoccupations, even for big brains thinking 50 years out, were What and How questions. Very few were interested in Why...

Are the laws of physics unique and was our big bang the only one? (Martin Rees)

Can we truly understand how we understand others, their intentions, desires and beliefs? (Michael Gazzaniga)

Do we mostly get along because we enjoy common reactions to similar challenges? (Michael Gazzaniga)

Does our species have a moral compass? (Michael Gazzaniga)

How are molecular aspects of memory influenced by society and culture? (Daniel Schacter)

How are neural nets in the brain stitched together to produce mental activities that are familiar in cognitive psychology? (Dan Dennett)

How can children reliably name the class of an object? (That is, the generic object recognition problem.) (Rodney Brooks)

How can one efficiently and conveniently store a young woman's ovarian tissue or eggs to be used years later? (Carl Djerassi)

How can one visualise the connections between human organisations and technological objects? (Bruno Latour)

How can our currently fragmented theory of the physical world be synthesized into a coherent way of thinking? (Carlo Rovelli)

How can so few genes (relatively speaking) create so much complexity in the human brain? (Antonio Damasio)

How can we disentangle the feedback loop between brain development and the ancient primate tendencies that shape our societies? (Frans de Waal)

How did cooperative behavior evolve? (Robert May)

How did elementary particles acquire their mass? (Lisa Randall)

How did human spoken language evolve? (Irene Pepperberg)

How did our memory systems evolve? (Daniel Schacter)

How do human institutions work, in particular, what are the impediments to collective, cooperative activity in which all individuals pay small costs to reap large group benefits? (Robert May)

How does the brain create consciousness? (Igor Aleksander, Terry Sejnowski, Susan Greenfield)

How does evolution work in fine detail? (Bernard Wood)

How does the environment of early childhood shape how people interact with the world in which they grow up, live and work? (Michael Marmot)

How does the mind translate environmental threats into the body's stress reactions? (Michael Marmot)

How have biological and physical processes interacted over billions of years to bring us to our own evolutionary moment? (Andrew Knoll)

How long can humans live? (Francis Collins)

Is the entire universe very big but everywhere the same, or is our patch part of a “multiverse” with a great many environments, each with its own laws, particles and constants? (Leonard Susskind)

Is there life beyond earth? (Colin Pillinger, Carolyn Porco, Freeman Dyson, Monica Grady, Piet Hut, Steve Squyres)

Is searching for solutions harder than checking that the solutions are correct? (That is, the P=NP problem.) (Timothy Gowers)

To what degree are genetic and morphological change correlated with speciation? (Niles Eldredge)

What are the causes of the major psychiatric disorders – schizophrenia, depression, bipolar disorder, and anxiety disorders – and how are they best treated? (Charles Nemeroff)

What are the equations describing the unified physical laws of the universe? (Max Tegmark)

What are the links between genes and morphology, between animals and behaviour, and between behaviour and life history strategies? (Alan Walker)

What are the molecular pathways that render cells from long-lived animals resistant to many forms of injury? (Richard Miller)

What are the several missing links on the march between the specifications of genes, and the neural structures and operations which support behavior and cognition? (Antonio Damasio)

What happened during the first second of the big bang, and before? (Sean Carroll, Kip Thorne)

What is a general theory of imagination, consciousness and self that will be powerful and illuminating, and applicable in principle to sentient species everywhere? (Oliver Sacks)

What is life? (Paul Nurse)

What is the connection between our understanding of the function of individual brain cells or pairs of cells and of the larger-scale cognitive functions processed by multiple brain areas? (Fred Gage)

What is the general theory of what, according to the laws of physics, can or cannot be built and with what resources? (That is, the quantum theory of construction.) (David Deutsch)

What is the geographic distribution of biodiversity at the species level? (Edward O. Wilson)

What is the mysterious dark matter that appears to make up about 25 per cent of the mass of the universe? (Lawrence Krauss, Arthur McDonald)

What is the neural basis of individual differences in memory? (Daniel Schacter)

What is the role and importance of extinctions in shaping subsequent evolution? (Niles Eldredge)

What is the role of particular cells, circuits and genes in memory formation and retrieval? (Daniel Schacter)

What is the role of isolation in evolution? (Niles Eldredge)

What is the structure of the tree of life? (Michael Benton)

What is the template that describes the distribution of the prime numbers? (Marcus du Sautoy)

What is the underlying nature of matter? (Lisa Randall)

What is the unified description of the (superficially) different forces of nature, and (superficially) different building blocks of matter? (Frank Wilczek)

Why is the weak interaction weak? (Frank Wilczek)

Why is nature such that we have a description that is so enormously successful, yet so counterintuitive? (That is, resolving the paradoxes of quantum mechanics.) (Anton Zeilinger)

Saturday, January 06, 2007

If you can't take the heat

It was as if I’d grown up in a hotel, always eating food set before me. One evening I dreamed up a meal, but there was no one to ask for it.

So I went in through the doors where the food came out, and found a shining kitchen with ranks of surfaces and impenetrable implements, and the pantries full of mysterious raw materials.

I didn’t know frying from baking, and couldn’t recognize a grill. I couldn’t connect an egg to an omelet, or a clove of garlic to its taste. Flour looked nothing like bread, and what did one do with an onion?

Suddenly the kitchen was crowded with frenzied cooks, no time to explain, and in any case, just learning one station took years, who was this naïf who wanted to walk in and cook a menu?

Then I realized that there wasn’t one kitchen, but many – if I’d known their names, I would’ve recognized Asian, French, Indian, Italian – each with its own staff, techniques, and secret ingredients. If I wanted to apprentice myself, I’d have to pick one; but which, if any, could make the food I wanted?

This is what it feels like right now, trying to figure out how to make progress on the Hard Intangibles project.

Friday, January 05, 2007

Disconnected understanding

An explanation for a very rare mental disorder has helped me think about how understanding works. A sufferer of Capgras syndrome believes that people they know very well aren’t who they appear to be; they’re perfect doubles who are impersonating them. One explanation is that there’s a disconnect between two necessary elements of the visual recognition system. One system does the pattern matching, and the other provides the emotional texture associated with what’s recognized. In a Capgras patient, the recognition works but the emotional texture is missing. The patient rationalizes the combined experience by confabulating that the person seen must be a physically identical imposter. They know something doesn’t feel right, and this is the best available explanation.

My experience doing quantum chromodynamics (QCD) was faintly like this. I could do the math, but it was mechanical; I always felt that the theory had a better idea of the answer than I did. On the other hand, I sensed that people who could really do physics had a visceral feel for the subject. Not only could they handle the sums, but they intuited what the answer should be.

This may be part of what skill means. One has to master both the mechanics of the knowledge, and the link between the knowledge and the intuitions that allow one to find short cuts and anticipate results. Mechanical knowledge links subject matter and theory; metaphor links theory and intuition.

In the Capgras case, the visual recognition system links a person’s appearance with their identity, and the emotional affect system links that identity with the meanings that are important to the perceiver.

To sum up the three preceding examples:

  • Skill in general: subject matter – theory // theory – intuition
  • Physics ‘n’ me: experiment – QCD // QCD – intuition
  • Capgras syndrome: face – spouse’s name // spouse’s name – feelings for spouse

Stretching the analogy even further, perhaps this is why metaphors are so important in science and engineering. A software engineer doesn’t need metaphors like Classes-as-Containers or Programs-as-Language to get work done – the unadorned mathematics would do just fine – but analogies provide an intuitive basis that underpins thinking and provides comforting context. When programmers try to write about software without using metaphors, they become uncomfortably aware of just how ingrained the metaphors are. The two-part model in this case:

mathematics – programming // programming – intuition

Monday, January 01, 2007

Kinds of Hard (ctd.)

Some follow-ups on the earlier post:

Nancy Roberts’ approach is essentially social, and based on a problem/solution dichotomy:

  • simple problems: consensus on problem definition and solution
  • complex problems: agreement on problem definition, but not solution
  • wicked problems: no agreement on either definition or solution

In this approach, the amount of conflict increases as one moves from simple to complex to wicked problems.

Jerry Talley defines eight problem types which he groups into three clusters (for more details, see intro, overview with issues, definitions with examples):

  • basic problems: characterized by confidence that a solution exists, but a lack of the time or key information needed to find it
  • mysterious problems: characterized by confusion, complexity, and hidden dynamics
  • dangerous problems: characterized by disagreement and conflict

Roberts and Talley both focus on organizational dynamics, and thus understate the difficulty of some scientific problems, which they would characterize as “simple” or “basic,” respectively.

Since we’re in a Rule of Three Mode, here’s my working model:

1. Analytical problems

These are problems which can be stated clearly, e.g. problems in science and mathematics. Answering them may be relatively easy, as in puzzles, or hard in various degrees, such as solving equations in multi-dimensional quantum field theories. Roberts’ “simple problems” and Talley’s “Puzzles” (in the “basic problem” cluster) fall into this category.

2. Social problems

Interaction between people is the complicating factor here. Arguably, all the problems considered by Roberts and Talley are social problems. “Wicked problems” fall in this class.

3. Cognitive problems

These problems are also hard because of human nature, but in this case it’s a problem in the individual. They include the phenomena studied under the heading of cognitive biases, behavioral finance, neuroeconomics, etc.

These three are attribute classes rather than categories; a problem won’t necessarily fit neatly into just one. Very hard problems will have aspects of all three. For example, conflicts in social problems are often exacerbated because each participant has their own confirmation bias. Scientific controversies often combine analytical questions with conflicts over what counts as a valid solution, a characteristic of social problems.