"in this world, there is one awful thing, and that is that everyone has their reasons" --- attrib. to Jean Renoir (details in the Quotes blog.)
Friday, December 29, 2006
Is beautiful software always best?
I had a fascinating conversation with Adam Sapek on Wednesday about software, aesthetics, and mental models. He posed this question: Do we use elegance (say) as design criterion because it necessarily leads to better software, or simply because it helps us to think about it better?
Aesthetic criteria seem to help humans make better software [1]. It is quite possible that a sense of what’s beautiful in software is common among all humans, just as there seems to be a neurological basis for visual aesthetics [2].
But what about software developed by "aliens," e.g. by computers themselves? Genetic algorithms, for example, might yield a better solution, but one that is "ugly" in human terms. Genetic algorithms sometimes come up with "weird" and "counter-intuitive" solutions that are better than ours [3], such as unusual, highly asymmetric orbit configurations for multi-satellite constellations that minimize losses in coverage; unusually shaped radio antennas; and a solution to a problem in cellular automata theory that not only outperformed any of the human-created solutions devised over the last two decades, but that was also qualitatively different.
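To give a feel for how such searches work, here is a toy genetic algorithm, a minimal sketch rather than any of the systems cited above; the objective function and all parameters are invented for illustration:

```python
import random

def fitness(bits):
    # Invented objective: reward 1s in even positions, penalize 1s in odd ones.
    return sum(b if i % 2 == 0 else -b for i, b in enumerate(bits))

def evolve(n_bits=20, pop_size=30, generations=50, mutation_rate=0.05, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        def pick():  # tournament selection, size 2
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        children = []
        while len(children) < pop_size - 1:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_bits)            # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [1 - b if rng.random() < mutation_rate else b
                     for b in child]                   # per-bit mutation
            children.append(child)
        pop = children + [best]                        # elitism: never lose the best
        best = max(pop, key=fitness)
    return best
```

Nothing in the loop "understands" the objective; selection, crossover, and mutation simply accumulate whatever works, which is why the results of real GA runs can look so alien to their human readers.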
As Steven Vogel argues in Cats’ paws and catapults, natural design and human engineers arrive at very different solutions to the same problems, while relying on similar engineering principles. To pick a few examples: Right angles are rare in nature, but common in human technologies; nature builds wet and flexible structures, we go for dry and stiff; nature’s hinges mainly bend, and ours mainly slide.
It may turn out that our instinctive aesthetics for software diverge from what actually suits the systems on which our software runs. For example, concurrent programming seems to be very hard for humans; our criteria for "good software" may not work very well for many-processor architectures. We may be able to build tools that come up with good solutions, but we won't have an intuitive grasp of why they're good. Computer science may then find itself in a crisis of intelligibility like the one physics encountered over the interpretation of quantum mechanics a century ago.
--------
[1] Don Knuth was an early proponent of aesthetics in software, as in his “literate programming” initiative. According to Gregory Bond in “Software as Art,” Communications of the ACM, August 2005/Vol. 48, No. 8, p. 118, Knuth identified the following as properties of software that inspire pleasure or pain in its readers: correctness, maintainability, readability, lucidity, grace of interaction with users, and efficiency. Charles Connell proposes that all beautiful software has the following properties: cooperation, appropriate form, system minimality, component singularity, functional locality, readability, and simplicity.
[2] Zeki and Kawabata (Journal of Neurophysiology 91: 1699-1705, 2004) found using functional MRI scans that the perception of different categories of paintings (landscapes, still lives, portraits) are associated with distinct and specialized visual areas of the brain, and that parts of the brain are activated differently when viewing pictures subjects self-identified as beautiful or ugly. For more papers on neuroesthetics, see http://www.neuroesthetics.org/research/index.html.
[3] These examples were taken from Adam Marczyk's Genetic Algorithms and Evolutionary Computation. It provides a useful inventory of GA applications, seemingly unaffected by the paper's polemical goal of rebutting creationist claims about genetic algorithms and biological evolution.
RSS feed for my factoids site
If you wish to subscribe to Factoids Factoids, here's the link: http://factoidsfactoids.spaces.live.com/feed.rss
Some recent examples:
Eighty percent of owners buy holiday and birthday gifts for pets
Pet-item sales and services have the second-fastest growth for U.S. retailers after consumer electronics
Humans sequester one quarter to one half of all net terrestrial primary productivity to our use
I typically post one factoid a week, but it's bursty . . .
Thursday, December 28, 2006
A simple hard problem
The Royal Society recently released a podcast of the 2005 President's Address by Lord May of Oxford (PDF). He argues that the key obstacle to solving vital problems facing the planet like global warming and the loss of biodiversity is the collective action dilemma: when everybody has to contribute a little to obtain a large collective benefit, it's in each individual's immediate best interest to cheat.
According to May, how cooperative behavior evolved and is maintained is the most important unanswered question in evolutionary biology. We can see why in many countries' approach to climate change: they are not willing to incur the penalty of lost growth unless there is a guarantee that everyone will act in concert.
The politics of climate change or biodiversity is fiendishly complicated, and may merit the "wicked problem" label. However, the underlying issue is very simple: altruism is not in anyone's immediate self-interest.
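The arithmetic of the dilemma is easy to sketch. Assuming a standard public-goods game with made-up numbers (ten players, contributions tripled and shared equally):

```python
def payoff(my_contribution, others_total, n_players=10, multiplier=3.0):
    # Contributions go into a common pot, are multiplied, and shared equally.
    # My net payoff is my share of the pot minus what I put in.
    pot = (my_contribution + others_total) * multiplier
    return pot / n_players - my_contribution

all_cooperate = payoff(1, 9)   # everyone contributes 1 unit
i_defect = payoff(0, 9)        # I cheat while the other nine contribute
all_defect = payoff(0, 0)      # nobody contributes
```

Cheating while everyone else contributes pays best (2.7 versus 2.0), yet universal cheating pays worst of all (0), which is exactly the trap May describes.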
Tuesday, December 26, 2006
Kinds of Hard
There are two kinds of people in the world: those who divide everything into two categories, and those who . . . um . . . you know . . . Well, I think it’s funny . . . . I guess I’ve been watching too much Daily Nut, where pretty witty young things start every show with jokey chit-chat.
Back to regular programming: My hypothesis is that the hardest problems are changing as we build an increasingly intangible world. The question is, what makes a problem hard? So as usual when I’m stuck, here’s a taxonomy.
Let’s break problem solving into three stages:
- Framing the problem
- Solving it
- Validating the solution
If you know of situations that don’t fit into this scheme, Dear Reader, please let me know. I’ll be looking out for them, too.
1. Framing
In the easiest case, a problem can be clearly stated (e.g. game rules). It gets harder when the problem statement evolves as the solution emerges (e.g. wicked problems). There are cases where it’s not agreed that a problem exists (e.g. global warming until recently). And then there are cases where you don’t even know there’s a problem (e.g. Donald Rumsfeld’s “unknown unknowns”).
2. Solving
Some problems are easy to solve, like tic-tac-toe. Others have multiple solutions (e.g. teaching kids to read). Even mathematically hard problems can have millions of solutions (e.g. superstring theory). Many problems are soluble, but may or may not be hard (e.g. optimization). Some cannot be solved at all (e.g. calculating Omega, Chaitin’s halting probability). In some cases it’s not clear how to even start solving the problem, let alone what path to follow (e.g. speculative software projects). In others, humans think they instinctively know the answer, but in fact get it wrong (e.g. mistakes we make interpreting statistics).
3. Validating
Checking the acceptability of a solution may be a trivial mechanical matter even though finding the solution in the first place is hard (e.g. factoring large numbers). Some problems may have many candidate solutions, but no way to know in advance which one is right, or best (e.g. improving the situation in Iraq). There may be no way of knowing whether an action has solved a problem (e.g. building a road to address congestion). There may be no agreement in a community about whether what's been presented is a valid solution (e.g. the computer proof of the four-color theorem).
Some other approaches
There are many ways of categorizing problems, and I’m just beginning my collection.
Nancy Roberts works in the “wicked problem” tradition and defines three types of problems in “Wicked Problems and Network Approaches to Resolution” (International Public Management Review Vol. 1, No. 1, 2000):
- Simple: there is consensus on both the problem definition and the solution.
- Complex: problem solvers agree on what the problem is, but there is no consensus on how to solve it.
- Wicked: problems with four characteristics: no consensus on the definition of the problem; a vast and diverse group of stakeholders; constantly shifting constraints on the solution; and no consensus on the solution of the problem.
In computer science, problem hardness is often measured by the time it takes to reach a solution as a function of the problem size n. If the time taken is a polynomial function of the problem size, e.g. O(n) or O(n^3), the problem is said to be in the complexity class P. There is a hierarchy of such classes. Problems in the EXPTIME class are solvable by a deterministic Turing machine in time O(2^p(n)), where p(n) is a polynomial function of n; P is a proper subset of EXPTIME. The class NP contains decision problems whose solutions can be verified (but not necessarily found) in polynomial time: P ⊆ NP ⊆ EXPTIME. The hardest problems in NP are NP-complete. For many NP-complete problems, typical cases are actually easy to solve; Cheeseman, Kanefsky and Taylor showed that such problems can be characterized by an "order parameter," and that the hard instances occur at a critical value of that parameter ("Where the Really Hard Problems Are," Proceedings of IJCAI-91).
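The find/verify asymmetry that defines NP shows up clearly in a small subset-sum sketch (an NP-complete problem; the numbers below are arbitrary):

```python
from itertools import combinations

def verify(indices, numbers, target):
    # The "NP" side: checking a proposed certificate takes polynomial time.
    return (len(set(indices)) == len(indices)
            and all(0 <= i < len(numbers) for i in indices)
            and sum(numbers[i] for i in indices) == target)

def search(numbers, target):
    # The hard side: brute force tries up to 2^n subsets.
    for r in range(len(numbers) + 1):
        for combo in combinations(range(len(numbers)), r):
            if sum(numbers[i] for i in combo) == target:
                return list(combo)
    return None

nums = [3, 34, 4, 12, 5, 2]
cert = search(nums, 9)   # a list of indices whose values sum to 9
```

Verifying a certificate is a quick linear pass over the input; finding one, in the worst case, means wading through exponentially many candidates.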
Coda
As always, many thanks to the many people who’ve helped me think about this topic. In this case I’m particularly grateful to Miladin Pavlicic for helping me understand some different kinds of software project challenges.
Monday, December 25, 2006
Happy Holidays, Dear Reader
Thank you for reading, and thank you for your feedback, both in the blog comments, and privately.
May you and your loved ones be healthy, happy, and peaceful
P.
Sunday, December 24, 2006
Scoundrels really are dirty
Wash away your sins. An era of moral decay. Keep your nose clean. Corruption. There are endless examples of metaphors equating morality with cleanliness, and vice with filth. Business school researchers at Toronto and Northwestern have found that this is not just a matter of language; being associated with something immoral leads to an urge to physically clean oneself.
Zhong and Liljenquist have conducted three studies of the “Macbeth effect,” the need to cleanse oneself after a threat to one’s moral purity (Science 8 September 2006: Vol. 313. no. 5792, pp. 1451 – 1452; abstract). New Scientist of 7 September 2006 describes the experiments.
There seems to be an active mental mapping between morality and cleanliness. Ethics is abstract, and we activate our corporeal instincts when thinking about it. Perhaps this is the only way we can think of morality in an extensive way. If that’s true, then ethical concepts that can’t be modeled physically cannot exist.
I don’t know if one can disprove this hypothesis. A candidate concept has evidently been thought of; it then remains to show that there is no physical correlate. However, language is pliable enough that one will always be able to draw a link to a bodily metaphor. Whether this is persuasive will be a subjective judgment. Perhaps neural mapping and brain imaging will eventually be able to help.
Tuesday, December 19, 2006
Known/Understood/Intelligible
I suspect that the most important thinking happens at the edge of intelligibility. It would help to define terms. As a first step, I’d mark a few points on the continuum between clear understanding and incoherent perplexity.
I’ve found it useful to think in terms of known / understood / intelligible. Each of these has three states:
- Known, not known, not knowable
- Understood, not understood, not understandable
- Intelligible, not intelligible, not intelligibuble
Some things are not knowable, in terms of a given system of thought. For example, Heisenberg’s uncertainty principle in quantum mechanics states that increasing the measurement accuracy of a particle’s position increases the uncertainty of a simultaneous measurement of its momentum. If its position is exactly known, its momentum is unknowable.
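In its standard modern form, the tradeoff Heisenberg identified is quantitative:

```latex
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}
```

As the position uncertainty \(\Delta x\) shrinks toward zero, the bound forces the momentum uncertainty \(\Delta p\) toward infinity: perfect knowledge of one quantity makes the other not just unknown but unknowable.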
Something is not understood when there isn’t a compelling answer to a “Why question.” When an explanation is persuasive, a phenomenon is deemed to be understood. We understand why the sun moves through the sky because we accept an explanation about planetary motion that involves Newtonian mechanics and a disposition of the sun and the earth. Understanding is a matter of argumentation; it’s subjective. At least one human is necessarily involved, and usually a community decides whether it understands a process, that is, whether the explanation meets the standards of that community.
When the terms of an explanation exceed one’s grasp, something is not understandable. Religious mysteries fall into this category; the Holy Trinity is not understandable in logical terms. A more mundane version occurs when an individual or group doesn’t have the contextual knowledge that supports an explanation; string theory is not really understandable to those without the requisite knowledge of advanced mathematics. However, I will place such cases in the category “not intelligible.” (Agreed, this taxonomy isn’t water-tight.)
By intelligible I mean something that can be apprehended in general terms. One may not grasp all the steps of an explanation, but the overall shape is familiar. Any book written in English is to some extent intelligible to an English speaker. (Translations of French postmodernists don’t count.) If something is not intelligible, not only do you not understand it – you’re not even sure what the topic of discussion is. Many intellectuals might find it unintelligible that Francis Collins, leader of the human genome project, is both a devout Christian and a scientist.
When there is no possibility of making something intelligible, it’s “not intelligibuble”. Philosopher of mind Thomas Nagel famously argued that because consciousness has an irreducibly subjective component, we will never know what it’s like to be a bat. That experience is not intelligibuble to a human.
Monday, December 11, 2006
A lower bar
Some kind readers noticed that this blog is back after a hiatus. (There are readers!) Roy Peter Clark’s advice helped me get going again. In "Writing Tool #33: Rehearsal," he quotes the poet William Stafford:
I believe that the so-called "writing block" is a product of some kind of disproportion between your standards and your performance ... One should lower his standards until there is no felt threshold to go over in writing. It's easy to write. You just shouldn't have standards that inhibit you from writing.
In short: If you have writer’s block, your standards are too high.
When I got stuck last year, I was helped by Anne Lamott’s advice in Bird by Bird to just bang out “a shitty first draft.” This time around the problem has been getting to a half-decent second draft. Until I figure that out, I’ll take William Stafford’s advice and keep lowering the bar. Sometimes, especially on the web, the shitty first draft just has to make its own way in the world.
Saturday, December 09, 2006
Testing theories of architectural intelligibility
Architecture involves designing spaces that are intelligible: discerning the purpose of a structure, being able to find the entrance, knowing how to get around. Courses in architecture aim to impart the theory and practice of designing intelligible spaces.
3D simulation software is commonly used to construct building models, and can be used to test whether people moving through them can, in fact, make sense of them.
Rather than just test the building for intelligibility, one can test the underlying theories by creating virtual spaces that instantiate them, with a “volume control”: an experimenter can adjust the degree to which a rule is implemented to find the point at which a user can no longer make sense of a building.
Think of it as usability testing meets architectural theory (feng shui, Christopher Alexander, New Urbanism, and on and on).
By the “if I can think of it, someone’s already built it” rule, it’s certain that this has already been done. If you know of examples, Dear Reader, please let me know.
The expensive way would be to use commercial architecture design packages; a quick and dirty approach could use Second Life. The challenges include (1) extracting variable-based rules from architectural design principles, and (2) building the volume control functionality.
Tuesday, December 05, 2006
The edge of intelligibility
I suspect that the most important thinking happens at the edge of intelligibility, between triviality and incoherence, just as it is said that the most complex structures exist at the edge of chaos, between order and complete randomness. Bear with me while this post wanders the borderlands of intelligibility. Hopefully that means it’s a worthwhile topic...
We only argue about things that are uncertain; otherwise, there’s no point in having a debate. Argumentation not only allows participants to test their reasoning and persuade others, but can also lead to new insights. This is particularly true in complex, “wicked” problems where a question can only be grasped by attempting to fashion a solution.
If a discussion involves the question “But what do you mean by X?” it’s probably on the edge of intelligibility. Social debates thrive in this zone. For example, what does “life” mean in the phrase “Life begins at conception?” The abortion debate hinges on when organic matter becomes a human being. This is a very complex question where any answer raises questions about the meaning of the term “human” (at least for non-partisans).
Or: what does “information overload” mean? There’s more information today, but are we more overloaded than our forebears? How would we know? The concept is so broad that we probably can’t measure information overload today, let alone estimate it for past generations. Adam Gopnik argues in “Bumping into Mr. Ravioli” [1] that no-time-to-meet-your-friends busyness is a very modern affliction. Samuel Pepys, a very busy man, never complains of busyness. Gopnik contends that until the middle of the nineteenth century, the defining bourgeois affliction was boredom, not frenzy. Perhaps they had information underload... regardless, both ennui and overload live under the sign of meaninglessness, and thus at the edge of intelligibility.
We oscillate between excitement and boredom because we crave both novelty and predictability. When we have predictability, we become bored and seek novelty; as soon as we’re stimulated, we become agitated and seek refuge in predictability. This experience is probably common to all animals since it’s a good heuristic for finding food and staying safe.
Likewise, we seek both perplexity and reassurance. When it swept the world, sudoku was a two-fer: a perplexing novelty. The daily news is another two-fer: a ritual reassurance that the world hasn’t changed, even as it changes. Derek Lowe points out that there are a number of news templates that are used over and over again, like How The Mighty Have Fallen, or Just Like You Thought. As we alternate between perplexity and reassurance, we skate on the edge of intelligibility. Journalists are very good at giving readers just as much information as they can deal with, and then a little more, adding a pinch of perplexity to the comfort food of understanding.
Donald Rumsfeld provided a multi-layered lesson in intelligibility during a Department of Defense news briefing on Feb. 12, 2002. Since his pronouncement is sometimes edited for clarity [3], here’s what the transcript says:
“Reports that say that something hasn’t happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns, the ones we don’t know we don’t know.”
Rumsfeld’s model sums up the context in which decision makers operate. “Tops” like the Secretary of Defense not only inhabit a world of complexity and responsibility, as Barry Oshry would have it, but also live in the perpetual twilight between the known and unknown.
The cusp between competence and risk is another productive margin. Mihaly Csikszentmihalyi argues in Flow: The Psychology of Optimal Experience that we find most productive and rewarding states of being when a challenge tests but does not exceed our skill. For intellectual performances, the search for flow will take us to the edge of intelligibility.
The edge of intelligibility is at different places for different people. Someone who can grasp a theorem in trigonometry in a glance might struggle to make sense of a situation on the football field; a good quarterback might have trouble reading the motives of people around a meeting table. Since there are so many cognitive competencies [2], there is always justification for anybody to feel those around them are ignoramuses, or to feel that they are out of their depth compared to the expert next to them.
The edge of intelligibility is a subjective question that involves personal expertise and communal standards. Peter Dear’s wonderful new book The Intelligibility of Nature: How Science Makes Sense of the World argues that Newton’s Principia was not considered to be valid natural philosophy by many leading scientists because it did not provide a mechanical explanation of how gravity worked. Getting the right answer wasn’t sufficient to qualify as science; it had to provide a meaningful explanation, too.
The question of intelligibility is related to my pursuit of hard intangibles, but I’m not yet sure exactly how. A problem must be recognizable as such to be tractable, and is thus intelligible to a certain extent. However, it’s hard because so much about it is difficult to grasp. Hopefully the pursuit of puzzlement will eventually lead to more clarity!
----------
[1] “Bumping into Mr. Ravioli” by Adam Gopnik. First published in the New Yorker, September 30, 2002. Reprinted in The Best American Essays 2003.
[2] See e.g. Howard Gardner’s work on Multiple Intelligences. He argues that there are seven distinct intelligences: linguistic; logical-mathematical; spatial; musical; bodily-kinesthetic; interpersonal; and intrapersonal. Each person has a different mix of these skills.
[3] Many poked fun at the Secretary for this formulation. It’s true that he was dodging a question about the lack of evidence of a direct link between the Iraqi government and terrorist organizations, and his reply is more than a little convoluted. But to his credit, it’s not easy to discuss epistemology using a 2x2 matrix in a couple of sentences in a live interview. He left out the entry “unknown knowns,” that is, things that you know without realizing that you do. An example from the War on Terror might be when an organization has important information tucked away in a regional office but top executives aren’t aware of it. I wouldn’t be surprised if this fourth quadrant had been discussed at the Pentagon.
We only argue about things that are uncertain; otherwise, there’s no point in having a debate. Argumentation not only allows participants to test their reasoning and persuade others, but can also lead to new insights. This is particularly true in complex, “wicked” problems where a question can only be grasped by attempting to fashion a solution.
If a discussion involves the question “But what do you mean by X?” it’s probably on the edge of intelligibility. Social debates thrive in this zone. For example, what does “life” mean in the phrase “Life begins at conception”? The abortion debate hinges on when organic matter becomes a human being. This is a very complex question where any answer raises questions about the meaning of the term “human” (at least for non-partisans).
Or: what does “information overload” mean? There’s more information today, but are we more overloaded than our forebears? How would we know? The concept is so broad that we probably can’t measure information overload today, let alone estimate it for past generations. Adam Gopnik argues in “Bumping into Mr. Ravioli” [1] that no-time-to-meet-your-friends busyness is a very modern affliction. Samuel Pepys, a very busy man, never complains of busyness. Gopnik contends that until the middle of the nineteenth century, the defining bourgeois affliction was boredom, not frenzy. Perhaps they had information underload... regardless, both ennui and overload live under the sign of meaninglessness, and thus at the edge of intelligibility.
We oscillate between excitement and boredom because we crave both novelty and predictability. When we have predictability, we become bored and seek novelty; as soon as we’re stimulated, we become agitated and seek refuge in predictability. This experience is probably common to all animals since it’s a good heuristic for finding food and staying safe.
Likewise, we seek both perplexity and reassurance. When it swept the world, sudoku was a two-fer: a perplexing novelty. The daily news is another two-fer: a ritual reassurance that the world hasn’t changed, even as it changes. Derek Lowe points out that there are a number of news templates that are used over and over again, like How The Mighty Have Fallen, or Just Like You Thought. As we alternate between perplexity and reassurance, we skate on the edge of intelligibility. Journalists are very good at giving readers just as much information as they can deal with, and then a little more, adding a pinch of perplexity to the comfort food of understanding.
Donald Rumsfeld provided a multi-layered lesson in intelligibility during a Department of Defense news briefing on Feb. 12, 2002. Since his pronouncement is sometimes edited for clarity [3], here’s what the transcript says:
“Reports that say that something hasn't happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns -- the ones we don't know we don't know. And if one looks throughout the history of our country and other free countries, it is the latter category that tend [sic] to be the difficult ones.”
Rumsfeld’s model sums up the context in which decision makers operate. “Tops” like the Secretary of Defense not only inhabit a world of complexity and responsibility, as Barry Oshry would have it, but also live in the perpetual twilight between the known and unknown.
The cusp between competence and risk is another productive margin. Mihaly Csikszentmihalyi argues in Flow: The Psychology of Optimal Experience that we reach our most productive and rewarding states of being when a challenge tests but does not exceed our skill. For intellectual performances, the search for flow will take us to the edge of intelligibility.
The edge of intelligibility is at different places for different people. Someone who can grasp a theorem in trigonometry at a glance might struggle to make sense of a situation on the football field; a good quarterback might have trouble reading the motives of people around a meeting table. Since there are so many cognitive competencies [2], there is always justification for anybody to feel that those around them are ignoramuses, or to feel out of their depth compared to the expert next to them.
The edge of intelligibility is a subjective question that involves personal expertise and communal standards. Peter Dear’s wonderful new book The Intelligibility of Nature: How Science Makes Sense of the World argues that Newton’s Principia was not considered to be valid natural philosophy by many leading scientists because it did not provide a mechanical explanation of how gravity worked. Getting the right answer wasn’t sufficient to qualify as science; it had to provide a meaningful explanation, too.
The question of intelligibility is related to my pursuit of hard intangibles, but I’m not yet sure exactly how. A problem must be recognizable as such to be tractable, and is thus intelligible to a certain extent. However, it’s hard because so much about it is difficult to grasp. Hopefully the pursuit of puzzlement will eventually lead to more clarity!
----------
[1] “Bumping into Mr. Ravioli” by Adam Gopnik. First published in the New Yorker, September 30, 2003. Reprinted in The Best American Essays 2003.
[2] See e.g. Howard Gardner’s work on Multiple Intelligences. He argues that there are seven distinct intelligences: linguistic; logical-mathematical; spatial; musical; bodily-kinesthetic; interpersonal; and intrapersonal. Each person has a different mix of these skills.
[3] Many poked fun at the Secretary for this formulation. It’s true that he was dodging a question about the lack of evidence of a direct link between the Iraqi government and terrorist organizations, and his reply is more than a little convoluted. But to his credit, it’s not easy to discuss epistemology using a 2x2 matrix in a couple of sentences in a live interview. He left out the entry “unknown knowns,” that is, things that you know without realizing that you do. An example from the War on Terror might be when an organization has important information tucked away in a regional office but top executives aren’t aware of it. I wouldn’t be surprised if this fourth quadrant had been discussed at the Pentagon.