Suze Woolf put me on to Grady Booch’s Handbook of Software Architecture, which aims to codify the architecture of a large collection of interesting software-intensive systems, presenting them in a manner that exposes their essential patterns, and that permits comparisons across domains and architectural styles.
Booch mentions in passing on the Welcome page that “abstraction is the primary way we as humans deal with complexity”. I don’t know if that’s true; it sounds plausible. It’s definitely true that software developers deal with complexity this way, creating a ladder of increasingly succinct languages that are ever further away from the nitty gritty of the machine. While there are huge benefits in productivity, there’s also a price to pay; as Scott Rosenberg puts it in Dreaming in Code, “It's not that [developers] wouldn't welcome taking another step up the abstraction ladder; but they fear that no matter how high they climb on that ladder, they will always have to run up and down it more than they'd like--and the taller it becomes, the longer the trip.”
The notion of “limits to abstraction” is another useful way to frame the hard intangibles problem.
These limits may be structural (abstraction may fail because of the properties of a problem, or the abstraction) or cognitive (it may fail because the thinker’s mind cannot process it). In The Law of Leaky Abstractions (2002), Joel Spolsky wrote (giving lots of great examples) “All non-trivial abstractions, to some degree, are leaky. Abstractions fail. Sometimes a little, sometimes a lot. There's leakage. Things go wrong. It happens all over the place when you have abstractions. . . . One reason the law of leaky abstractions is problematic is that it means that abstractions do not really simplify our lives as much as they were meant to.”
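To make the leakiness concrete, here is a small example of my own (not one of Spolsky's): floating-point numbers abstract the real numbers, and the abstraction leaks the moment you rely on properties of real arithmetic, such as exact decimal values or associativity, that the underlying binary representation doesn't preserve.

```python
# Floating point abstracts the reals, but the abstraction leaks.
a = 0.1 + 0.2
print(a == 0.3)              # False: 0.1 and 0.2 have no exact binary representation
print(a)                     # 0.30000000000000004

# Real addition is associative; floating-point addition is not.
print((1e16 + 1.0) - 1e16)   # 0.0  (the +1 is absorbed by rounding)
print((1e16 - 1e16) + 1.0)   # 1.0  (the same expression over the reals, regrouped)
```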
There’s more for me to do here, digging into the literature on abstraction. Kit Fine’s book, The Limits of Abstraction (2002) could be useful, though it’s very technical – but at least there have been lots of reviews.
"in this world, there is one awful thing, and that is that everyone has their reasons" --- attrib. to Jean Renoir (details in the Quotes blog.)
Friday, August 31, 2007
Tuesday, August 28, 2007
The Non-conscious
An essay by Chris Frith in New Scientist (11 August 2007, subscribers only) on difficulties with the notion of free will contains a useful list of experiments and thought-provoking findings.
He reminds us of Benjamin Libet’s 1983 experiment indicating that decisions are made in the brain before our consciousness is aware of it:
“Using an electroencephalogram, Libet and his colleagues monitored their subjects' brains, telling them: "Lift your finger whenever you feel the urge to do so." This is about as near as we get to free will in the lab. It was already known that there is a detectable change in brain activity up to a second before you "spontaneously" lift your finger, but when did the urge occur in the mind? Amazingly, Libet found that his subjects' change in brain activity occurred 300 milliseconds before they reported the urge to lift their fingers. This implies that, by measuring your brain activity, I can predict when you are going to have an urge to act. Apparently this simple but freely chosen decision is determined by the preceding brain activity. It is our brains that cause our actions. Our minds just come along for the ride.”

Frith recalls the Hering illusion, where a background of radiating lines makes superposed lines seem curved. Even though one knows “rationally” that the lines are straight, one sees them as curved. Frith uses this as an analogy for the illusion that we feel as if we are controlling our actions. To me, this illusion (and perhaps even more profoundly, the Hermann-grid illusion) points to the way different realities can coexist. There is no doubt that humans experience the Hering lines as curved, and that we see shadows in the crossings of the Hermann grid. Likewise, there is no doubt that many (most?) humans have an experience of the divine. The divine is an experiential reality, even if it mightn’t exist by some objective measures.
Other results mentioned include Patrick Haggard’s findings that the act of acting strengthens belief in causation; Daniel Wegner’s work on how one can confuse agency when another person is involved; and work by various researchers on how people respond to free riders; and Dijksterhuis et al’s work on non-conscious decision making, which I discussed in Don’t think about it.
Black Swans in the Economist
A recent Economist (vol 384, no. 8542, 18-24 August 2007) had two great pieces of evidence for Nassim Nicholas Taleb’s contentions in The Black Swan that we never predict the really important events, and that Wall Street is dangerously addicted to (Gaussian) models that do not correctly reflect the likelihood of very rare events.
From On top of everything else, not very good at its job, a review of a history of the CIA by Tim Weiner:
“The CIA failed to warn the White House of the first Soviet atom bomb (1949), the Chinese invasion of South Korea (1950), anti-Soviet risings in East Germany (1953) and Hungary (1956), the dispatch of Soviet missiles to Cuba (1962), the Arab-Israeli war of 1967 and Saddam Hussein's invasion of Kuwait in 1990. It overplayed Soviet military capacities in the 1950s, then underplayed them before overplaying them again in the 1970s.”

From The game is up, a survey of how the sub-prime lending crisis came about:
“Goldman Sachs admitted [that their investment models were useless] when it said that its funds had been hit by moves that its models suggested were 25 standard deviations away from normal. In terms of probability (where 1 is a certainty and 0 an impossibility), that translates into a likelihood of 0.000...0006, where there are 138 zeros before the six. That is silly.”
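The arithmetic behind that probability is easy enough to sanity-check; the sketch below (mine, not the Economist's) computes the one-sided Gaussian tail probability of a 25-standard-deviation move. It comes out around 3e-138, the same vanishing order of magnitude, give or take conventions about one-sided versus two-sided moves and the time horizon.

```python
import math

# One-sided tail probability of a 25-sigma move under a Gaussian model:
# P(X > 25*sigma) = erfc(25 / sqrt(2)) / 2
p = math.erfc(25 / math.sqrt(2)) / 2
print(f"{p:.1e}")   # about 3e-138: roughly 137 zeros after the decimal point
                    # before the first significant digit
```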
Saturday, August 25, 2007
Programs as spaces
Paul Graham's essay Holding a Program in One’s Head describes how a good programmer immersed in their code holds it in their mind: "[Mathematicians] try to understand a problem space well enough that they can walk around it the way you can walk around the memory of the house you grew up in. At its best programming is the same. You hold the whole program in your head, and you can manipulate it at will."
The article’s mainly concerned with the organizational consequences of needing to "load the program into your head” in order to do good work. But I want to focus on the spatial metaphor. Thinking through a program by walking through the spaces in your head is an image I've heard other programmers use, and it reminds me of the memory methods described by Frances Yates in The Art of Memory. (While Graham does make reference to writing and reading, I don't think this is aural memory; his references to visualization seem more fundamental.)
I wonder about the kind of cognitive access a programmer has to their program once it’s loaded. Descriptions of walking through a building imply that moment-by-moment the programmer is only dealing with a subset of the problem, although the whole thing is readily available in long-term memory. They’re thinking about the contents of a particular room and how it connects with the other rooms, not conceptualizing the entire house and all its relationships at the same instant. I imagine this is necessarily the case, since short-term memory is limited. If true, this imposes limitations on the topology of the program, since the connections between different parts must be localized and factorizable – when you walk out of the bedroom you don’t immediately find yourself in the foyer. Consequently, problems that can’t be broken down (or haven’t been broken down) into pieces with local interactions of sufficiently limited scope to be contained in short-term memory will not be soluble.
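To make the "rooms with limited doors" idea slightly more concrete, here is a toy sketch of my own (not something Graham proposes): represent a program as a dependency graph and flag any module whose direct connections exceed a small working-memory budget, on the theory that such a module can't be walked through one room at a time. The module names and the budget of four (echoing the Halford limit discussed in "Don't think about it" below) are purely illustrative.

```python
# Toy sketch: flag modules whose direct dependencies exceed a small
# "working-memory budget". The dependency graph is invented for illustration.
WORKING_MEMORY_BUDGET = 4

dependencies = {
    "lexer":     [],
    "ast":       [],
    "parser":    ["lexer", "ast"],
    "codegen":   ["ir", "target_info"],
    "optimizer": ["ast", "ir", "passes", "target_info", "profiler"],
}

for module, deps in dependencies.items():
    if len(deps) > WORKING_MEMORY_BUDGET:
        print(f"{module}: {len(deps)} direct connections, "
              f"too many adjoining rooms to hold in mind at once")
```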
Graham also has a great insight on what makes programming special: "One of the defining qualities of organizations since there have been such a thing is to treat individuals as interchangeable parts. This works well for more parallelizable tasks, like fighting wars. For most of history a well-drilled army of professional soldiers could be counted on to beat an army of individual warriors, no matter how valorous. But having ideas is not very parallelizable. And that's what programs are: ideas." Not only are programming tasks not like fighting wars as Graham imagines them; they're not like manufacturing widgets either. The non-parallelizability of ideas implies their interconnections, and here we have the fundamental tension: ideas may be highly interlaced by their nature, but the nature of the brain limits the degree to which we can cope with their complexity.
Monday, August 20, 2007
Don’t think about it
Dijksterhuis and colleagues at the University of Amsterdam found that unconscious intuition is better than conscious cogitation for some complex problems. I’ve wondered for a while whether Halford & Co’s finding that humans can process at most four independent variables simultaneously would change if one biased the test towards subconscious thinking. The Dijksterhuis work suggests that it might increase the number of processable variables. (See below for references.)
Dijksterhuis et al. (2006) hypothesized that decisions requiring the evaluation of many factors may be better made by the sub-conscious. In one experiment, they asked volunteers to choose a car based on four attributes; the choice was constructed to be fairly easy, and subjects handled it well. When subjects were then asked to think through a dozen attributes, however, they did no better than chance. Yet when they were distracted, so that the thinking took place subconsciously, they did much better. Conclusion: conscious thinkers were better able to make the best choice among simple products, whereas unconscious thinkers were better able to make the best choice among complex products.
Halford defines the complexity of a cognitive process as the number of interacting variables that must be represented in parallel to implement the most complex step in a process (see Halford et al. 1998 for a review). He argues that relational complexity is a serviceable metric for conceptual complexity. Halford et al. (2005) found that a structure defined on four independent variables is at the limit of human processing capacity. Participants were asked to interpret graphically displayed statistical interactions. Results showed a significant decline in accuracy and speed of solution from three-way to four-way interactions; performance on a five-way interaction was at a chance level.
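One way to see why the difficulty ramps up so steeply (my illustration, not Halford's materials, which used graphically displayed interactions): the number of cells that must be related in parallel to interpret an n-way interaction between binary variables grows as 2^n.

```python
from itertools import product

# Cells to integrate when interpreting an n-way interaction of binary variables.
# Illustrative only; Halford et al. presented graphical interactions, not lists.
for n in range(2, 6):
    cells = list(product([0, 1], repeat=n))
    print(f"{n}-way interaction: {len(cells)} cells to relate in parallel")
# 2-way: 4, 3-way: 8, 4-way: 16, 5-way: 32
```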
The Amsterdam experiment wasn’t equivalent, because the variables weren’t independent – it seems the decision was a matter of counting the number of positive attributes in the case of car choice. (One car was characterized by 75% positive attributes, two by 50% positive attributes, and one by 25% positive attributes.) Dijksterhuis et al. define complexity as “the amount of information a choice involves;” more attributes therefore means higher complexity. I don’t know how to map between the Dijksterhuis and Halford complexity metrics.
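Put differently, the normatively "best" car in the Dijksterhuis materials falls out of a simple count, something like the reconstruction below (my sketch based on the description above, not the actual stimuli):

```python
import random

# Reconstruction of the car-choice logic described above: each car has some
# fraction of positive attributes, and the "best" car is just the one with the
# most positives. The attribute values are made up for illustration.
def make_car(n_attributes, fraction_positive):
    positives = int(round(n_attributes * fraction_positive))
    attrs = [True] * positives + [False] * (n_attributes - positives)
    random.shuffle(attrs)
    return attrs

cars = {
    "A": make_car(12, 0.75),   # the intended best choice
    "B": make_car(12, 0.50),
    "C": make_car(12, 0.50),
    "D": make_car(12, 0.25),
}
best = max(cars, key=lambda name: sum(cars[name]))
print(best)   # "A": the attributes combine additively rather than interacting
```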
Still, I’ve wondered about what might happen if subjects had only seconds to guess at the graphical comparisons used in Halford et al. (2005), rather than finding the answer by deliberation. If they were given Right/Wrong feedback as they went, they might intuitively learn how to guess the answer. (I’m thinking of something like this simulation of the Monty Hall game.) If this were the case, it could undermine my claims about the innate limits to software innovation for large pieces of code (or projects in general) with large numbers of independent variables.
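For what it's worth, the kind of simulation I have in mind is no more complicated than this minimal sketch of the Monty Hall game, in which switching wins about two-thirds of the time:

```python
import random

# Minimal Monty Hall simulation: switching wins roughly 2/3 of the time.
def play(switch):
    prize = random.randrange(3)
    choice = random.randrange(3)
    # The host opens a door that hides no prize and isn't the player's pick.
    opened = random.choice([d for d in range(3) if d not in (prize, choice)])
    if switch:
        choice = next(d for d in range(3) if d not in (choice, opened))
    return choice == prize

trials = 100_000
print(sum(play(switch=True) for _ in range(trials)) / trials)    # ~0.667
print(sum(play(switch=False) for _ in range(trials)) / trials)   # ~0.333
```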
References
Ap Dijksterhuis, Maarten W. Bos, Loran F. Nordgren, and Rick B. van Baaren (2006)
On Making the Right Choice: The Deliberation-Without-Attention Effect
Science 17 February 2006 311: 1005-1007 [DOI: 10.1126/science.1121629]
Graeme S. Halford, Rosemary Baker, Julie E. McCredden, John D. Bain (2005)
How Many Variables Can Humans Process? (experiment)
Psychological Science 16 (1), 70–76
G. S. Halford, W. H. Wilson, & S. Phillips (1998)
Processing capacity defined by relational complexity: Implications for comparative, developmental, and cognitive psychology (review article)
Behavioral and Brain Sciences, 21, 803–831.
Thursday, August 16, 2007
Music: back to tangibles
The Economist sums up a story on the record labels’ new business model with an Edgar Bronfman quote: “The music industry is growing. The record industry is not growing.”
It seems the labels have decided that they need a cut of more than just a band’s CD sales; their new contracts include live music, merchandise, and endorsement deals.
Just as the old instincts for relationships and reality have driven the pet industry to generate more revenue than media (see my post Animal Instincts), tangibles are reasserting themselves in the music industry. The Economist, citing the Music Managers Forum trade group, reports that seven years ago, record-label musicians derived two-thirds of their income from pre-recorded music, with the other one-third coming from concert tours, merchandise and endorsements. Those proportions have now been reversed. For example, concert-ticket sales in North America alone increased from $1.7 billion in 2000 to over $3.1 billion last year, according to Pollstar, a trade magazine.
Sunday, August 12, 2007
Animal Instincts
Americans spend $41 billion per year on their pets, according to a feature article in BusinessWeek. That’s about $400 per household, and more than the GDP of all but 64 countries in the world.
It’s good to know that the old hard-wired priorities – relationships, even with animals, and schtuff – still trump the new intangibles. According to BW, the yearly cost of buying, feeding, and caring for pets is more than the combined sum of what Americans spend on the movies ($10.8 billion), playing video games ($11.6 billion), and listening to recorded music ($10.6 billion).
Pet care is the second-fastest growing retail sector after consumer electronics. But the intangible economy is unavoidable even here: services like ‘pet hotels’ (kennels, to you and me), grooming, training, and in-store hospitals have helped PetSmart expand its service business from essentially nothing in 2000 to $450 million, or 10% of overall sales, this year.
(It seems that pets are now more popular as companions for empty-nesters, single professionals and DINKYs, than as kids’ sidekicks. With this kind of infantilization, will it be long before more grown-ups start admitting to still sleeping with teddy bears?)
Saturday, August 11, 2007
The mortgage mess as a cognitive problem
The sub-prime mortgage debacle is a problem of cognitive complexity. A lack of understanding of the risks entailed by deeply nested loan relationships is leading to a lack of trust in the markets, and this uncertainty is leading to a sell-off. More transparency will help – but has its limits.
A story on NPR quotes Lars Christensen, an economist at Danske Bank, as saying that there is no trust in the market because of the unknown complexity of the transactions involved. (Adam Davidson, “U.S. Mortgage Market Woes Spread to Europe,” All Things Considered, Aug 10th, 2007; more was broadcast than seems to be in the online version.)
This is a ‘hard intangibles’ problem: intricate chains and bundles of debt arise because there’s no physical limit on the number of times these abstractions can be recomposed and layered, with banks lending to other banks based on bundles of bundled loans as collateral. When questions arise about the solvency of one of the root borrowers, the uncertainty spreads, in large part because there’s no transparency into what the banks are holding. According to the Economist (“Crunch time,” Aug 9th 2007), complex, off-balance-sheet financial instruments were the catalyst for the market sell-off. Phrases like “investors have begun to worry about where else such problems are likely to crop up” suggest that lack of understanding is driving uncertainty. The entire market is frozen in place, like soldiers in a minefield: one mine has gone off, but no one knows where the next is buried.
One of the drivers of the problem, according to the Financial Times (Paul J Davies, “Who is next to catch subprime flu?” Aug 9th 2007), is that low interest rates have propelled investors into riskier and more complex securities that pay a higher yield. “Complexity” is a way of saying that few if any analysts truly understand the inter-relationships among these instruments. The market is facilitated by the use of sophisticated models (i.e. computer programs) that predict the probabilities of default among borrowers, given the convoluted structure of asset-backed bonds. As the crisis has evolved, banks have come to realize – again, suggesting that this was not immediately obvious – that they’re exposed across the credit markets, to more forms of credit risk than they thought.
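A toy model makes the layering vivid (the structure and numbers below are entirely invented, just to illustrate the point): each bundle holds fractions of loans or of other bundles, and a bank's exposure to a single defaulting borrower can only be discovered by unwinding the whole nest.

```python
# Toy model of nested exposure: who is ultimately backed by whom?
# All names, weights and structure are invented for illustration.
holdings = {
    "bundle_A":   {"loan_smith": 0.10, "loan_jones": 0.90},
    "bundle_B":   {"bundle_A": 0.50, "loan_garcia": 0.50},
    "cdo_senior": {"bundle_B": 0.30, "bundle_A": 0.20},
    "bank_book":  {"cdo_senior": 0.60, "bundle_B": 0.40},
}

def exposure(asset, borrower):
    """Fraction of `asset` ultimately backed by `borrower`."""
    if asset == borrower:
        return 1.0
    return sum(weight * exposure(part, borrower)
               for part, weight in holdings.get(asset, {}).items())

print(exposure("bank_book", "loan_smith"))   # ~0.04: small, indirect, and
                                             # invisible without the full graph
```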
This suggests a policy response to a world of hard intangibles: enforced transparency. The US stock market, the most advanced in the world, has of necessity evolved stricter disclosure requirements on company operations than its imitators. According to the FT, the Bundesbank is telling all the German institutions to put everything related to sub-prime problems on the table – indicating that increased visibility for the market will improve matters.
It’s striking how little the banks seem to know. BusinessWeek quotes an economist at Caltech as recommending that the Federal Reserve insist that firms rapidly evaluate their portfolios to determine exactly how much of the toxic, risky investments they hold – by implication, they don’t know. (Ben Steverman, “Markets: Keeping the Bears at Bay,” Aug 10th 2007.) The situation is summed up well here:
“The problem here is that the financial industry has created a raft of new, so-called innovative debt products that are hard, even in the best of times, to place an accurate value on. "You don't have the transparency that exists with exchange products," the second-by-second adjustment in a stock price, for example, says Brad Bailey, a senior analyst at the Aite Group. The products are so complex that many investors might have bought them without realizing how risky they are, he says.”
Transparency may be a useful solvent for governance problems in all complex situations. For example, in an unpublished draft paper on network neutrality, analysts at RAND Europe recommend that access to content on next-generation networks be primarily enforced via reporting requirements on network operators, e.g. requiring service providers to inform consumers about the choices they are making when they sign up for a service – one of the keystones of the Internet Freedoms advocated by the FCC under Michael Powell. This may be one of the only ways to provide some degree of management of the modularized value mesh of today’s communications services.
However, transparency as a solution is limited by the degree to which humans can make sense of the information that is made available. If a structure is more complex than we can grasp, then there are limits to the benefits of knowing its details. I hesitate to draw the corollary: that limits should be imposed on the complexity of the intangible structures we create.
Friday, August 10, 2007
Wang Wei
Wang Wei is one of China’s greatest poets (and painters), the creator of small, evocative landscape poems that are steeped in tranquility and sadness. More than that, though: while the poems are about solitude rooted in a serious practice of ch’an meditation, Wang worked diligently as a senior civil servant all his life.
I found his work via a review of Jane Hirshfield’s wonderful collection After. David Hinton’s selection of Wang Wei poems is beautifully wrought. The book is carefully designed, and the Introduction and Notes are very helpful.
Wang’s poetry gives no hint of the daily bureaucracy that he must have dealt with. It’s anchored in his hermitage in the Whole-South Mountain, which was a few hours from the capital city where he worked. Wang was born in the early Tang dynasty to one of the leading families. He had a successful civil service career, passing the entrance exam at the young age of 21 and eventually serving as chancellor. His wife died when he was 29, at which point he established a monastery on his estate. He died at the age of 60.
Deer Park is one of his most famous poems. Here’s Hinton’s translation:
No one seen. Among empty mountains,
hints of drifting voice, faint, no more.
Entering these deep woods, late sunlight
flares on green moss again, and rises.
(For more translations, see here and here.)
Wang’s work shows that one can make spiritual progress while also participating in society. One does not have to give up the daily life and become a monk, though it’s surely important that one’s mundane activities make a contribution to the good of society.
Thursday, August 09, 2007
Factoid: 19 million programmers by 2010
According to Evans Data Corp, the global developer population will approach 19 million in 2010. (I found this via ZDNet's ITFacts blog; the EDC site requires registration to even see the press release.) That's quite a big number - the total population of Australia, for example.
Programming will not be a marginal activity, and any fundamental cognitive constraints on our ability to develop increasingly complex programs will be impossible to avoid.
A lot of the growth will come from new countries bringing programmers online: EDC forecasts that the developer population in APAC will grow by 83% from 2006 to 2010, compared to just 15% in North America over the same period. This should keep the skill level high, since the new entrants will be drawn from the most talented people in those countries, rather than from expanding the percentage of any given country's population that programs - which would reduce average skill.
Therefore, the qualitative problems of programming won't change much in the next 5-10 years. However, beyond that we may also face the issue of declining innate skill levels among programmers.