Saturday, April 28, 2007

Mus Gnarus

In Incognita Incognita I made a big deal about the limits of our knowledge of what we don’t know. Reality check: even rats understand the limits of their knowledge, so I shouldn’t get too carried away.

In The Rodent Who Knew Too Much in ScienceNOW (8 Mar 2007, subscription required), Gisela Telis reports on a study that tested the self-knowledge of rats. The experimenters trained rats to understand that they could get a big food reward (six pellets) if they correctly distinguished a long sound from a short one. They could boycott the test if they wanted to, though, and go for a smaller but guaranteed reward (three pellets). As the sounds became harder to distinguish, the rats would opt out and go for the certain, though less generous, reward.
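The payoff structure in the experiment implies a simple decision rule. A minimal sketch, assuming (the article doesn’t say) that a wrong answer earns nothing:

```python
def expected_pellets(p_correct, test_reward=6, sure_reward=3):
    """Expected payoff of taking the test vs. opting out.

    Assumption (not stated in the article): a wrong answer earns nothing.
    """
    take_test = p_correct * test_reward
    opt_out = sure_reward
    return take_test, opt_out

def should_opt_out(p_correct):
    """An ideal forager opts out when the sure reward beats the gamble."""
    take_test, opt_out = expected_pellets(p_correct)
    return opt_out > take_test

# With these numbers the crossover is at 3/6 = 0.5: opting out pays
# exactly when the rat's confidence in its discrimination drops below 50%.
```

Of course the rats aren’t computing expected values explicitly; the interesting result is that their opt-out behavior tracks this rule, which requires some estimate of their own uncertainty.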

The ability to gauge one’s own knowledge is known as metacognition. We know that humans can do it, and it’s been demonstrated in monkeys and dolphins; this is the first time the effect has been shown in smaller-brained animals.

Metacognition, or “thinking about thinking,” is a strategic activity. It allows us to reflect on a (tactical) cognitive lack, such as a blind spot for names, and to take immediate steps to remember the name of someone you’ve just met. (“A pleasure to meet you, John. So tell me, John, did you enjoy the lecture? You know, I agree with that assessment, John.”) Metacognitive strategies can be learned – which gives me hope that any conclusions I might draw about Hard Intangibles will lead to more effective thinking.

P.S. Latin doesn’t seem to distinguish between "rat" and "mouse" (mus). Gnarus means "knowing" or "expert."

Friday, April 27, 2007

Defining the Internet

A discussion started on slashdot last night about a succinct layman’s definition of the Internet.

Most of the examples were technical, and related to computers connected in some way. There were some metaphors: highways, trains, telephones, mail. There were a few references to its social aspects. Anthropomorphism was pervasive: computers talking to each other, sending messages, sharing information with each other.

Most of the discussion was about what the Internet is, with some comments about how it works, and occasional references to its social function.

Here’s a précis of the definitions given so far:
  • collection of ideas
  • bunch of connected computers
  • information as trains running on tracks
  • general purpose communication system
  • means for computers to connect to each other and share information
  • everybody already knows what it is
  • computers talking to other computers over cables
  • roadway, highway
  • telephone system with computers calling computers
  • global public computer network
  • mail system
  • physical: computers sending messages; social: virtual community; functional: way to use computers to send messages; technical: computers using protocols
  • agreement (protocol) about how to have networks talk to each other
The best paper I’ve seen on this topic is Susan Crawford’s “Internet Think,” which contrasts the very different ways that “Engineers,” “Telcos,” and “Netheads” define the Internet. Most of the slashdot discussion would fall in the Engineers category.

The metaphors used for the Internet are so stable they’re stale . . . Perhaps the clean slate movement will stir up our thinking. How about the Internet as a brain (back to the Fifties!), or an ecosystem, or a society? The asymmetry of conceptual metaphors is perceptible in the last one: it’s more common to think about society using the Internet as a model (cf. Manuel Castells) than to model the Internet by thinking about society.

Thursday, April 26, 2007

Ducking hard questions: Objective vs. Subjective

In the conclusions to his 1974 paper “Structured Programming with go to Statements,” (Computing Surveys, Vol. 6, No. 4) Donald Knuth observes:

One thing we haven’t spelled out clearly, however, is what makes some go to’s bad and others acceptable. The reason is that we’ve really been directing our attention to the wrong issue, to the objective question of go to elimination instead of the important subjective question of program structure. In the words of John Brown [Knuth citation: “In memoriam . . . .”, unpublished note, January 1974], “The act of focusing our mightiest intellectual resources on the elusive goal of go to-less programs has helped us get our minds off all those really tough and possibly unresolvable problems and issues with which today’s professional programmer would otherwise have to grapple.”

This is a useful and concrete reminder that fixating on objective, answerable questions can miss the point. There is a certain delight in framing an objective question: it’s elegant, precise, and one can tell when it’s been answered. Some truly important questions, though, don’t lend themselves to objective formulations. This may be because they pertain to complex concepts which have so many interlocking variables that they cannot be reduced to an intelligible logical form, and/or because they refer to notions that are ambiguous or contested. (I suspect that these two conditions, non-linear complexity and ambiguity, are related through our inability to fit the whole of a big question into a single brain.)

Monday, April 16, 2007

Incognita Incognita

It’s hard to think about what we don’t know. If we don’t know something, there’s no “thing” for our consciousness to attend to. One can imagine the unknown as the inverse of what one does know, but that’s just the known combined with the “not” operator, rather than the unknown itself. Most commonly, we tame the unknown with a name. The old mapmakers marked mysterious places as terra incognita, today’s cosmologists explain unexpected galactic dynamics by invoking dark matter, and the religious use the word God.

And of course there’s Donald Rumsfeld, he of the unknown unknown. I’m thinking here of a third category beyond his “known unknown” and “unknown unknown”: the unknowable unknown.

Even though it’s easy enough to think about not knowing, as I’m doing now, it’s not something I do very often. I seldom look at the wall of a lecture theater and realize that I don’t know what’s behind it. My thinking stops at the wall, and bounces back into the room that I can perceive.

Dogs are largely oblivious to human conversation. They don’t follow the to and fro of conversation. They are aware of the sound and some of its import, but they don’t know its meaning. In a sense, it doesn’t exist for them. In the same way, I’m ignorant of much going on inside me and around me, and I’m ignorant of the fact that I’m ignorant.

Things I know sometimes feel like places. As I learn more about a subject, I can begin to assemble the rooms representing topics into a building. But if I don’t know something (statistics, say) it’s not as if it’s the unexplored south wing of a mansion. There is no south wing. I have no sense of its shape. Things once known but now forgotten (like Green functions, in my case) are ghostly ruins remembered from a dream; there are only wisps and fragments.

We make up stuff to hide the fact that we don’t know. Helen Phillips describes in New Scientist (“Mind fiction: Why your brain tells tall tales,” 7 October 2006) how people make up stories when the reasons for their action are not available to conscious introspection:
[Timothy Wilson and Richard Nisbett] laid out a display of four identical items of clothing and asked people to pick which they thought was the best quality. It is known that people tend to subconsciously prefer the rightmost object in a sequence if given no other choice criteria, and sure enough about four out of five participants did favour the garment on the right. Yet when asked why they made the choice they did, nobody gave position as a reason. It was always about the fineness of the weave, richer colour or superior texture. This suggests that while we may make our decisions subconsciously, we rationalise them in our consciousness, and the way we do so may be pure fiction, or confabulation.

Note that people didn’t say, “I don’t know.” This is an important result for the study of hard intangibles. We are usually not aware that we have a limitation. Sometimes we cannot even believe that we’re limited. Here’s another excerpt from the New Scientist story:
It is surprisingly common for stroke patients with paralysed limbs or even blindness to deny they have anything wrong with them, even if only for a couple of days after the event. They often make up elaborate tales to explain away their problems. One of Hirstein's patients, for example, had a paralysed arm, but believed it was normal, telling him that the dead arm lying in the bed beside her was not in fact her own. When he pointed out her wedding ring, she said with horror that someone had taken it. When asked to prove her arm was fine, by moving it, she made up an excuse about her arthritis being painful. It seems amazing that she could believe such an impossible story. Yet when Vilayanur Ramachandran of the University of California, San Diego, offered cash to patients with this kind of delusion, promising higher rewards for tasks they couldn't possibly do - such as clapping or changing a light bulb - and lower rewards for tasks they could, they would always attempt the high pay-off task, as if they genuinely had no idea they would fail.

If we can observe the limitation in others, we can at least study it, if not experience it ourselves. However, it will be hard to teach others – and ourselves – to behave differently if, in our bones, we still cannot conceive of our lack.

Tuesday, April 10, 2007

Let’s hope we’re not rational about climate change

Global warming is a classic collective action dilemma.

A solution to global warming is a collective good and will be undersupplied, as Mancur Olson pointed out back in 1965.

Therefore, if Olson’s premises and argument are valid, we’re dooooooomed.

However: his argument supposed a rational economic agent who will wait for others to act, since his contribution is so small that on its own it won’t make a difference, and its absence won’t be noticed.
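Olson’s free-rider logic can be made concrete with a stylised public-goods game. The numbers here are purely hypothetical; the point is the structure:

```python
def payoff(contributes, others_contributing, n=1_000_000,
           cost=100.0, benefit_per_contribution=150.0):
    """Payoff to one agent in a stylised public-goods game.

    Hypothetical numbers: each contribution costs its maker 100 but
    produces 150 of benefit shared equally among all n agents, so the
    contributor's private return on their own contribution is 150 / n.
    """
    total = others_contributing + (1 if contributes else 0)
    shared_benefit = total * benefit_per_contribution / n
    return shared_benefit - (cost if contributes else 0.0)

# Whatever everyone else does, staying out dominates: contributing adds
# only 150/n to my benefit but costs me 100. Yet if everyone contributes,
# everyone nets 50 -- better than the 0 of universal defection.
```

This is why a rational agent waits for others to act, and why the collective good is undersupplied even though all would prefer it supplied.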

Only if humans don’t act as selfish rational agents will we avoid a climate catastrophe.

Fortunately, behavioral economics suggests that we have bounded rationality, and even better, psychology and evolutionary biology suggest that non-rational altruism is hard-wired.

Maybe there’s hope.

Tuesday, April 03, 2007

Algorithmic trading changes markets (maybe)

Kyril Faenov alerted me that electronic exchanges feed back on themselves in unprecedented ways due to automated (or algorithmic) trading. Traders observe the market, and imagine a way to make money; their quants then write software to execute this trading strategy automatically. Running this code creates new, fast and extensive linkages between market processes.

Robin Sharp provides an excellent introduction to algorithmic trading in Automated Trading and the New Markets. For more information, see John Bates in Dr. Dobbs, and wikipedia.

For example, trader A (or their software) notices a periodic spike in the price of equity X; trader B may be trying to buy a very large position of X in small portions so as not to push up the price too much. Trader A (or the software) buys stock X just moments before each predicted spike, selling it to trader B at a higher price once B enters the market. Trader B (or their software) notices the run-up, and changes their buying rhythm to disrupt trader A. And on it goes. . .
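The first half of this duel can be sketched in a few lines. Everything here is a toy (the names, the timestamps, the one-second lead are all invented); the point is that A’s edge is nothing more than learning B’s rhythm from past spikes:

```python
def detect_period(spike_times):
    """Infer B's buying rhythm: the average gap between observed
    price-spike timestamps (seconds)."""
    gaps = [b - a for a, b in zip(spike_times, spike_times[1:])]
    return sum(gaps) / len(gaps)

def next_buy_time(spike_times, lead=1.0):
    """When A should buy: just ahead of the next predicted spike."""
    return spike_times[-1] + detect_period(spike_times) - lead

# Hypothetical data: B has been buying every 60 seconds.
spikes = [10.0, 70.0, 130.0, 190.0]
print(next_buy_time(spikes))  # A buys at t = 249.0, one second early
```

The second half of the duel is B noticing that spikes now precede its orders and randomising its schedule, at which point A’s model breaks and the loop continues.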

Such market interactions aren’t new, but software can execute the trades faster than humans can respond to them. Rather than duels between traders in real time, it becomes a duel between traders’ models of how the market functions. These are contests between world theories, where the theories themselves constitute the world. Kyril calls this the “reflectivity” of the market.

Douglas Hofstadter introduced the term “strange loop” in Gödel, Escher, Bach to describe a series of steps through a hierarchical system which take one back to the beginning. His new book I Am a Strange Loop uses this concept to explain self-awareness. In a New Scientist interview, Hofstadter says the brain, and the self, is like a smile because it’s a process rather than a thing. (Extending the metaphor: Software is to hardware as a smile is to a body.)

A market is observable through its behavior, that is, how it responds to stimuli. When the responses happen faster than humans can follow, and involve the integration of more variables than humans can handle on their own, the market is less a social interaction among people than an environment in which people act. A market is neither a place, nor a group of traders, nor the sequence of trades, nor a reflection of an outside commercial reality; it is the self-perpetuating process that involves all of them. The system’s behavior becomes a subconscious expression of the cumulative conscious plans of many people – subconscious because the mechanism is not directly available to human introspection.

Markets are examples of distributed cognition, that is, cognition which occurs in an ensemble of people and tools, rather than in a single brain; see Giere (2002, PDF) for a good survey. What’s striking about algorithmic trading is that the amount of cognition occurring outside human brains is growing rapidly.

Robin Sharp (ibid) points out that trading is increasingly hands-off, since humans can’t cope with the reaction times required.

“Whilst the theory behind program trading is fairly simple, the software reality is that program trading operates at a different time-scale to even the fastest human trading. . . It’s worth understanding that the brain brings experience and subjectivity to the table and software brings speed and objectivity to the table. For most types of trading experience beats speed but there is a lot of noise in the market in the sub-second region where the brain simply can’t compete with a computer. . . At the moment the volume of trading at these sub-second time scales is not great (less than 5%) and is held back because the coarse granularity of ticks and price data has been designed for human interaction. However this will change.”
He also notes that algorithms can exploit the capacity limitations of human traders: “A large number of trades are difficult for traders to juggle in their heads. When such large events occur in small time frames computers can predict irrational behaviour of traders, again for a profit.”

The kinds of problems that traders face are not only analytical or cognitive; they’re also social. Sharp notes the organizational impediments to certain kinds of trades: “One of the reasons traders don’t trade cross market is that they cannot price the instruments fast enough,” and – again – that traditional corporate management structures and regulatory structures impede cross-market trading.

Sharp tabulates the latencies in a trading cycle. The computer processing is 200 milliseconds, quick compared to 1,250 milliseconds for human perception, evaluation and response. The network latency is about 400 milliseconds. It’s worth noting that these network applications reinforce old geographic patterns rather than abolishing distance. Kyril tells me that the NYSE is doing a good business selling rack space at the Exchange to big trading houses; because every millisecond counts, their computers need to be on the same LAN as the Exchange or else they’ll lose to faster arbitrageurs.
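Sharp’s figures make the arithmetic of co-location obvious. A back-of-the-envelope budget (the LAN figure is my assumption, not Sharp’s):

```python
# Latency budget in milliseconds, using Sharp's figures.
HUMAN_REACTION = 1250   # perceive, evaluate, respond
MACHINE_COMPUTE = 200   # algorithmic decision
WAN_NETWORK = 400       # round trip over the public network
LAN_NETWORK = 1         # assumed round trip when co-located at the exchange

human_cycle = HUMAN_REACTION + WAN_NETWORK      # 1650 ms
remote_algo = MACHINE_COMPUTE + WAN_NETWORK     #  600 ms
colocated_algo = MACHINE_COMPUTE + LAN_NETWORK  #  201 ms

# A co-located algorithm laps a remote one nearly three times over,
# and a human nearly eight times over, per trading cycle.
```

On these numbers the network, not the computation, dominates a remote algorithm’s cycle, which is exactly why rack space at the Exchange is worth paying for.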

I’m struggling with the question of whether automated trading leads to a qualitative change in the markets, rather than simply a quantitative one. While algorithms that respond to changes in the market begin to constitute the market, this kind of loop applies to traditional human-only trading, too.

Sure, the feedback loop is much faster with automated trading. But is it a difference in degree, or a difference in kind? It’s different from the human perspective; stuff now happens too quickly to follow consciously, and the role of humans changes. However, it might simply be a change in time scale, not a change in process.

The increasing complexity of the market may be even more important. Arbitrage can link markets, which generates more correlated variables than traders can juggle in their heads. An automated trading strategy could link current and futures markets, on different exchanges (New York and London), for different instruments (equities and foreign exchange), and different data types (Reuters news feed, GOOG and MSFT stock prices, S&P500 index, the 15 minute volume weighted price of GOOG). As Robin Sharp points out: “Eventually arbitrage will force separate markets to re-evaluate their relationships. It only takes one successful arbitrage engine to forever link two previously unrelated markets. Anybody in the business will know how deeply this will be felt.”
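One of the inputs mentioned above, the 15-minute volume-weighted price, is simple to compute; a minimal sketch with hypothetical GOOG trades:

```python
def vwap(trades):
    """Volume-weighted average price over a window of (price, volume) trades."""
    total_value = sum(price * volume for price, volume in trades)
    total_volume = sum(volume for _, volume in trades)
    return total_value / total_volume

# Hypothetical 15-minute window of GOOG trades: (price, shares)
window = [(470.00, 100), (470.50, 300), (469.75, 200)]
print(round(vwap(window), 2))  # -> 470.17
```

Each such derived quantity is trivial on its own; the complexity Sharp is pointing at comes from strategies that correlate dozens of them across markets in real time.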

Perhaps integrating more streams of information in more complex and rapid ways creates a new kind of market causality. When humans were trading with each other, it was a social process. Now, from the human perspective, it’s more like experimenting on the world than dealing with people. “Hard intangibles” come into play because this new world is not the one humans evolved in. Humans still set the goals and strategies, but the parameters of this world interact in unexpected ways. And genetic algorithms will lead to algorithmic trades that are profoundly alien to human intuition. Trading is another activity, like large software projects, where the abstractions we’ve created are beginning to outstrip our ability to understand them.


Ronald Giere, “Scientific cognition as distributed cognition,” in The Cognitive Basis of Science, Cunningham, Stich & Siegal (eds.), Cambridge University Press, 2002

Sunday, April 01, 2007

Is Dampé Dead?

Richard “Dampé” Denton is (was?) the 15 year old creator of Ocarina of Time 2D, a much-anticipated Legend of Zelda title. He was supposed to have died on the 23rd March (or was it the 21st?), but Squidnews reveals why the reports of his death are a hoax. The reigning theory is that Dampé wanted to escape from the pressure of anxious fans by arranging his supposed demise.

I was struck by how easy it was for the writer to do his fact checking: searching the Fairfax classifieds for reports of death under the name Denton and, most interestingly, looking up Victoria’s Transport Accident Commission road toll statistics tool, searchable by date, location, victim and injury type.

This is a great example of how findable one – or one’s lies – is on the web, and of the pressures of constant visibility.