Friday, March 23, 2007
The Times describes how Japanese puzzle solvers tinker with and improve puzzles, presumably to make them just hard enough to be a challenge. If puzzles are too easy, it’s boring; if they’re too hard, there’s no point attempting them. We humans can clearly create puzzles we can’t solve ourselves. However, some of these “puzzles” aren’t games we can ignore if they’re too hard, or simplify at will. They’re critical artifacts and infrastructure like software, financial systems, and webs of trade.
Thursday, March 22, 2007
Jeffrey Adams, the project's leader and a math professor at the University of Maryland, is reported as saying, "To say what precisely it is is something even many mathematicians can't understand." The scale of E8 boggles the mind: all the information about E8 and its representations is 60 gigabytes in size.
For more on E8, see http://aimath.org/E8/. Curiously, the representation on that page is a semi-lattice - cf. my post on Christopher Alexander's application of this mathematical concept to the complexity of cities.
The language used to describe this topic is strikingly physical: "The classical groups A1, A2, A3, ... B1, B2, B3, ... C1, C2, C3, ... and D1, D2, D3, ... rise like gentle rolling hills towards the horizon. Jutting out of this mathematical landscape are the jagged peaks of the exceptional groups G2, F4, E6, E7 and, towering above them all, E8. E8 is an extraordinarily complicated group: it is the symmetries of a particular 57-dimensional object, and E8 itself is 248-dimensional!"
[Thanks to Scott Forbes for alerting me to this result.]
Wednesday, March 07, 2007
Pam: I really like the idea of exploring cognitive hardness and cognitive capacity. It’s fascinating. I do wonder if the challenge of parallel computing is cognitive capacity per se or something more interesting. Which is probably your point. Doh. I wonder what else/something more is going on. Certainly since we probably haven’t thought/tested/experimented enough about cognitive capacity, then we haven’t yet thought about ways to compensate for/extend it.
Pierre: I haven’t gone much beyond the cognitive capacity yet. At this point I’m trying to narrow the focus, but I’m sure you’re onto something. The “something” may be related to cognitive limitations but in a more general way, e.g. cultural constructs.
Pam: My gut says that the field of computer science is severely limited by the people who practice it, and their brains – the mindsets, the particular kind of intelligence that they have, their personality traits. Do their brains work differently, I wonder, than visual artists or doctors or lawyers or anyone else?
Pierre: Howard Gardner did very influential work decades ago on multiple intelligences that’s relevant here. People’s brains evidently do work somewhat differently. One of the tricky issues in my project is separating individual variation from generic Homo sapiens constraints. There’s a big difference between Einstein and the village idiot, and between Einstein and Picasso, but a bigger difference between any of them and a gibbon.
Pam: Do we have different types of consciousness? Does that bring anything to bear to the problem? What is cognitive capacity, really? Is it absolute? Is it really the number of connections that we can hold in our heads at one time, or something else? Is it a spinning plate problem, or something else?
Pierre: I’m sure there are different types of consciousness, because consciousness is compound. The current scientific consensus seems to be that consciousness is constructed concurrently in many brain areas, and in fact that most of the interesting thinking we do is pre-conscious. When it comes to “capacity,” that’s just a (collection of) metrics, e.g. the number of concurrent independent variables one can handle. Back to Gardner’s multiple intelligences: he showed pretty conclusively that IQ tests just measured linguistic and mathematical facility, and that there were at least five other important skills that IQ tests didn’t track.
Pam: I also wonder what the interplay of cognitive capacity and consciousness might be. Should we think of one as a subset of the other?
Pierre: You are asking the big questions, aren’t you? Speaking from almost complete ignorance, I’d venture that they’re partially overlapping. Animals that have less cognitive capacity than we do are also conscious, but not all capacity is in consciousness.
Pam: Has computer science been limited by the models it’s used to develop software and computer science approaches and concepts? For example, would a more consilient approach make big breakthroughs and paradigm shifts? What if dev teams/architects/etc included a broader range of types of thinkers?
Pierre: More diverse teams may lead to more breakthroughs, but will also have higher coordination overheads. Part of the trouble with “wicked problems” is the social complexity engendered by multiple stakeholders who can’t agree on the problem, let alone the solution.
Pam: If “intangibility and flexibility of software presents a qualitatively different cognitive challenge to most (all?) previous kinds of engineering,” then I guess we’ll need a different kind of engineer, won’t we? Maybe we should stop thinking about it as engineering at all.
Pierre: Perhaps, but not necessarily. We may just need to train them differently, give them specific tools, and manage our expectations.
Pam: Do the brains of various language speakers work differently from each other? Do the brains of Indian, Chinese, or any other nationality of developer work differently than European or American ones? Female vs. male developers? Do the brains of deeply consilient thinkers work differently? Are some cultures more cognitively hard than others? Does each culture have its own flavor of cognitive hardness?
Pierre: I very much doubt this. Sure, there are cultural differences in math and science performance (much greater than the gender differences which are de minimis, it turns out), but my assumption is that humans don’t vary that much. That said, cultures may have found different work-arounds, and we can surely learn a lot from looking across cultures, just as we can learn (as you suggested above) by looking at people with different aptitudes.
Pam: My bias is that if we leave resolving why parallel computing is hard to those who live naturally in that world it will take longer and be less satisfactory.
Pierre: Yes indeed. If we can answer “why is programming hard?” we’ll also be able to cast some light on questions like “why is IPR hard?” and “why is international policy hard?”
Pam: Your proposed threads nag at me somehow. They sound logical, but incomplete. Ah, maybe because they’re all couched in terms of limitations and difficulties, and not the opposite. Trying on both approaches might give more interesting and, dare I say, valuable results.
Friday, March 02, 2007
While each side says the other is included in its approach, the terms function as shibboleths.
Commons
- collective, sharing, relationships, inclusion
- public goods
- cultural studies, academics
- generates positive externalities
- pro-government, state management, anti-corporation, liberal
- open, shared
- suspicious of profit, trusts in altruism
- feels threatened by the market: the “second enclosure”, concentration of ownership
- unlicensed spectrum
Markets
- competitive, exclusion
- private goods
- economics, business
- worries about the burden of negative externalities being taxed; focus on internalities
- anti-government, pro-corporation, conservative, libertarian
- closed, proprietary
- trusts in profit, suspicious of altruism
- feels threatened by the loss of property rights implicit in commons rhetoric: “theft”
- licensed spectrum
Commons and markets seem to function both as frames and as signaling devices. They’re frames because each highlights certain aspects and suppresses others; they function as signals because someone who talks in terms of (say) commons will be trusted on a broader range of socio-political issues. Commons is code for signaling a left-leaning political perspective; markets, for the right.
Conceptually they complement each other; commons and markets are like yin and yang. Each needs the other:
Markets need commons
- public goods (defense, clean air) context in which market is embedded
- common knowledge as basis for progress – incentive to publish inherent in limited time patent monopolies
Commons need markets
- farming example: raise sheep on common ground, but sell meat/wool in a market; ditto for lobster fishermen
- academics creating a knowledge commons are paid out of surplus wealth generated by market capitalism (taxes, foundations)
One can see the Internet from either perspective
- common protocols, languages
- commercialization ex VC investment: Yahoo, Google, Amazon, YouTube
That raises the question of what their superset might be. A possible containing frame for commons and markets is “decentralized coordination.” This is itself part of another dichotomy: centralized vs. decentralized coordination. An example from spectrum policy: wonks who argue about unlicensed vs. licensed allocations would agree that either is an improvement on the traditional “command-and-control” system of administration. Saussure may have been right that meaning comes from difference; in that case, there will never be a single non-contested perspective.
I’m most interested in the nexus: how do commons and markets complement each other, and how do you calculate how much of each you need? To what degree can one formalize the interdependence of markets and commons? One can do a simple calculation for real estate to show that a non-zero percentage of public parks increases property values. I’d love to do the same for spectrum, but haven’t figured out how yet.
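The real-estate calculation can be sketched as a toy model. All the numbers and the functional form below are illustrative assumptions, not data: suppose each private lot’s value rises with the fraction of land kept as park, while the park land itself is never sold. Total private value is then maximized at a park fraction strictly between zero and one.

```python
from math import sqrt

# Toy model (illustrative assumptions, not data): per-lot value rises
# with the park fraction p, e.g. v(p) = 1 + 0.5 * sqrt(p), but the
# park land itself generates no sale value. Total private value is
# therefore (1 - p) * v(p): more park raises lot prices but shrinks
# the amount of land that can be sold.
def total_value(p: float) -> float:
    return (1 - p) * (1 + 0.5 * sqrt(p))

# Search a grid of park fractions for the value-maximizing one.
grid = [i / 1000 for i in range(1001)]
best = max(grid, key=total_value)
print(best, total_value(best))
```

Under these assumptions the optimum lands at a small but strictly positive park fraction, which is the shape of the claim above: some commons increases the total value of the market.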
I cut my teeth on the question of mental models used in spectrum policy and cognitive radio research (PDF). I’m now preparing to survey the literature for evidence that people find working with digital artifacts (computers, software, web sites, social network spaces, etc.) difficult because of the differences between the dirt world and the digital world.
The medium-term goal is to answer the question “Why is (parallel) programming hard?” The dominant conversations about programming hardness today are either technical (e.g. complexity classes) or social (e.g. “wicked problems”). I’m interested in a third perspective: cognitive hardness, which comes in two flavors. Bruce Schneier recently published a good paper on how cognitive biases can affect the security of computer systems. I’m more interested in cognitive capacity than cognitive biases, e.g. the number of independent variables we can keep in our head at the same time. These limitations are presumably at the root of folklore about the right (small) number of arguments in function calls, and the number of modules in a project. They presumably also guide programmers’ preferences for simple organizing principles like lists and trees (rather than, say, semi-lattices).
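The function-argument folklore can be made concrete with a small sketch (the names and numbers here are hypothetical, chosen only for illustration): a signature with many positional parameters forces the reader to track each one independently, and the standard work-around is to group related parameters into named structures, reducing the number of chunks held in working memory.

```python
from dataclasses import dataclass

# Seven positional parameters: the caller must hold seven independent
# values in mind at once, near the limits of human working memory.
def render_raw(x, y, width, height, color, border, opacity):
    return f"rect({x},{y},{width},{height},{color},{border},{opacity})"

# Grouping related parameters into named structures reduces the
# number of independent chunks the caller must track.
@dataclass
class Rect:
    x: int
    y: int
    width: int
    height: int

@dataclass
class Style:
    color: str = "black"
    border: int = 1
    opacity: float = 1.0

def render(rect: Rect, style: Style) -> str:
    return (f"rect({rect.x},{rect.y},{rect.width},{rect.height},"
            f"{style.color},{style.border},{style.opacity})")

# Two chunks instead of seven at the call site:
print(render(Rect(0, 0, 10, 20), Style()))
```

The behavior is identical; only the cognitive load at the call site changes, which is exactly the kind of capacity limit the folklore encodes.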
I’m not dismissing social issues; it’s clear that they are at the root of many failures in software projects. However, these problems are by no means unique to programming. I believe that the intangibility and flexibility of software presents a qualitatively different cognitive challenge to most (all?) previous kinds of engineering.
It gets even harder and more important as we turn to parallel programming. Limitations on human working memory will make grasping parallel programs particularly difficult. Programming tools are surely part of the solution, but tools have typically automated and accelerated activities that humans have already mastered. It's an open question whether we can conceive of 1000-core parallel processing sufficiently well to create tools.
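One way to make the working-memory claim concrete: even two threads of n independent steps each can interleave in C(2n, n) distinct orders, far more than anyone can enumerate mentally. A back-of-envelope sketch:

```python
from math import comb

# Number of distinct interleavings of two threads of n steps each:
# choose which n of the 2n global steps belong to the first thread.
def interleavings(n: int) -> int:
    return comb(2 * n, n)

for n in (2, 5, 10, 20):
    print(n, interleavings(n))
```

Ten steps per thread already yield 184,756 possible executions, and with more threads or more steps the count explodes combinatorially, which is why reasoning about all possible schedules by inspection is hopeless.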
The work I want to do on programming has a number of threads:
- Review neuroscience and cog psych literature for clues about limitations that would apply to programming
- Interview programmers to understand what they find difficult, and tie it to the literature review
- Do some psych (and ideally neuro) experiments on programmers to validate hypotheses
- Evaluate current programming methods and tools against the cognitive limitations we’ve identified to find matches and gaps
- Identify most promising areas for improving programming by taking cognitive limitations into account