"in this world, there is one awful thing, and that is that everyone has their reasons" --- attrib. to Jean Renoir (details in the Quotes blog.)
Thursday, January 22, 2009
Evolved to revere teachers
I’ve been keeping an ear on an interview with an Aikido teacher that S. has been listening to. Shaner Sensei frequently points out and marvels at the insights and skills of the teacher who founded this particular branch of the art.
The meditation technique I’m learning is also built around a charismatic teacher, though in spite of himself: he keeps rejecting “gurudom” and focuses attention on the practice instead. But this teacher, in turn, deeply and publicly reveres the teachers who preceded him.
A predisposition to teacher-reverence is probably in-born. It’s easy to construct an evolutionary biology Just So Story to explain it. Learning is clearly adaptive, and our genes encourage us to engage in it by making learning pleasurable. Since one learns better when one trusts the teacher, our genes predispose us to revere teachers and put them on pedestals.
Like all behaviors, this has risks as well as benefits. Along with the ability to surrender ourselves to a teaching that brings benefit, comes a proclivity to give ourselves over to people who lead us into evil or oblivion. The problem is that it’s hard to know where a path leads when embarking on it. If one really knew the end-point, one would already have completed the journey. Teacher reverence will therefore continue to beckon us onto both good paths and bad.
This argument goes through with minimal changes for leader-worship. It probably also has a basis in evolutionary fitness, and is also double-edged. I wonder whether the two are related; both are based in respect for a leader, though the purposes (learning and inter-group conflict, respectively) are different.
Note: The image above is a statue of the founder Swami Vishnu-devananda at the Sivananda Ashram Yoga Ranch. It comes from a slide show accompanying a story on spiritual retreats in upstate New York by Shivani Vora, “The Simple Life”, The New York Times, 12 December 2008.
FCC Reform paper
My recent posts on reforming the FCC (here, here, here and here) culminated in a short paper for a conference on this topic in DC on 5 January 2009. I also spoke on one of the panels (video).
Sunday, January 18, 2009
Voting within the Margins
Al Franken seems (for now, at least) to have won the Minnesota Senatorial election by 225 votes out of a total of about 3 million ballots cast: a margin of roughly 0.0001, or 0.01%.
This margin of error is tiny; it's of the same order as the difference in length of your car between a day that's freezing and one that's in the 80's. (See here for steel's coefficient of thermal expansion if you want to check my math.)
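A quick back-of-the-envelope check of both figures, as a Python sketch (the expansion coefficient and the temperatures are typical assumed values, not anything exact):

```python
# Back-of-the-envelope check of the two relative differences compared above.
# The expansion coefficient and temperatures are illustrative assumptions.

votes_margin = 225
ballots_cast = 3_000_000          # "about 3 million"
relative_margin = votes_margin / ballots_cast
print(f"Relative margin of victory: {relative_margin:.6f} ({relative_margin:.3%})")
# -> roughly 0.000075, i.e. about 0.01%

alpha_steel = 12e-6               # per degree C, typical for steel
t_freezing_c = 0.0                # a freezing day, 32 F
t_warm_c = 27.0                   # a day in the 80s F, about 27 C
relative_expansion = alpha_steel * (t_warm_c - t_freezing_c)
print(f"Relative change in car length: {relative_expansion:.6f} ({relative_expansion:.3%})")
# -> roughly 0.0003, i.e. a few hundredths of a percent: the same order of magnitude
```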
This is so small that the result is a toss-up for all practical purposes. Presumably, however, society cannot accept that election results are random; we have to pretend that certainty can be had.
The margins of error of the voting process are sometimes larger than the margin of victory of the winner; this was certainly the case in Minnesota. Philip Howard of the University of Washington found seven such cases in the 2004 elections ("In the Margins: Political Victory in the Context of Technology Error, Residual Votes, and Incident Reports in 2004," 1/6/2005, PDF). He used three ways of thinking about error in an election: technology error, residual votes, and incident reports. For example, Howard cites a 2000 Caltech/MIT study which found that the error rates for a large variety of vote counting processes were all 1% or more. (Recall that the margin of victory in Minnesota was one-hundredth of this: 0.01%.) He concludes: "In each case, the electoral outcome was legitimated by elections officials, not the electorate, because in very close races the voting process cannot reveal electoral intent."
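To get a feel for how badly a 1% counting error swamps a 0.01% margin, here is a toy Monte Carlo sketch (my own illustration, not from Howard's paper; it assumes, purely for simplicity, that each ballot is independently miscounted for the other candidate with probability 1%):

```python
# Toy Monte Carlo: how often does a count with a 1% per-ballot error
# still put the true 225-vote winner ahead? Purely illustrative; the
# error model (independent flips to the other candidate) is an assumption.
import numpy as np

rng = np.random.default_rng(0)

n_ballots = 2_900_000      # roughly the Minnesota total
true_margin = 225          # the true winner's lead, in votes
p_error = 0.01             # assumed per-ballot miscount probability (~1%)
n_trials = 10_000

votes_winner = (n_ballots + true_margin) // 2
votes_loser = n_ballots - votes_winner

# Ballots miscounted away from each candidate in each simulated count
flips_from_winner = rng.binomial(votes_winner, p_error, n_trials)
flips_from_loser = rng.binomial(votes_loser, p_error, n_trials)

counted_winner = votes_winner - flips_from_winner + flips_from_loser
counted_loser = votes_loser - flips_from_loser + flips_from_winner
share_correct = np.mean(counted_winner > counted_loser)
print(f"Count names the true winner in {share_correct:.1%} of trials")
# With these assumptions, only about three-quarters of the simulated counts
# crown the right candidate: closer to a coin toss than to certainty.
```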
In Minnesota, with all the recounts, many of those errors were removed. But there are many kinds of randomness in an election beyond measurement error: someone absent-mindedly ticking the wrong box, someone else deciding at random not to vote that day, or someone mistaking one candidate for another. In the end, we just don't know the answer, and a coin toss (whether overt or hidden) is a fine way to decide the result. If it was a bad choice, the electorate can throw the bum out next time.
Saturday, January 17, 2009
William James, consciousness, and the non-existence of spectrum
We've just started another wonderful Teaching Company course: Daniel Robinson on Consciousness and Its Implications. He quoted from William James's essay Does "Consciousness" Exist? (1904), which reminded me of my spectrum preoccupations:
"I believe thatThe distinction between entity that doesn't exist, and a function that does, applies equally well to spectrum. (I outlined my argument regarding spectrum in Newton, Leibnitz and the (non?)existence of spectrum; for more detail, see my article De-situating spectrum: Rethinking radio policy using non-spatial metaphors.) To mash-up William James:consciousness,when once it has evaporated to this estate of pure diaphaneity, is on the point of disappearing altogether. It is the name of a nonentity, and has no right to a place among first principles. ... For twenty years past I have mistrustedconscousnessas an entity: for seven or eight years past I have suggested its non-existence to my students, and tried to give them its pragmatic equivalent in realities of experience. It seems to me that the hour is ripe for it to be openly and universally discarded.
"To deny plumply thatconsciousnessexists seems so absurd on the face of it — for undeniablythoughtsdo exist — that I fear some readers will follow me no farther. Let me then immediately explain that I mean only to deny that the word stands for an entity, but to insist most emphatically that it does stand for a function." (My italics.)
To deny plumply that "spectrum" exists seems so absurd on the face of it — for undeniably "signals" do exist — that I fear some readers will follow me no farther. Let me then immediately explain that I mean only to deny that the word stands for an entity, but to insist most emphatically that it does stand for a function.In other words, the proper subject of both psychology and wireless regulation is behavior and its results. This becomes all the more important as radios become more sophisticated.
A simple example: in the white space proceeding, the FCC specified different maximum transmit powers for different kinds of unlicensed radios, but required that they all avoid wireless microphones using the same detection sensitivity. This doesn't make engineering sense: the radius of interference for weak radios is smaller, so they do not need to detect microphones at the same range as strong radios, and their detection doesn't have to be as sensitive. A more efficient alternative would be for the unlicensed radios to vary their detection sensitivity depending on their transmit power. The "usable spectrum" is therefore a function of the behavior of the radios concerned, and not just of frequencies.
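To make the point concrete, here is a rough sketch of the alternative rule using a free-space propagation model; every number in it (frequency, microphone power, protection threshold) is an assumption for illustration, not an FCC figure:

```python
# Illustrative only: a free-space model relating a white-space device's
# transmit power to the microphone-detection sensitivity it actually needs.
# All parameters below are assumptions for the sketch, not regulatory values.
import math

FREQ_MHZ = 600.0            # assumed TV-band frequency
MIC_EIRP_DBM = 10.0         # assumed wireless-microphone transmit power
PROTECT_DBM = -85.0         # assumed maximum interference a mic receiver tolerates

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB (distance in km, frequency in MHz)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

def interference_radius_km(tx_dbm: float) -> float:
    """Distance at which this device's signal falls to the protection level."""
    # Solve tx_dbm - fspl(d) = PROTECT_DBM for d.
    loss_needed = tx_dbm - PROTECT_DBM
    return 10 ** ((loss_needed - 32.44 - 20 * math.log10(FREQ_MHZ)) / 20)

def required_sensitivity_dbm(tx_dbm: float) -> float:
    """Weakest microphone signal the device must be able to detect:
    the mic's level at the edge of this device's own interference radius."""
    d = interference_radius_km(tx_dbm)
    return MIC_EIRP_DBM - fspl_db(d, FREQ_MHZ)

for tx in (30.0, 20.0, 10.0):   # dBm: e.g. fixed vs. portable vs. low-power device
    print(f"{tx:>4.0f} dBm device: must detect mics down to "
          f"{required_sensitivity_dbm(tx):.1f} dBm")
# The weaker the transmitter, the smaller its interference radius and the
# less sensitive (higher threshold) its microphone detection needs to be.
```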
In a similar vein, the boundary between "spectrum licenses" is not really -- or not just -- a frequency, as it might at first sight appear. (Let's leave aside geographical boundaries.) There is no sharp edge, with a radio allowed to transmit any power it wishes "inside its spectrum", and none at all "outside". Instead, there's a gradation of decreasing power for increasing frequency difference. There isn't a boundary where one thing ends and another begins; rather, the boundary is a behavior. This underlines that spectrum, like consciousness for William James, isn't an entity, but rather a function.
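A toy emission mask makes the "boundary as behavior" point concrete: the permitted power steps down with distance from the band edge rather than stopping at a wall. The breakpoints below are invented for illustration; real masks vary by service.

```python
# Toy spectral emission mask: the "edge" of a licence expressed as behavior,
# a permitted-power curve over frequency offset, rather than a hard wall.
# The breakpoints and levels are invented for illustration.

IN_BAND_LIMIT_DBM = 30.0   # assumed maximum in-band power

# (offset beyond the band edge in MHz, required attenuation in dB)
MASK_STEPS = [
    (0.0, 0.0),     # at the band edge: full power
    (0.5, 13.0),    # within 0.5 MHz outside: at least 13 dB down
    (1.0, 25.0),    # within 1 MHz: at least 25 dB down
    (5.0, 43.0),    # beyond that: at least 43 dB down
]

def max_power_dbm(offset_mhz: float) -> float:
    """Maximum permitted power at a given offset beyond the band edge."""
    attenuation = MASK_STEPS[-1][1]
    for edge, atten in MASK_STEPS:
        if offset_mhz <= edge:
            attenuation = atten
            break
    return IN_BAND_LIMIT_DBM - attenuation

for off in (0.0, 0.3, 0.8, 2.0, 10.0):
    print(f"{off:>4.1f} MHz past the edge: at most {max_power_dbm(off):.0f} dBm")
# There is no single frequency where transmission simply stops; the licence
# boundary is a rule about how power must fall off with frequency offset.
```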
Sunday, January 11, 2009
Factoid: Vista is worth negative-$150
According to Silicon Valley Insider, Dell is now charging customers $150 to downgrade from Vista to Windows XP. According to the story, Dell started charging customers an extra $20 to $50 for a downgrade to Windows XP in June, and by October, Dell's XP premium was up to $100.
Not a happy product if people will pay large amounts to avoid having to buy it.
And I thought that old joke about "first prize is a week in Palookaville, and second prize is two weeks there" was just a joke. . .
Saturday, January 10, 2009
Forever blowing bubbles
In a Wall Street Journal op-ed (PDF) Paul Rubin* suggests that bubbles and crashes are a natural part of capitalist markets. More to the point, the very factors that have recently increased the efficiency of markets – notably the internet – have also facilitated the formation of bubbles.
Technology is double-edged, as always: the internet facilitates both the functioning and malfunctioning of markets. Of course, the difference between function and malfunction is in the eye of the beholder. As John Sterman famously said, “There are no side effects—only effects.”
While this is a perennial problem, the internet may have caused a qualitative change in the degree of interconnection, which leads to significantly less resilience. Note the paradox: The internet is more resilient as a communication system, but it causes the systems that use it to be less resilient.
The corollary is that regulators face an impossible task: one can’t eliminate the downsides of the internet without simultaneously eliminating the benefits.
My study of the complex adaptive systems literature leads to the same conclusion:
- more interconnected systems are less resilient (see the sketch after this list)
- crashes are healthy, because they allow new entrants to flourish
- the regulatory task is not to avoid crashes (this just makes the eventual correction worse) but to manage them
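The first bullet can be illustrated with a toy simulation (the network model, spread probability, and sizes are my own assumptions, not results from the op-ed or from the literature): build random networks of institutions with increasing average connectivity, fail one node at random, and let each failure spread to each neighbour with a fixed probability.

```python
# Toy illustration of "more interconnected systems are less resilient":
# random networks of institutions, one initial failure, and a fixed chance
# that each failed node drags down each of its neighbours. The model and
# its parameters are assumptions for the sketch, not results from the literature.
import random

def random_network(n: int, avg_degree: float) -> list[set[int]]:
    """Erdos-Renyi-style random graph as adjacency sets."""
    p_edge = avg_degree / (n - 1)
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p_edge:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def cascade_size(adj: list[set[int]], p_spread: float) -> int:
    """Fail one random node; each failure spreads to each neighbour with p_spread."""
    n = len(adj)
    failed = {random.randrange(n)}
    frontier = list(failed)
    while frontier:
        node = frontier.pop()
        for neighbour in adj[node]:
            if neighbour not in failed and random.random() < p_spread:
                failed.add(neighbour)
                frontier.append(neighbour)
    return len(failed)

random.seed(1)
N, P_SPREAD, TRIALS = 300, 0.05, 50   # assumed sizes and contagion probability
for avg_degree in (4, 16, 40):
    sizes = []
    for _ in range(TRIALS):
        adj = random_network(N, avg_degree)
        sizes.append(cascade_size(adj, P_SPREAD))
    mean_frac = sum(sizes) / (TRIALS * N)
    print(f"average degree {avg_degree:>2}: mean cascade = {mean_frac:.1%} of the network")
# Sparse networks contain the shock; densely coupled ones let it spread widely.
```

The individual links are no weaker in the denser networks; it is the coupling itself that turns an isolated failure into a system-wide one.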
[*] Paul Rubin is a professor of economics and law at Emory University and a senior fellow at the Technology Policy Institute. He served in a senior position in the Federal Trade Commission in the 1980s.