Tuesday, August 25, 2009

Spectrum nominalism

James Franklin’s discussion of nominalism vs. realism on the Philosopher’s Zone struck me as relevant to my obsession with “spectrum” as a concept.

“To be realist about some concept is to say that there is such a thing, and it's not just made up by us, whereas to be nominalist, is to say it's just a way of speaking of ours, from the Latin 'nomen' word, just an empty sign. So for example, in the case of forces I was arguing for realism about forces. When you felt them by pressing the fingers together, you would naturally conclude from that that there is such a thing as forces. On the other hand you'd never be tempted to do that with something like the average Londoner. Scientists tell you that the average Londoner has 2.3 children; you're not tempted to think that that's anything except a way of speaking, that there's some individual entity called the average Londoner, that has 2.3 children. So it would be natural to take a realist view of forces, but a nominalist view of the average Londoner. There's this question about all the entities talked about in science. A classic case is numbers, so that's a very difficult one. Are there such things out there as numbers or are they just a way of speaking about the divisibility of things into parts or something?”
I’m a nominalist about spectrum: I believe “spectrum” is just a way of speaking and does not have a referent in the world. Most people, on the other hand, seem to be knee-jerk realists – “Of course there's such a thing as spectrum!” – though often when you start digging they become nominalists: “Of course, I don’t just mean frequency, there are lots of other factors…”

This fine philosophical distinction matters: if one takes the realist position, one behaves as if there is a resource (spectrum) to be divided up and allocated, which leads ineluctably to radio licenses defined by hard frequency boundaries.

The nominalist perspective offers another way of thinking about the situation – for example, as coordinating the operation of radio systems. From a nominalist perspective, the “coordination” view is just as (in)valid as the “resource” view, and radio licenses don’t have to be defined primarily in terms of frequency and geography.

(See also my earlier post “Newton, Leibnitz and the (non?)existence of spectrum”. There the distinction was between the “absolutist” position that time and space are real objects in themselves, and the “relationalist” position that they are merely orderings upon actual objects – orderings that do not exist independently of the mind that is making them.)

Saturday, August 15, 2009

How many poor people are there?

Scott Forbes linked me to a thought-provoking 2005 article titled "How not to count the poor" by Sanjay Reddy and Thomas Pogge at Columbia University (PDF). The bottom line is that the simple question of how many poor people there are in the world is surprisingly hard to answer.

Reddy & Pogge argue that "[t]he World Bank’s approach to estimating the extent, distribution and trend of global income poverty is neither meaningful nor reliable. The Bank uses an arbitrary international poverty line that is not adequately anchored in any specification of the real requirements of human beings. Moreover, it employs a concept of purchasing power "equivalence" that is neither well defined nor appropriate for poverty assessment. . . In addition, the Bank extrapolates incorrectly from limited data and thereby creates an appearance of precision that masks the high probable error of its estimates." Furthermore: "There is some reason to think that the distortion is in the direction of understating the extent of income poverty."

(A rebuttal by Martin Ravallion at the Bank can be found here.)

Their alternative: construct poverty lines in each country using a "common achievement interpretation". Such poverty lines would use the local costs of achieving universal, commonly specified ends like being adequately nourished. (Ravallion argues this is pretty much what countries already do to create national poverty lines.)

Reddy & Pogge argue that such poverty lines "would have a common meaning across space and time, offering a consistent framework for identifying the poor. As a result, they would permit of meaningful and consistent inter-country comparison and aggregation."

The catch seems to be that such an approach requires one "to carry out on a world scale an equivalent of the poverty measurement exercises conducted regularly by national governments, in which poverty lines that possess an explicit achievement interpretation are developed." This is difficult politically, since a common core conception of poverty will have to be agreed, and financially, since local poverty commissions in each country would have to be funded to construct and update poverty lines over time.

The authors don't claim that their metric would lead to substantially different, or better, policies. Better then, perhaps, to spend money on poverty-focused development assistance rather than improving the metrics. However, the Bank should be more honest about the flakiness of its numbers by at least not reporting them "with six-digit precision".

Saturday, August 01, 2009

Is it, or isn’t it?

Humans are inveterate classifiers. We can’t help ourselves, it seems: we just have to put things in hard-edged categories. Computing might help to blur the edges in a useful way.

An update on the Pluto controversy in New Scientist is a case in point. Discoveries of exoplanets and the anticipation of Earth-size objects in the Kuiper belt make the argument increasingly irrelevant, yet even professional astronomers seem caught up in arguing for one definition of planets or another.

Sensory systems like ours are complicated webs of classifiers: whether objects are moving or still, whether movements are animal-like or not, whether something is a face, whether a sound is speech or music, whether someone is a member of our group or not, and endlessly on. Categorization is innate and unavoidable.

But once embedded in culture, it can quickly spiral out into fraught territory. Problems arise because classification has consequences, often monetary, often political. Is that bond AAA or AA? Is that car a clunker or not? Is so-and-so in a special group, or not? Is that judge biased?

The difficulty arises because there are so many parameters that could be used for any classification; people argue about which parameters should count. Does roundness a planet make, or size, or not orbiting around another one, or having swept its orbit clear of other rocks? Cognitive limitations (the four-or-less rule, see e.g. Halford et al. (2005), “How many variables can humans process?” Psychological Science, 16, 70-76) mean that we end up picking a few criteria from the many – too few. And then we require that each criterion must yield a yes/no result, which even for hard science classifications can be contentious: what does it mean for a planet candidate to have “a nearly round shape”?

Computing can help by allowing many more criteria into the mix, and allowing them to vary continuously. This is an application of Edward Tufte’s design strategy “to clarify, add detail,” which he introduces in Envisioning Information (1990, p. 37) with the example of The Isometric Map of Midtown Manhattan. Human nature means we may be a little uneasy with the result, but perhaps we can learn to live with it; most people are comfortable nowadays with weather forecasts that say there’s a 50% chance of rain tomorrow (although many may not actually understand what it means ...).

Hiding the criteria has its own dangers. As Bowker and Star argue in Sorting things out: classification and its consequences (1999), any classification encodes a world view, and even “simple” classification systems succeed in making themselves invisible.

Still, with a little more computing we could, in response to the question “Is it, or isn’t it?” answer in a rigorous way, “Ish.” Computers can handle composing dozens or hundreds of continuous criteria in ways our (conscious) brains cannot.
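To make the “Ish” idea concrete, here’s a minimal sketch in Python of a soft classifier for planethood. The criteria, thresholds, and equal weighting are all made up for illustration – this is not the IAU definition, just one way of composing continuous criteria into a degree of membership rather than a yes/no verdict.

```python
def smooth_step(x, low, high):
    """Map x onto [0, 1]: 0 at or below `low`, 1 at or above `high`,
    and a linear ramp in between - a continuous criterion instead of
    a hard yes/no threshold."""
    if x <= low:
        return 0.0
    if x >= high:
        return 1.0
    return (x - low) / (high - low)

def planet_score(roundness, diameter_km, orbit_cleared_fraction):
    """Combine several continuous criteria into one degree of
    'planet-ness' in [0, 1]. Thresholds and weights are invented."""
    criteria = [
        smooth_step(roundness, 0.80, 0.95),              # how nearly round?
        smooth_step(diameter_km, 1000, 3000),            # how big?
        smooth_step(orbit_cleared_fraction, 0.50, 0.99), # orbit swept clear?
    ]
    # Simple average; a real system might weight criteria differently,
    # or use a different combining rule entirely.
    return sum(criteria) / len(criteria)

# Roughly Pluto-like inputs: very round, ~2377 km across,
# but sharing its orbit with the Kuiper belt.
score = planet_score(roundness=0.99, diameter_km=2377,
                     orbit_cleared_fraction=0.1)
print(f"Is it a planet? {score:.2f} -- ish.")
```

The point is not the particular numbers but the shape of the answer: instead of forcing each criterion through a yes/no gate and arguing over where the gate sits, every criterion contributes a degree, and the composite score can be reported as-is.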