Sunday, December 22, 2019


I think of experiment as a reasoned procedure to learn something by trying out a series of variations. (Wikipedia notes the importance of "repeatable procedure and logical analysis," for example.) I enjoy baking beskuit (South African rusks; think Boere-biscotti) because it's an endless experiment: with every batch, I try something new.

I've recently had a few batches, though, where the experiment was inadvertent; in one case I forgot to add any sugar at all (edible, but didn't rise very much), and in another I'd run out of plain flour so I used sieved whole-wheat flour (success; will try the sieving trick again). It wasn't trial-and-error, just plain error, but it still yielded interesting results. I'm sure there's a word for that -- the phenomenon is certainly well known, cf. Fleming and penicillin -- but since I don't know it, I'll just call them unsperiments.

Experiment is at the heart of the scientific method. However, many people dislike trial and error. It's rather vulgar compared to theory, and doesn't explain as much; it gives you an answer without imparting much understanding. A striking recent example was Jonathan Zittrain's jeremiad about "The Hidden Costs of Automated Thinking" in the New Yorker. He inveighs against the triumph of data over theory in machine learning: "This approach to discovery—answers first, explanations later—accrues what I call intellectual debt."

Even though most machines learn by running lots of trials against data, I'm sure there are groups working on ways to include explanatory theories in AI. However, "doing" unsperiments is harder, since (if Pasteur is right) chance requires not just a mind, but a prepared one. Today's AIs aren't minds, and preparing even a human mind to be open to unsperiments isn't easy.