Wednesday, September 27, 2023

Fear Orgs not Bots

Tom Gauld's delightful cartoon "Department of Machine Poetry Research" shows a room full of robots at rows of desks. Two people are looking in from a door at the back, with one saying to the other, "The poetry is absolutely dreadful, but anything that distracts them from rising up and enslaving us has to be a good thing." It beautifully captures both our fear of automation and our inclination to over-attribute agency to machines rather than the organizations in which they're embedded.

Anthropomorphism focuses our attention on human-sized antagonists, whether they're humanoid robots or human-sounding chatbots. Robots like self-driving cars certainly have agency, and we should take account of their individual actions. I'm not sure that GPT-4 has agency (I don't believe the software adapts to its environment in a persistent, long-term way, let alone has intentions), but its ilk may develop it.

However, the organizations that created these tools certainly have agency: they invested billions of dollars with a plan in view, and they are adapting their behavior to their competitors, users, and regulators. The resulting assemblages of people, technology, and protocols (for example, Tesla Inc. rather than individual Model 3s, or OpenAI Inc. rather than an instance of GPT-4 responding to my prompts; and a layer up, the national and international market/regulatory structures) are actors far more potent than individual bots.

It's not either-or. One can have agency at different levels of analysis (cf. Floridi & Sanders' levels of abstraction in "On the Morality of Artificial Agents"). The most useful level(s) will vary with circumstance. In the case of AI policy, I suspect it's more useful to consider the organizational assemblages, i.e., the ogregores, than the artefacts.
