Wednesday, November 23, 2022

Defining Agency

Paul Diduch asked me whether I think weather systems like hurricanes have agency. The short answer is “No.” Giving a longer answer allows me to explore what I want the concept of agency to do, since there is no consensus on its definition even within disciplines like computer science or philosophy, let alone between them. 

(There’s a list of references at the bottom, followed by an appendix summarizing some of the criteria I’ve come across so far.)

I’m using agency as a proxy for human-like behavior. Very roughly, an agent is an entity that chooses to do something. Before I give my current criteria, some preliminary notes:

  • I’m going to do my best to rely only on external observation. That is, I’ll try to avoid commitments to the existence and content of an entity’s internal states such as perceptions or motivations.  
  • I’d love to avoid the anthropocentrism inherent in my terminology, but I don’t see a way around it.
  • I assume that there are non-human (and more broadly, non-biological) agents. 
  • There are affinities between agency and concepts like life, consciousness, and intelligence, which also have contested definitions that roughly mean “something like us.” I don’t think it’s a coincidence.
  • Most approaches imply a range of agency. Usually only the minimal criteria are given, and greater capability is assumed to be open-ended; see my post Degrees of Agency for more.

Criteria for agency

My inventory of agency definitions keeps growing (see the appendix below). There is no consensus list of criteria, but there are frequent overlaps. Based on some of the more common attribute groups, here’s my current working checklist (the terms in parentheses come from the criteria lists in the appendix):

An entity that differentiates itself from an environment (individuality, self-defined individuality)

… that acts on its own terms (autonomy, self-caused action, proactive behavior)

… responding to the environment (interactivity, responsiveness, reactive behavior)

… and adapting to it (adaptability, adaptivity)

In short:

1. Differentiated

2. Autonomous

3. Interactive

4. Adaptive

This list draws most heavily from Floridi & Sanders (interactivity, autonomy, adaptability), and Barandiaran et al. (self-defined individuality, self-caused action). Note, however, that Floridi & Sanders test their criteria by reference to change of the entity’s state. In contrast, I am judging by observed behavior; I don’t look inside the box.

Choice-making is a criterion implicit in #2-4. I have not listed it explicitly, in another attempt to avoid committing to knowledge of internal states. Ascribing choice-making requires the observer to believe that the entity was aware of choices, i.e., had an internal representation of them, and selected one over the others.
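
As a way of keeping myself honest about not looking inside the box, here’s a minimal sketch of the checklist as questions asked only of an externally observed trace of (environment event, entity action) pairs. Everything here (the trace format, the individual tests, the names) is my own placeholder, not something drawn from any of the cited frameworks.

```python
# Purely illustrative: the four criteria as questions asked of an externally
# observed trace. A "trace" is a list of (environment_event, entity_action)
# pairs per observation step, with None meaning "nothing observed".
from typing import List, Optional, Tuple

Step = Tuple[Optional[str], Optional[str]]  # (environment event, entity action)

def is_autonomous(trace: List[Step]) -> bool:
    """Acts at least once in a step with no observed environmental event."""
    return any(action is not None and event is None for event, action in trace)

def is_interactive(trace: List[Step]) -> bool:
    """Acts at least once in the same step as an environmental event."""
    return any(action is not None and event is not None for event, action in trace)

def is_adaptive(trace: List[Step]) -> bool:
    """Responds differently to the same event in the later half of the trace."""
    half = len(trace) // 2
    early = {e: a for e, a in trace[:half] if e is not None}
    late = {e: a for e, a in trace[half:] if e is not None}
    return any(early[e] != late[e] for e in early.keys() & late.keys())

def agency_checklist(trace: List[Step], differentiated: bool) -> dict:
    """Criterion 1 (differentiation) is left to the observer's judgment."""
    return {
        "differentiated": differentiated,
        "autonomous": is_autonomous(trace),
        "interactive": is_interactive(trace),
        "adaptive": is_adaptive(trace),
    }
```

Nothing in the sketch refers to internal state; each verdict is computed from the trace alone, with differentiation left as an observer’s judgment call.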

Avoiding internal states 

I have constructed this list in part to avoid reference to internal states as much as I can. It’s true that one can observe internal states (like desires and representations of the world) in oneself. One can also—given sufficiently sophisticated measurement devices—observe biochemical, neural, mechanical or software internal states in biological systems and human artefacts. However, such observations are difficult and often impossible in practice. The interpretation of internal states can also be non-obvious, e.g., associating a particular pattern of neuron firing with a specific representation, establishing what a specific set of AI neural net parameters represent, or mapping beliefs of individual employees to a belief of a whole company. I’ll therefore focus on behavior, which is more readily observable.

However, it’s hard to avoid reference to internal states, since assessing most, if not all, of the attributes I have chosen requires making judgments about internal states. (Most of the approaches listed in the appendix implicitly or explicitly assume internal representational and motivational states. List & Pettit are the most explicit. Philip Ball’s is perhaps the only one that doesn’t.)

For example, autonomy (#2) implies that one knows whether behavior was caused by the entity itself versus determined by the environment. The most direct way to do this is to know the internal state, and to know how state determines behavior. My escape hatch is to posit that one can use observation to infer the absence of external causation, implying internal causation. 

Continuing down the list, an entity’s action following a change in the environment can only be judged responsive (#3) or adaptive (#4) if it would not have acted absent that change. This requires either confidence that it perceived the change (changing its internal state) or, again, observational data giving statistical confidence that the action wouldn’t have happened without the change.
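
Here is one illustrative way (a sketch of mine, not something from the literature) to put a number on “it wouldn’t have acted absent the change”: compare how often the entity acts in observation windows where the environment changed versus windows where it didn’t. The trace format, the use of Fisher’s exact test, and the significance threshold are all assumptions made for the sake of the example.

```python
# Compare action rates with vs. without an environmental change and ask
# whether the difference is statistically credible.
from scipy.stats import fisher_exact

def seems_responsive(trace, alpha=0.05):
    """trace: list of (env_changed: bool, acted: bool) observations."""
    acted_change = sum(1 for changed, acted in trace if changed and acted)
    idle_change = sum(1 for changed, acted in trace if changed and not acted)
    acted_still = sum(1 for changed, acted in trace if not changed and acted)
    idle_still = sum(1 for changed, acted in trace if not changed and not acted)
    # 2x2 contingency table: rows = environment changed / unchanged,
    # columns = entity acted / did not act.
    _, p_value = fisher_exact([[acted_change, idle_change],
                               [acted_still, idle_still]])
    rate_change = acted_change / max(1, acted_change + idle_change)
    rate_still = acted_still / max(1, acted_still + idle_still)
    # "Responsive" if acting is significantly more frequent after a change.
    return p_value < alpha and rate_change > rate_still
```

The point is only that the judgment can, in principle, rest on counts of observed behavior rather than on a look inside the entity.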

One might think that the requirement of internal states would be uncontroversial in my target case: ogregores like companies, industries, and other organizations. The contents of the black box are humans, and we know they have perceptions and desires. One might thus claim that the ogregore has internal representations of these percepts and goals, if only in the minds of its human members and/or in the data they create. However, it doesn’t follow from the existence of separate representations (in human minds and various artefacts) that the collective itself has a representation. Thus, I would still prefer to avoid reference to internal states as much as possible.

Intentionality

The most important common criterion from the lists in the appendix that I’ve omitted from my checklist is intentionality (aka goal orientation, design objectives, motivational states). I omit it, again, to avoid reference to internal states.

That still allows the intentional stance of Dennett (1987), in which an intentional system is an entity “whose behavior can be predicted by the method of attributing beliefs, desires, and rational acumen.” The intentional stance doesn’t claim the system actually has beliefs, etc.

On the other hand, goal orientation can be satisfied very simply if appropriately defined. Barandiaran et al. consider a bacterium moving along a chemical gradient to be goal-directed: “interaction is regulated internally and directly linked to processes of self-maintenance.” One could even say that a weather system arranges itself to meet the goal of energy minimization. However, I’m interested in more sophisticated agency, where such simple teleology isn’t available.

“Negative” attributes

There are some agent attributes that seem important to me but that I haven’t come across in the literature yet, perhaps because they are negative traits or weaknesses:

  • Intermittency, fitfulness, fickleness, unpredictability
  • Failure, inadequacy, breakdown, errors

I would like to include them since they signal struggles with trade-offs, dilemmas and compromise which I think are an important part of agency. However, I’ve omitted them since, perhaps even more than the “positive” criteria I’ve used, they entail some assessment of internal state.

Group agency

Some groups of people are said to have collective agency, e.g., by List & Pettit. In addition to the above criteria for agency, I think one also has to require that

  • Group action differs (is distinct or distinguishable) from that of members—the whole has properties not easily inferred from the parts
  • Behavior persists in a continuous way as members change—leaders are necessary but not sufficient to explain behavior

For List & Pettit (2011:34), becoming a group agent goes beyond performing a joint action. A group of individuals must “each intend that they together act so as to form and enact a single system of belief and desire,” each must intend to do their part to achieve this and believe that others will do theirs, and “all of this is a matter of common awareness.” My two criteria above don’t refer to, or provide a way to test, whether a group agent has a single, coherent system of belief and desire about its environment. It may need one.

Performance

Agency as a concept puzzles me. Is it something one defines, or something one recognizes? It’s most easily associated with living systems, but it’s not a persistent system property like life. It comes and goes, like wakefulness and consciousness. However, unlike those two, it’s not associated with subjective experience. Susan Tonkin has suggested I think of it as a potential rather than a property, or that it might be useful to consider the essence/accident distinction. In that case, the agency of a system is the maximum of the observed instances of agency.

Influenced by Andrew Pickering, I’m now thinking of agency as performance. Judging a performance requires extended and attentive observation over time, and different audience members might come to different conclusions about its meaning. A stage performance leads an audience to infer internal states which may or may not actually be present.

I’m reading Pickering’s The Cybernetic Brain: Sketches of Another Future (2011) after watching a great lecture suggested by Paul Diduch. Pickering has argued that science studies had to shift from a representational idiom (taking it for granted that science is about representing the world; an exercise of epistemology) to what he called the performative idiom (a decentred perspective concerned with doing things in the world; ontology) (cf. Pickering, 2002). 

So, do hurricanes have agency?

Weather systems are a good test case for agency definitions since they touch on various perspectives that interest me. They’re identifiable systems that exert effects in an environment; they have significant impacts on society; and they’re not biological. They are on the border of what I consider agency.

One could argue that hurricanes, for example, meet my four “positive” criteria, with suitable interpretations of individuality, interactivity, etc. They also meet some of my negative attributes like fitfulness, unpredictability, and failure (suitably understood). They are emergent structures. They have goals, if one admits energy minimization as a goal. It’s no surprise that storm deities were often top gods in ancient pantheons: Indra, Zeus, Thor, and Tarhun, for example. One could see storms moving across the landscape, and they were fickle and powerful.

One way to rule them out as agents is to say that they don’t have a clear boundary that establishes their identity. Another would be to say that they don’t have internal representations of the environment and their goals; however, I don’t have that option since I reject the “look inside the box” criteria for agency.

My short, instinctive answer to the question of whether weather systems have agency was “No.” A slightly longer answer, in the light of my own criteria, would be “Perhaps, but.” They have minimal agency by some definitions, but it’s not the kind of agency that is useful to me.

Updates

Update 30 Nov 2022: added paragraph about List & Pettit's requirements for group agency.

References

Ball, P. (2020). Life with Purpose. Aeon. https://aeon.co/essays/the-biological-research-putting-purpose-back-into-life

Barandiaran, X. E., Di Paolo, E., & Rohde, M. (2009). Defining Agency: Individuality, Normativity, Asymmetry, and Spatio-temporality in Action. Adaptive Behavior, 17(5), 367–386. https://doi.org/10.1177/1059712309343819

Dennett, D. C. (1987). The Intentional Stance. MIT Press. https://mitpress.mit.edu/9780262540537/the-intentional-stance/

Floridi, L., & Sanders, J. W. (2004). On the Morality of Artificial Agents. Minds and Machines, 14(3), 349–379. https://doi.org/10.1023/B:MIND.0000035461.63578.9d

Jennings, N. R., & Wooldridge, M. (1998). Applications of Intelligent Agents. In N. R. Jennings & M. J. Wooldridge (Eds.), Agent Technology: Foundations, Applications, and Markets (pp. 3–28). Springer. https://doi.org/10.1007/978-3-662-03678-5_1

List, C., & Pettit, P. (2011). Group Agency: The Possibility, Design, and Status of Corporate Agents. Oxford University Press.

Pickering, A. (2002). Cybernetics and the Mangle: Ashby, Beer and Pask. Social Studies of Science, 32(3), 413–437. https://doi.org/10.1177/0306312702032003003

Wooldridge, M. J., & Jennings, N. R. (1995). Intelligent agents: Theory and practice. Knowledge Engineering Review, 10(2), 115–152. https://www.cambridge.org/core/journals/knowledge-engineering-review/article/abs/intelligent-agents-theory-and-practice/CF2A6AAEEA1DBD486EF019F6217F1597

Appendix: Some criteria for agency

Wooldridge & Jennings (1995), quoted in Wooldridge (2002) – an “intelligent” computer system agent capable of action in an environment; see also Jennings & Wooldridge (1998)

  • autonomous (able to act without the direct intervention of humans or other agents; control over its own actions and internal state)
  • reactive, responsive (perceives and responds to environment)
  • pro-active (exhibits opportunistic goal-directed behavior and takes the initiative)
  • social (capable of interacting with other computer agents and possibly humans) 
  • in order to meet its design objectives
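
As a sketch of how these properties are usually cashed out in software, here is a hypothetical skeleton of my own (not Wooldridge & Jennings’ formalism) that maps the four bullets onto parts of a minimal agent loop.

```python
# Hypothetical skeleton: each abstract method corresponds to one of the
# properties above; the run() loop drives perceive-then-act behavior.
from abc import ABC, abstractmethod

class Agent(ABC):
    @abstractmethod
    def perceive(self, percept) -> None:
        """Reactive/responsive: take in what the environment presents."""

    @abstractmethod
    def choose_action(self) -> str:
        """Autonomous and pro-active: pick an action without outside
        intervention, taking the initiative toward its design objectives."""

    @abstractmethod
    def message(self, other: "Agent", content: str) -> None:
        """Social: interact with other agents (and possibly humans)."""

def run(agent: Agent, percepts) -> list:
    """Drive the perceive-act loop and collect the actions taken."""
    actions = []
    for p in percepts:
        agent.perceive(p)
        actions.append(agent.choose_action())
    return actions
```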

Brazier, Jonker & Treur (2000) – characteristics, in addition to Jennings & Wooldridge’s, that may be required of some computer agents

  • adaptivity (learns and improves with experience)
  • procreativity
  • intentionality

Floridi & Sanders (2004) “LoA1” – minimal example: an earthquake

  • a system, situated within and a part of an environment
  • which initiates a transformation, produces an effect or exerts power on it

Floridi & Sanders (2004) “the right LoA (level of abstraction)”  – example: a human, the MENACE tic-tac-toe playing system (“viewed at an appropriate LoA”), a futuristic thermostat

  • interactivity (response to stimulus by change of state)
  • autonomy (ability to change state without stimulus) 
  • adaptability (ability to change the ‘transition rules’ by which state is changed)
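
My reading of these three criteria is that they are facts about a state-transition system, which a toy class can make concrete; the class and its transition rule are invented for illustration and are not Floridi & Sanders’ own formalism.

```python
# Toy state-transition system in which each method corresponds to one of the
# three LoA criteria above.
import random

class LoAEntity:
    def __init__(self):
        self.state = 0
        self.rule = lambda state, stimulus: state + stimulus  # transition rule

    def receive(self, stimulus: int) -> None:
        """Interactivity: a stimulus changes the state via the current rule."""
        self.state = self.rule(self.state, stimulus)

    def tick(self) -> None:
        """Autonomy: the state can also change with no external stimulus."""
        self.state = self.rule(self.state, random.choice([-1, 1]))

    def learn(self) -> None:
        """Adaptability: the transition rule itself is changed."""
        self.rule = lambda state, stimulus: state + 2 * stimulus
```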

Floridi & Sanders (2004) moral agent – example: a search-and-rescue dog

  • an agent if and only if it can cause moral good or evil (i.e., is “capable of morally qualifiable action”)

List & Pettit (2011) – minimal example: a small robotic device putting cylindrical objects upright

  • representational states
  • motivational states
  • acts in the environment to satisfy its motivations per its representations
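
A toy rendering of that cylinder-righting robot (my illustration, not List & Pettit’s): the representational state is where the robot believes tilted cylinders are, the motivational state is that cylinders be upright, and action closes the gap between the two.

```python
# Representational state + motivational state -> action in the environment.
def choose_interventions(perceived_cylinders: dict) -> list:
    """perceived_cylinders maps position -> 'upright' or 'tilted'."""
    desired_status = "upright"                 # motivational state
    beliefs = dict(perceived_cylinders)        # representational state
    return [("set_upright", position)          # act so the world comes to match
            for position, status in beliefs.items()
            if status != desired_status]       # ...the motivation
```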

List & Pettit (2011:34) group agent

  • each member intends that they all act together to form and enact a single system of belief and desire 
  • within a clearly defined scope
  • each intends to do their own part in a plan for ensuring group agency within that scope
  • they believe that others will do their part too
  • this is a matter of common awareness

Barandiaran, Di Paolo & Rohde (2009) – minimal example: a bacterium performing metabolic-dependent chemotaxis

  • self-defined individuality
  • causes its own actions
  • regulates activity in relation to goals

Philip Ball (2020)

  • ability to produce different responses to identical (or equivalent) stimuli
  • ability to select between them in a goal-directed way
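
A sketch of my own (not Ball’s) showing how little machinery those two criteria require: the same stimulus yields a varied set of candidate responses, and one is selected for how well it serves a goal. The Gaussian noise and the numeric goal are arbitrary assumptions.

```python
# Criterion 1: identical stimuli yield different candidate responses.
# Criterion 2: one is selected in a goal-directed way.
import random

def respond(stimulus: float, goal: float) -> float:
    candidates = [stimulus + random.gauss(0.0, 1.0) for _ in range(5)]
    return min(candidates, key=lambda r: abs(r - goal))
```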
