Wednesday, October 12, 2022

Degrees of Agency

There are many ways to define agency. The ones I’ve seen provide criteria for deciding whether an entity is or is not an agent. However, once it’s established that something is an agent, it would also be useful to assess its degree of agenthood.

Some criteria for agency

The tests for agency I’ve seen give yes/no answers to the question of whether something is an agent. Here are some from the literature (references at the bottom of the post): 

  • Floridi & Sanders (2004): “Our guidelines for agenthood are: interactivity (response to stimulus by change of state), autonomy (ability to change state without stimulus) and adaptability (ability to change the ‘transition rules’ by which state is changed) at a given [Level of Abstraction].”
  • List & Pettit (2006): “[To] count as an agent … four conditions are individually necessary and at least close to being jointly sufficient. … : First, the system forms representational and goal-seeking states, for example, beliefs and desires, or judgments and plans. Second, in forming and revising these representational and goal-seeking states, the system satisfies appropriate conditions of (theoretical) rationality. … Third, the system acts or intervenes in the world on the basis of its representational and goal-seeking states, as conditions of (practical) rationality require; it acts so as to realize its goals, under the guidance of its representations. Fourth, the system exhibits these properties not just accidentally or contingently, but robustly—that is, not just in actual conditions, but also in a class of relevant possible conditions.”
  • Barandiaran, Di Paolo & Rohde (2009): “We identify three conditions that a system must meet in order to be considered as a genuine agent: (a) a system must define its own individuality, (b) it must be the active source of activity in its environment (interactional asymmetry), and (c) it must regulate this activity in relation to certain norms (normativity).” 
  • List & Pettit (2011): “An ‘agent’, on our account, is a system with these features: it has representational states, motivational states, and a capacity to process them and to act on their basis.” [note 1]

The rest of this post describes a few ways to explore degrees of agenthood:

  • Amplitude (turned-on-ness)
  • Criterion richness
  • Levels of agency
  • Hierarchy
  • Moving matter, money, and minds
  • Power pyramids
  • Other agent attributes

Amplitude (turned-on-ness)

I see agency as a capacity or potentiality that comes and goes, like being awake, alert, or conscious, rather than a permanent condition or essence, like being alive or being human. (As a non-Spanish speaker, I’m reminded of the difference between the “to be” verbs estar and ser; cf. ThoughtCo.) A system’s agency can therefore vary by the degree to which it’s “turned on,” just as an animal’s alertness can vary with the time of day or with its health.

Criterion richness

The agency definitions listed above all test whether a system meets various threshold criteria. However, a system could meet a criterion to varying degrees. Take Floridi & Sanders’ interactivity criterion: a system could be sensitive to few or many stimuli, its sensitivity could be crude or subtle, and its change of state could be binary or highly nuanced. (The dichotomies are shorthand for a range of responses.) Greater richness in meeting a criterion demonstrates more sophisticated agency.
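To make the graded view concrete, here is a minimal Python sketch that replaces the yes/no test with scores. The class name, the 0-to-1 scale, and the min-based aggregation are my own illustrative assumptions, not anything proposed by the cited authors.

from dataclasses import dataclass

@dataclass
class AgencyProfile:
    """Graded take on Floridi & Sanders' three criteria.

    Each score lies in [0, 1]: 0 = criterion not met at all,
    1 = criterion met with maximal richness. The scale and the
    aggregation below are illustrative assumptions only.
    """
    interactivity: float  # breadth/subtlety of responses to stimuli
    autonomy: float       # capacity to change state without stimulus
    adaptability: float   # capacity to change its own transition rules

    def is_agent(self, threshold: float = 0.0) -> bool:
        # The classical yes/no test: every criterion must clear the threshold.
        return min(self.interactivity, self.autonomy, self.adaptability) > threshold

    def degree(self) -> float:
        # One possible graded measure: the weakest criterion caps overall
        # agency, hence the minimum rather than an average.
        return min(self.interactivity, self.autonomy, self.adaptability)

thermostat = AgencyProfile(interactivity=0.2, autonomy=0.05, adaptability=0.0)
bacterium = AgencyProfile(interactivity=0.4, autonomy=0.5, adaptability=0.3)
print(thermostat.is_agent(), thermostat.degree())  # False 0.0
print(bacterium.is_agent(), bacterium.degree())    # True 0.3

On this toy measure, the thermostat fails the yes/no test (its adaptability is zero), while the bacterium passes it yet registers only a modest degree of agency.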

One can perform this exercise for all the criteria. Consider List & Pettit’s (2011) criteria of having “representational states, motivational states, and a capacity to process them and to act on their basis”: the representations and motivations can be crude or subtle, the processing capacity can be limited or extensive, and the resulting actions can be simple or complex. Likewise for Barandiaran, Di Paolo & Rohde (2009): the interactional asymmetry could be small or large, and the system’s ability to regulate its interface with the environment could be restricted or considerable.

Levels of agency

The criteria for agency listed above roughly correspond to different kinds of agency:

  • Minimal agency: cf. Floridi & Sanders, for whom a simple computer made of matchboxes filled with beads that improves its tic-tac-toe play can be an agent (a toy sketch of such a matchbox learner follows this list); and Barandiaran, Di Paolo & Rohde (2009), for whom a bacterium performing metabolism-dependent chemotaxis is an agent.

  • Intentional agency: cf. List & Pettit (2011), for whom a small robot that can identify cylindrical objects and set them upright is an agent.

  • Reasoning agency: cf. List & Pettit (2006), which includes a rationality condition omitted in List & Pettit (2011). The paradigmatic examples are humans and collegial courts.
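The matchbox computer mentioned under minimal agency is Donald Michie’s MENACE. The toy Python sketch below loosely follows its idea; the bead counts, the rewards, and the floor that keeps every move available are my own simplifications, not the historical design.

import random

# Rows, columns, and diagonals of the 3x3 board.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    # Return "X" or "O" if a line is complete, else None.
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

matchboxes = {}  # board state -> {legal move: bead count}

def choose(board):
    # An unseen state gets a fresh "matchbox" with 3 beads per legal move;
    # a move is then drawn with probability proportional to its beads.
    state = "".join(board)
    box = matchboxes.setdefault(
        state, {i: 3 for i, cell in enumerate(board) if cell == "."})
    moves, beads = zip(*box.items())
    return state, random.choices(moves, weights=beads)[0]

def play_and_learn():
    # The learner plays X against a uniformly random opponent (O).
    board, history, player = ["."] * 9, [], "X"
    while "." in board and winner(board) is None:
        if player == "X":
            state, move = choose(board)
            history.append((state, move))
        else:
            move = random.choice([i for i, c in enumerate(board) if c == "."])
        board[move] = player
        player = "O" if player == "X" else "X"
    # Reinforce: a win adds beads, a loss removes one, a draw adds a little.
    reward = {"X": 3, "O": -1, None: 1}[winner(board)]
    for state, move in history:
        matchboxes[state][move] = max(1, matchboxes[state][move] + reward)
    return winner(board)

results = [play_and_learn() for _ in range(20000)]
print("win rate over the last 5000 games:", results[-5000:].count("X") / 5000)

Because wins and losses rewrite the bead counts, the system changes its own transition rules over time, which is precisely Floridi & Sanders’ adaptability criterion, and it does so more or less richly depending on how many states it has encountered.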

This categorization resembles Aristotle’s three kinds of soul corresponding to his three kingdoms of life (plants, animals, and humans): the vegetative, sensitive, and intellective soul (cf. Filosofia do Início).

One can add more levels by considering moral agency and fitness to be held responsible. I’m not ready to venture into such deep water yet. [note 2] 

Hierarchy

Agents of the same type may stand in a hierarchical relationship, with some having broader scope or effects than others. For example, a CEO and a sales assistant are both human agents, but the CEO can direct the sales assistant and not vice versa. As examples of group agents in a hierarchical relationship, consider a supreme court, appeal courts, and courts of first instance.

The group agent at a higher level does not necessarily have more power, though. In the U.S., the federal government has more power overall than the state governments (though there are areas where the states have primacy). The European Union, by contrast, seems less powerful than its member states’ national governments.

Moving matter, money, and minds

The “agentiality” of an entity is related to its ability to affect its environment, that is, to exert power. I’ve been thinking about three kinds of power: the ability to move matter, money, and minds. 

  • Bulk carriers, operating in the highly competitive shipping market, move a lot of matter but may not have significant social impact. They may also not be particularly profitable (cf. CSIMarket).

  • Companies that generate a lot of money and/or jobs can be influential not only in moving matter but also in moving minds (e.g., by influencing politicians). Examples include the financial and retail sectors.

  • A few companies make money because they can move minds, e.g. Alphabet and Meta through their behavioral advertising technologies. 

  • A few, like Amazon, check all three boxes.

Power pyramids

If one imagines a pyramidal hierarchy, the agent at the top (the CEO, say) may have a lot of agency, while the many individuals at the bottom (the workers) have only a little.

If the individuals at the bottom can form a group agent, however, they might exert more agency than the one at the top. A revolution toppling a monarch is an obvious example. A more contemporary version is the user revolt, such as when Digg users mass-posted the HD DVD encryption key in defiance of takedown demands, or the recent pushback against Instagram’s attempt to become a TikTok clone.

Other agent attributes

There are other system features whose degree of presence could distinguish different kinds of agents.

Coherence. When an agent is an aggregate of agents that can act in their own right (that is, a group agent), the harmony or conflict among the sub-agents can make the resulting agent more or less potent. It could go either way: on the one hand, “a house divided against itself cannot stand”; on the other, diversity could lead to a more resilient or adaptable system (“E pluribus unum”). The presence or absence of coherence during decision making may not matter if the group agent can act in a unified way once a judgment has been made.

Size. It seems reasonable that bigger agents are more powerful than smaller ones. Size can be measured along many dimensions, including the ability to move matter, money, and minds. Germany is evidently more powerful than Liechtenstein. On the other hand, smaller agents may be more agile and have the element of surprise (David v. Goliath, SARS-CoV-2 v. Homo sapiens).

Setting. This refers to the context in which an agent’s attributes play out. [note 3] Other things being equal, humans have more effective agency on land than in water. Corporations may be more powerful in lightly regulated, free-market economies than in totalitarian states.


Notes

[1] In a review of this book, Raimo Tuomela describes these as “rather uncontroversial criteria for agency.” This bears out the claim in List & Pettit (2006) that their criteria there “reflect a broad consensus in psychology, economics, and the philosophy of mind.”

[2] Opinions about, and even definitions of, moral agency differ widely. For example, List (2021) defines a moral agent as a List & Pettit (2011) agent “with the capacity to make normative judgements about its choices—judgements about what is right and wrong, permissible and impermissible—and to respond appropriately to those judgements.” An agent is “fit to be held responsible” if it is a moral agent that also meets conditions of knowledge and control. On the other hand, Floridi & Sanders (2004) have a much lower bar for moral agency. For them, “any agent that causes good or evil is morally accountable for it”; to be morally responsible, it further “needs to show the right intentional states.”

[3] Floridi & Sanders contend that agency depends on what one can observe—their so-called Level of Abstraction. The setting I’m concerned with here is not a matter of observables or abstraction—it influences the degree but not the presence of agency.

References

Barandiaran, X. E., Di Paolo, E., & Rohde, M. (2009). Defining Agency: Individuality, Normativity, Asymmetry, and Spatio-temporality in Action. Adaptive Behavior, 17(5), 367–386. https://doi.org/10.1177/1059712309343819

Floridi, L., & Sanders, J. W. (2004). On the Morality of Artificial Agents. Minds and Machines, 14(3), 349–379. https://doi.org/10.1023/B:MIND.0000035461.63578.9d

List, C., & Pettit, P. (2006). Group agency and supervenience. Southern Journal of Philosophy, 44, 85–105. https://doi.org/10.1111/j.2041-6962.2006.tb00032.x

List, C., & Pettit, P. (2011). Group Agency: The Possibility, Design, and Status of Corporate Agents. Oxford University Press.

