Monday, December 11, 2023

Ogregores & Morality

To many people, moral considerations apply to any sentient being, e.g., to animals. Ogregores probably aren’t sentient, but do they have moral status as moral agents or moral patients?[1] I’m skeptical that they’re moral agents but intrigued by the possibility that they might be moral patients. 

Ogregores, as regular readers know, are bounded techno-social structures that respond intelligently to opportunities and threats. They are composed of people, tools, and procedures. Rough synonyms include organizations, institutions, and artificial agents. Examples include corporations and government bureaucracies.

What is a moral agent?

Most people seem to accept that most human adults are moral agents. Beyond that, it gets tricky: there are many, often hotly debated, criteria for moral agency.[2] It’s often tied to an individual’s ability to make choices based on “right and wrong.”

My working hypothesis is that morality is a subjective experience, and ascribing morality is an exercise in analogy that goes something like this:

  • Premise: Some decisions I make feel freighted with considerations of Right and Wrong.[3] 
  • Conclusion: I’m a moral agent.
  • Premise: I’m human.
  • Conclusion: Moral agency is a property of humans.
  • Premise: Alice is human (or alternatively, “like me” in “relevant” ways).
  • Conclusion: Alice is a moral agent. 

There are several problems with this approach. First, just because someone is human doesn’t mean they’re necessarily a moral agent. Children and people suffering from severe mental impairments aren’t considered to be moral agents.[4] Second, trying to circumvent the “human” test by replacing it with a “like me” test raises the question of what should be compared, and what counts as similarity. 

Third, this approach to ascribing moral agency entails “looking inside the box,” that is, assessing that another entity’s internal state and capabilities are similar to mine. It’s hard to know what other people’s internal state is; it’s extremely difficult with animals, and probably impossible with entities whose internal state has few if any analogies to our own, like software or corporations.

I’m inclined to focus on behavior rather than internal state. Attending to behavior rather than internal states (notably, sentience) is attractive since it’s hard to assess the internal experience of other entities, and it may be warranted since morality is often concerned with both behavior and the internal states that prompt that behavior. That leads me to this analogy for a moral “actor” or moral agent*, where the asterisk marks that this isn’t a conventional definition of an agent.

  • Premise: Alice behaves as if they have a notion of Right and Wrong similar to my own.
  • Conclusion: Alice is a moral actor (or moral agent*).

Appropriate behavior includes both action in a morally freighted situation and dialogue with Alice about what constitutes moral action. Alice need not be human. However, it could be tricky to be confident that an exchange with a Large Language Model constitutes dialogue.

It’s probably more effective as a matter of social policy to focus on influencing behavior rather than trying to guess whether entities are moral agents—certainly for ogregores, and perhaps for people too. 

Ogregores as moral agents

Christian List contends that at least some corporate entities have moral agency, e.g., ones with “procedures and mechanisms in place that allow them to make corporate-level judgements about what is permissible and impermissible, and to act on those judgements” (List, Group agency and artificial intelligence, 2022: 1228). “Indeed,” List continues, “many organizations have compliance departments and ethics committees.”

I’m skeptical. First, compliance seems irrelevant. Compliance with legal requirements or industry standards doesn’t make the complier a moral agent, any more than deciding not to steal something in order to avoid punishment makes me a moral agent. Second, while ogregores are agents, being built from and containing humans does not by itself make them moral agents. I’m not persuaded that corporations are moral agents merely because they have “procedures and mechanisms in place that allow them to make corporate-level judgements about what is permissible and impermissible”.

Some scholars define moral agents in terms of the duties they have, not the treatment they are entitled to by virtue of their subjective states. Syd Johnson, for example, contends that, “Moral agents, then, are not the bearers of unique rights or privileges. Rather, they bear moral responsibilities and duties to others, simply by virtue of the fact that they can bear those responsibilities and duties.” 

This definition allows for ogregores to be moral agents. Johnson argues that “when those duties involve actions that cannot be performed by individuals, and require collective action, we must either view collective entities and groups as moral agents, or we must conclude that no one has moral responsibility” (Shifting the Moral Burden: Expanding Moral Status and Moral Agency, 2021). Observe that a duty-oriented definition does not require moral states of mind, and thus does not require the entity to be human. Responsibilities and (especially) duties move this definition close to the behavioral criterion for a moral actor or moral agent* introduced above.

Floridi & Sanders (On the Morality of Artificial Agents, 2004) make the distinction that “any agent that causes good or evil is morally accountable for it” whereas “the agent needs to show the right intentional states” to be “morally responsible.” By the first definition, a hurricane is morally accountable, which I find counter-intuitive. Since it's unlikely that we can make much sense of the intentional states of non-human agents, especially artificial ones like ogregores, the second definition doesn't provide a guide for deciding whether artificial agents are morally responsible.

Bottom line: Ogregores could be moral agents on some definitions.

Ogregores as moral patients

Rewards and punishments for ogregore behavior could be morally undesirable if ogregores are moral patients, i.e., worthy of moral consideration. (They could also be less controversially undesirable if they’re ineffective.)

Most people don’t consider animals to be moral agents, but do see them as moral patients, that is, entities deserving of moral consideration.[5] The status of animals seems to be tied to the belief that they can suffer, i.e., have some form of sentience; but it might also stem simply from their complexity.

So, do ogregores suffer? Few would assert that they have subjective experience or sentience. They can certainly “suffer harm,” however. Corporate raiders, asset strippers, and leveraged buy-out funds can damage corporations in their own interests.[6] Private equity involvement in healthcare has been alleged to harm patient care, employee conditions, and the long-term financial health of these organizations.[7] 

The harm to an enterprise is usually assessed in terms of harm to people like employees or patients. Harm to the ogregore as such is secondary since it is usually assumed that only harm to humans (or perhaps other sentient beings) matters. However, damage to the non-human parts of an ogregore may eventually lead to human harm, such as when a country’s infrastructure is destroyed in war, or when a company’s staffing is reduced so much that its patients or customers suffer, or when protocols are changed so much that bureaucracies can no longer function.[8] In such cases, the ogregore is harmed through cumulative harm to its constituent people.

This argument may be easier to make in Europe than in the United States. US antitrust law focuses on harm to consumers, such as through higher prices, reduced output, or reduced quality. In contrast, EU competition law is concerned with preventing the abuse of a dominant position even if there is no immediate or direct harm to consumers. This means that practices such as predatory pricing or exclusive dealing can be deemed abusive even without direct consumer harm (GPT-4 via Poe, Nov 2023). EU analysis takes impacts on competitors (i.e., ogregores) into account, not just impact on consumers.

Update 12 Dec 2023: added paragraph on Floridi & Sanders.

Endnotes

[1] A moral patient is an entity that has moral rights and can be wronged but may not necessarily be capable of making moral judgments or taking moral actions itself.

[2] According to the Routledge Encyclopedia of Philosophy, they range from having the capacity to conform to external requirements of morality like obeying laws against murder, to acting out of altruistic impulses, to using reason to rise above feelings and passions (Haksar, Moral Agents). According to Wikipedia, “Moral agency is an individual's ability to make moral choices based on some notion of right and wrong and to be held accountable for these actions. A moral agent is ‘a being who is capable of acting with reference to right and wrong.’” (https://en.wikipedia.org/wiki/Moral_agency, accessed 19 Nov 2023) See also https://en.wikipedia.org/wiki/Moral_responsibility#Artificial_systems. One common theme is that moral agency requires consciousness, i.e., the capacity for inner subjective experience (Himma, Artificial agency, consciousness, and the criteria for moral agency, 2009). A GPT-4 summary suggests that moral agency includes consciousness/sentience, reasoning ability, autonomy, and an understanding of responsibility (GPT-4 via Poe, Nov 2023). 

[3] At the risk of circularity, I believe that “right and wrong” are words that describe that feeling associated with making moral decisions.

[4] I’m conflating terms from two different fields here: fitness to be held responsible in the law, and moral agency in philosophy. I’m suggesting that not being capable of acting with reference to right and wrong—not being a moral agent—means the person is not fit to be held responsible. The inverse does not necessarily follow. A young child might understand that stealing is wrong (making them a moral agent to some extent), but they might still be considered too young to be held legally responsible for theft under the law.

[5] Susan reminded me of the example of a dangerous dog destroyed after killing a child. Many people would say that the child’s death was not the dog’s fault, but its owner’s. Another example: a bear mauls a person; the bear is destroyed. Here there’s no owner to blame; it’s “death by natural causes,” like a lightning strike, even though the bear is held responsible. Nowadays non-human animals are deemed to lack moral agency and aren’t held culpable for their acts, though Wikipedia reports that animal trials took place in Europe from the 13th to 16th century.

[6] GPT-4 gives several examples, including the leveraged buyout of RJR Nabisco by KKR in 1988; Toys "R" Us being taken private in a leveraged buyout in 2005; Sam Zell’s 2007 leveraged buyout of the Tribune Company; the management of Sears Holdings by Eddie Lampert; and Carl Icahn’s involvement in Hertz (GPT-4 via Poe, Nov 2023).

[7] A New Yorker story alleged that when private-equity firm Paladin bought the Hahnemann Hospital in Philadelphia, the most vulnerable patients bore the cost; see also Wikipedia. Advocates allege that Envision Healthcare, owned by KKR, has played a starring role in America’s surprise medical billing problem (source: Private Equity Stakeholder Project). YouTube comic Dr Glaucomflecken does scathing takes on the role of private equity in US healthcare (source: Glaucomflecken.com; see, e.g., one of his skits).

[8] War reparations suggest that a country is being punished for the suffering of another country. The punishment bears more or less directly on people, of course, but the primary calculus is geopolitical.
