Monday, October 04, 2021

Scious organizations

 Many things are said to take on a life of their own: corporations, movements, projects, urban legends, accusations, and so on. (For sample sentences, see The Free Dictionary, Longman, and Merriam-Webster.) I’m going to take this literally and assume that human enterprises – notably digital technologies – are not just alive but also conscious. In this way digital tech is a greater-than-human mythological force, that is, a god.

Introduction

Even though I simply posit that organizations can be conscious and even sentient, this post is an attempt to make that assumption plausible. Since the network structure of collections of people seems to resemble that of neural aggregates in the brain, and since some brains are conscious and even sentient, it seems plausible that human enterprises with similar structures might be, too.

Data, forests, brains

Caleb Scharf (or at least the publisher’s blurb for The Ascent of Information) claims that “Your information has a life of its own.” Scharf argues that information “is, in a very real sense, alive;” the data we create amounts to an aggregate lifeform with goals and needs that can control our behavior and influence our well-being. I don’t think Scharf claims data is sentient, though the book’s tagline proposes that your information “[is] using you to get what it wants,” which implies some degree of consciousness and certainly agency. Being alive is not much of a stretch, though. Viruses have needs and can influence our well-being. But one can go much further.

Suzanne Simard [1], relying on Aron Barbey’s network neuroscience theory of human intelligence [2] (see also [3]), argues that trees linked by fungal networks between their roots have behaviors with “cognitive qualities” that include perception, learning, and memory. (For a recent argument against this approach, with references to other criticisms, see [10].) If trees connected by fungi can think, how much more so thousands of human brains connected by interpersonal networks?

Social networks with small-world topologies

Simard argues that the small-world network topology of forest root networks is a hallmark of cognition, and thus that forests think. While that’s quite a stretch (small-world networks aren’t limited to thinking systems), one can easily find evidence that connected groups of people have a topology similar to neural networks; this makes sense, since small-world networks support regional specialization with efficient information transfer.

For example, Google Scholar searches on terms like ‘small-world scale-free networks in corporations’ yield many suggestive papers. Small-world network analysis has become fashionable in social science. Small-world networks have been found in the US corporate elite (2003, 2012), Italian finance, Polish corporate board and director networks, business groups, the US venture capital industry, the internet technical community in China, online discussion groups, and e-mail networks.

There is also an extensive literature on enterprise social networks (h/t Marc Smith) although the work I’ve found so far is more about design and the productivity impacts of digital social networks than network metrics.
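
For what it’s worth, the basic small-world check (high clustering combined with short average path lengths, relative to a size-matched random graph) is straightforward to run on an organization’s own communication graph. The sketch below is illustrative only: it assumes the graph is available as a networkx edge list, and the file name and comparison thresholds are placeholders rather than values from any of the studies above.

```python
# A minimal, illustrative sketch: does an organization's contact graph show the
# classic small-world signature? Assumes networkx; the file name and thresholds
# are placeholders, not taken from any of the studies cited above.
import networkx as nx

def small_world_signature(G: nx.Graph) -> dict:
    """Compare clustering and path length against a size-matched random graph."""
    n, m = G.number_of_nodes(), G.number_of_edges()
    rand = nx.gnm_random_graph(n, m, seed=0)  # Erdos-Renyi reference graph
    C, C_rand = nx.average_clustering(G), nx.average_clustering(rand)
    # Use the largest connected component so average path length is defined.
    giant = G.subgraph(max(nx.connected_components(G), key=len))
    giant_rand = rand.subgraph(max(nx.connected_components(rand), key=len))
    L = nx.average_shortest_path_length(giant)
    L_rand = nx.average_shortest_path_length(giant_rand)
    # Small-world signature: clustering much higher than random, path lengths
    # nearly as short as random (the multipliers here are arbitrary cut-offs).
    return {"C": C, "C_rand": C_rand, "L": L, "L_rand": L_rand,
            "small_world_like": C > 3 * C_rand and L < 1.5 * L_rand}

if __name__ == "__main__":
    # Placeholder file: one "person_a person_b" pair per line (e.g. e-mail contacts).
    G = nx.read_edgelist("org_contacts.edgelist")
    print(small_world_signature(G))
```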

Social vs. neural networks

Even if the network structures of human organizations and brains are similar, that similarity might not prove anything. If nothing else, the differences could swamp the similarities. What are the differences?

Topology: nodes and links

A human brain and an organization of humans are very different in terms of their network elements, even if they have topological similarities like small-world and scale-free structures. 

First some rough numbers. 

Let’s assume the neocortex has 20 billion neurons. While only about 0.3% of US firms have over 500 employees, those large companies account for half of employment (BusinessInsider). Still, even the largest employers have only a few million employees, and only a few countries have more than 100 million people, so brains have 100 to 10,000 times as many neurons as even the largest human collectives have members.

Let’s assume each neuron has ~100 dendrites. For inter-human connectivity, let’s use Dunbar’s number of about 150 as a figure of merit; it measures the number of stable social relationships people can have. This is similar to the number of dendrites per neuron, so I’ll assume per-node connectivity doesn’t differ by orders of magnitude.

Let’s assume the max bitrate of each neuron connection is no more than the max firing rate of 250-1000 Hz; assuming 1 bit/Hz coding gives per-connection bandwidth of ~1 kbps. Inter-human bandwidth is much larger. Looking just at digital connections and leaving aside all the context that isn’t measured by internet speed – a raised eyebrow can speak volumes – we’re talking order 10 Mbps. Thus, inter-human connections are ~10,000 times as fast as neuron connections.

At this level of hand-waving, the human brain’s neural network has far more nodes than even the largest human collectives, but inter-human connections are much faster than interneuron links. The per-node connectivity is broadly the same. We might call that a wash, even though the network structures are very different.
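
For what it’s worth, the arithmetic behind this hand-waving is easy to reproduce. The figures in the sketch below are the same rough assumptions used above (20 billion neocortical neurons, a couple of hundred million people in the very largest collectives, ~100 dendrites per neuron, Dunbar’s 150, ~1 kbps per neural link, ~10 Mbps per digital link), not measurements.

```python
# Back-of-envelope ratios using the rough figures assumed in this section.
neocortex_neurons = 20e9      # assumed neocortical neuron count
largest_country   = 2e8       # roughly the largest national populations considered above
largest_company   = 2e6       # a few million employees at the biggest employers

dendrites_per_neuron = 100    # assumed connections per neuron
dunbar_number        = 150    # stable social relationships per person

neuron_link_bps = 1e3         # ~1 kbps per neural connection (250-1000 Hz at 1 bit/Hz)
human_link_bps  = 10e6        # ~10 Mbps per digital inter-human connection

print(f"Node count: brains have {neocortex_neurons/largest_country:.0f}x to "
      f"{neocortex_neurons/largest_company:.0f}x as many neurons as members")
print(f"Per-node connectivity: {dunbar_number/dendrites_per_neuron:.1f}x (same order of magnitude)")
print(f"Per-link bandwidth: inter-human links ~{human_link_bps/neuron_link_bps:.0f}x as fast")
```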

But here’s the kicker: each node in the human network itself contains a brain; it stores vastly more state and can do much more processing than a neuron. That is: the nodes in human networks necessarily have vastly more sophisticated processing capabilities than the nodes in brain networks.

A much more important difference, to my mind, is that organizational networks overlap and interpenetrate in a way that brain networks don’t. Neurons belong to one brain or another, not more than one. (For a caveat, see [4].) On the other hand, the boundaries between greater-than-human networked entities are hard, if not impossible, to draw. Let’s take individual humans as network nodes. Whereas neurons belong to one brain or another, people can belong to multiple related networks simultaneously, such as being corporate employees while also being members of industry bodies and professional associations. Some people in Alphabet’s YouTube division, for example, may be more tightly connected to former colleagues at Snapchat and Instagram than they are to folk in Google Search.

There are also overlaps at higher levels of abstraction such as alliances of organizations. Various divisions in a company may have joint ventures with otherwise competing companies; governments may consist of shifting constellations of political parties; and countries may be members of partially overlapping alliances like NATO, the EU and the Five Eyes.

Scious-ness

Sentience and consciousness are closely related terms with many possible definitions. One distinction that’s sometimes made is that consciousness is a step below sentience; that is, sentience includes an awareness of experience, or experience of the world outside the self.

In order to avoid unintended connotations, I’ll sometimes use the rare and obsolete word “scious” to denote a blurry notion of being conscious and/or sentient. (According to the OED and The Free Dictionary, scious means possessed of or having knowledge, respectively.)

My working hypothesis thus becomes that collective human structures like organizations and industries are scious. This also applies to other collectives like bureaucracies, organized religions, states, political parties, movements, the global financial system, supply chains, markets, and in-person and online social networks. The list is endless.

There is a link to, or overlap with, egregores though I don’t know quite what it is. An egregore might be to a collection of people what the mind is to the brain. Dubuis in Fundamentals of Esotericism, quoted by Stavish [5], calls it “the psychic and astral entity of a group.” Stavish himself has described egregores, on a psychic level, as “anthropomorphized images of the concept at hand” [5].

Sub-brains: Scious but oblivious?

It seems plausible that large collections of brains can be scious without being aware of the consciousness of the entities that comprise them; that is, a scious corporation may not be aware of our individual, personal consciousnesses. Conversely, it seems hard for us to be aware of the consciousness of these aggregates. I’ll try to justify these claims by more analogies.

The human brain is said to contain around 86 billion neurons. Biological neural networks tend to be modular; depending on the method used to identify modules, researchers have identified of the order of 30 brain modules. Parrots have up to around 3 billion neurons and appear to be conscious. Many human brain modules could therefore have as many neurons as a parrot, or more (86 billion spread over ~30 modules averages roughly 3 billion neurons per module); could those modules be conscious?

This takes us to the controversial idea of dual consciousness, the notion that a person may develop two separate conscious entities within their one brain after undergoing a corpus callosotomy (Wikipedia). There are also several models of multiple consciousness which claim that the brain consists of many independent or semi-independent agents. Michio Kaku, for example, floated the analogy of the brain to a large corporation: the brain is like “a large corporation[:] a huge bureaucracy and lines of authority, with vast flows of information channeled between different offices. But the important information eventually winds up at the command center with the CEO [where] the final decisions are made.” [6]

I can imagine that some of these brain modules could have consciousness, distinct from the whole-brain’s consciousness. I (that is, the unified experience generated by my whole-brain) have no awareness of the consciousness of my sub-brains (pace Carl Jung’s notion of the unconscious and its many archetypes), nor do the sub-brains have any awareness of the consciousness of my whole-brain.

The top-down argument

This post is essentially a bottom-up argument: given a human social network with a topology similar to a conscious neural network, one can infer consciousness by analogy. One could instead start top-down: observe consciousness in meta-human entities and then try to explain it using concepts from neuroscience.

Models of consciousness

There are many ways to explain human consciousness, such as the list of five of the most influential mentioned in a NewScientist article: global neuronal workspace; attention schema; predictive processing; integrated information; and orchestrated objective reduction. (See [7] for an excerpt from the article giving a brief description of each.)

At the moment I’m quite taken by Giulio Tononi’s integrated information theory (IIT) of consciousness, since it defines a quantitative measure (called phi) of whether, and to what extent, any physical system is conscious; however, it is still a work in progress (Wikipedia). It would be instructive to compare phi for a human brain with phi for a human organization. The problem is that the calculation of phi for even a modestly sized system is said to be computationally intractable. Several heuristics are available, according to Wikipedia – but they can give qualitatively different results even for very small systems.
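
One way to see the intractability: an exact phi calculation searches over ways of cutting the system apart, and even the number of simple bipartitions of n elements into two non-empty groups is 2^(n-1) - 1 (the full IIT procedure searches a much larger space of partitions and candidate subsets). The toy sketch below just counts candidate cuts; it is not a phi calculation.

```python
# Toy illustration of why exact phi is intractable: just counting the ways to
# bipartition n elements into two non-empty groups gives 2**(n-1) - 1 candidates,
# and the full IIT calculation searches far more than bipartitions.
def bipartition_count(n: int) -> int:
    return 2 ** (n - 1) - 1

# n = a small team, ~30 brain modules, a mid-sized department, a 300-person firm
for n in (10, 30, 86, 300):
    print(f"{n:>4} elements -> about {float(bipartition_count(n)):.2e} bipartitions")
```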

One could also try to find the human enterprise equivalent of neural correlates of consciousness (NCC), defined by Christof Koch as “the minimal set of neuronal events and mechanisms jointly sufficient for a specific conscious percept” such as synchronized action potentials in neocortical pyramidal neurons [8]. The problem, of course, is that a neural correlate of consciousness, even if one could identify one, presupposes being able to identify consciousness; and NCC, like all these models, has its problems even in humans. (IIT and NCC are linked; Koch has collaborated with Tononi in developing IIT, and the two of them wrote a 2011 review article on NCC together.)

Signs of sciousness

I skated by a huge assumption above: how would one observe (or even just define) (con)sciousness in meta-human entities? Even just for humans there is little if any agreement about what consciousness is, let alone how to explain it. This is essentially future work, but for the moment here’s an interesting approach I stumbled on recently.

Jonathan Birch and colleagues have suggested considering five separate dimensions of conscious experience, rather than a “levels of consciousness” model such as that implied by IIT’s phi metric. The five are perceptual richness, evaluative richness, integration at a time, integration across time, and self-consciousness. They argue that each animal species has its own distinctive consciousness profile. A NewScientist article gives some examples: scrub jays evince a sense of time by burying food to eat later, and self- (and other-)consciousness by practicing deception; octopuses probably don’t recognize themselves, but their play suggests evaluative richness. While humans integrate two visual fields into a single conscious experience, some evidence suggests that each of an octopus’s limbs (two-thirds of its neurons are located in its arms) operates semi-autonomously. The sciousness of an organization may therefore be nothing like human consciousness, or that of any other animal.
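
If one wanted to bookkeep such profiles for organizations as well as animals, a minimal data structure might look like the sketch below. The field names follow Birch and colleagues’ five dimensions; the 0-to-1 scale and the example scores are entirely made up for illustration and are not from their work.

```python
from dataclasses import dataclass

@dataclass
class ConsciousnessProfile:
    """Five dimensions per Birch et al.; the 0-1 scale here is illustrative only."""
    perceptual_richness: float
    evaluative_richness: float
    integration_at_a_time: float
    integration_across_time: float
    self_consciousness: float

# Made-up example values, just to show the shape of a multidimensional profile.
scrub_jay    = ConsciousnessProfile(0.4, 0.3, 0.5, 0.8, 0.6)  # caching food, deception
octopus      = ConsciousnessProfile(0.7, 0.6, 0.2, 0.3, 0.1)  # rich senses, weak unity
organization = ConsciousnessProfile(0.2, 0.1, 0.1, 0.9, 0.5)  # pure speculation
```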

I’m hoping that by the “If I can think of it, it’s already been done” rule there’s a rich literature about the character and perhaps consciousness of corporations as such. I haven’t found it yet, but hints and pointers include notions like corporate culture, personhood, and lifecycle; and financial and HR metrics of company “health.” For example, Molenaar et al. define corporate culture as “the beliefs, values and behaviors that are consistent among all members of the corporation”; what if the corporation itself is then deemed to hold those beliefs and values and to demonstrate a consistent set of behaviors? (There are, of course, more definitions of corporate culture than one can shake a stick at; see e.g. [9].)

Updates

27 Oct 2021: For more on this topic, see my new post O-gregores: Perhaps the aliens are already here

16 Oct 2021

  • Added ref. [10]
  • Pandey & Gupta (2008) discuss the collective consciousness of business organizations, but neither this paper nor its key references, e.g. Gustavsson & Harung (1994), are well cited [11]. They mention Durkheim's notion of collective consciousness, defined by Durkheim (1893) according to ThoughtCo as "the totality of beliefs and sentiments common to the average members of a society," which may be worth following up; this sounds like a social imaginary, although he's quoted as saying it "has a life of its own," and the ThoughtCo author contends it exists independently of individual people. Similarly, Pandey & Gupta conceive of collective consciousness as "an experience based transcendent structure, shared by groups of people"; again, the focus is on what is experienced by individuals, rather than on the consciousness of the collective as such. See also [12].

References

[1] Simard, S. W. (2018). Mycorrhizal Networks Facilitate Tree Communication, Learning, and Memory. In F. Baluska, M. Gagliano, & G. Witzany (Eds.), Memory and Learning in Plants (pp. 191–213). Springer International Publishing. https://doi.org/10.1007/978-3-319-75596-0_10

[2] Barbey, A. K. (2018). Network Neuroscience Theory of Human Intelligence. Trends in Cognitive Sciences, 22(1), 8–20. https://doi.org/10.1016/j.tics.2017.10.001

[3] Telesford, Q. K., Simpson, S. L., Burdette, J. H., Hayasaka, S., & Laurienti, P. J. (2011). The brain as a complex system: Using network science as a tool for understanding the brain. Brain Connectivity, 1(4), 295–308. https://doi.org/10.1089/brain.2011.0055

[4] There’s at least one important caveat to the claim that neurons don’t exist in many brains: Edwin Hutchins’ notion of distributed cognition maintains that “cognition involves not only the brain but also external artifacts, work teams made up of several people, and cultural systems for interpreting reality (mythical, scientific, or otherwise).” In this approach, cognition is distributed among individuals and their artefacts. It does not AFAIK say anything about consciousness. 

[5] Stavish, M. (2018). Egregores: The Occult Entities That Watch Over Human Destiny. Inner Traditions.

[6] Kaku, M. (2014). The Future of the Mind: The Scientific Quest to Understand, Enhance, and Empower the Mind. Anchor.

[7] Here is Kate Douglas’s summary of five influential models of consciousness in NewScientist:

Global neuronal workspace

Information entering the brain from the outside world or the body competes for attention in the cortex and a structure in the centre of the brain called the thalamus. If the signal it generates is stronger than signals from other information, it is broadcast across the brain into the global workspace. Only then do you consciously register it.

Attention schema

The brain evolved to contain a model of how it represents itself. This attention schema is like a self-reflecting mirror. It is what creates the subjective feeling of consciousness. There is no “ghost in the machine”; consciousness is just a mirage created by sophisticated neural processing.

Predictive processing

The brain is a prediction machine, meaning that what we perceive is the brain’s best guesses about the causes of its sensory input. As a result, much of conscious experience and selfhood is based on what we expect, not what is there.

Integrated information

Consciousness isn’t confined to brains. It arises in any system as a result of the way information moves between its subsystems. The degree of integration of this information is measured with a metric called phi. Any system with a phi of more than zero is conscious.

Orchestrated objective reduction

Quantum mechanics can explain consciousness. Microscopic structural elements within the brain, called microtubules, can exist as a superposition of all possible states. This quantum system collapses into a single state when the mass of the microtubules in it exceeds a certain threshold. The collapse is what creates consciousness.

[8] Koch, C. (2004). The Quest for Consciousness: A Neurobiological Approach. Roberts and Company.

[9] Daugherty, L. (2007). Defining Corporate Culture: How Social Scientists Define Culture, Values and Tradeoffs among Them. RAND Working Paper, WR-499-ICJ. https://www.rand.org/content/dam/rand/pubs/working_papers/2007/RAND_WR499.pdf

[10] Ginsburg, S., & Jablonka, E. (2021). Sentience in Plants: A Green Red Herring? Journal of Consciousness Studies, 28(1–2), 17–33.  https://www.ingentaconnect.com/contentone/imp/jcs/2021/00000028/f0020001/art00002 

[11] Pandey, A., & Gupta, R. K. (2008). A Perspective of Collective Consciousness of Business Organizations. Journal of Business Ethics, 80(4), 889–898. https://doi.org/10.1007/s10551-007-9475-4

[12] Mathiesen, K. (2005). Collective Consciousness. In Phenomenology and Philosophy of Mind. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199272457.003.0012



