Adam I. Gerard


Coining a Concept

A concept I coined to describe a more general, and more advanced, kind of sentience.

  1. Sentience, not just intelligence, at a level or capacity beyond the human.
  2. Traditional conceptions of deities are sufficient but not necessary for supersentience.

In debates surrounding cognition and sentience, cognition is often assumed to underlie most kinds of advanced, deliberate action. I think there are two problems with this approach:

  1. Cognition itself is neither particularly well defined nor well understood - is it just neuron activations (remember that neurons extend throughout the body and don't just reside in the brain)? Does cognition then require language or symbolic reasoning at all?
  2. What about posthuman capabilities? Why project familiar schemas about thought and general, deliberate abilities onto future or far more advanced denizens?

Moreover, most debates surrounding intelligence are intertwined with many assumptions about reasoning, language, and cognition.

'Supersentience' is intended to capture a range of behavior or faculties not yet seen in currently known sentient beings.


Functionalism

The thesis that mental states are:

  1. Identified by what they do rather than what they are made of.
  2. Determined by the role they play or the system of which they are a part.

Functionalism, here, is not to be confused with computationalism or symbolic computationalism, which have been subsumed by connectionism (neural networks and multi-layer perceptrons, for example) - itself nevertheless a kind of functionalism.
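A programming analogy - entirely my own illustrative construction, not part of the philosophical literature - may make the first point concrete. In structural (interface-based) typing, a role is specified purely by what it does, and any implementation that plays the role counts as realizing it, regardless of what it is "made of":

```python
from typing import Protocol

# A "role" specified purely by behavior (a functionalist analogy): anything
# that responds appropriately to a stimulus realizes the role, no matter
# what substrate implements it.
class PainRole(Protocol):
    def respond(self, stimulus: int) -> str: ...

class CarbonSystem:
    """A biological-style realizer (hypothetical)."""
    def respond(self, stimulus: int) -> str:
        return "withdraw" if stimulus > 5 else "ignore"

class SiliconSystem:
    """A machine-style realizer (hypothetical)."""
    def respond(self, stimulus: int) -> str:
        return "withdraw" if stimulus > 5 else "ignore"

def realizes_pain_role(system: PainRole) -> bool:
    # The role is functional: strong stimulus -> avoidance behavior.
    return system.respond(9) == "withdraw" and system.respond(1) == "ignore"

print(realizes_pain_role(CarbonSystem()))   # True
print(realizes_pain_role(SiliconSystem()))  # True
```

Both systems realize the same role despite different "substrates" - the multiple-realizability intuition behind functionalism.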


Structuralism

The thesis that objects are:

  1. To be identified with relations - understood as eliminativism about objects, or the thesis that objects reduce to relations.
  2. Entities whose essence (character or nature, if you prefer) is determined by, and identified with, the relations in which they stand.

'Structuralism' in this sense is not to be confused with 'social structuralism'. Here, 'structuralism' indicates 'ontological structuralism'.


Hive Minds

The thesis that certain colonies of specific animal species:

  1. Collectively are an organism.
  2. Operate with an emergent sentience, hive mind, or collective decision-making capacity greater than the sum of the limited, individual sentiences or decision-making capacities of the animals that constitute the colony.
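One formal illustration of the second point is Condorcet's jury theorem: if each member of a group independently makes the right call with probability only slightly above chance, the probability that a majority vote is correct grows toward 1 as the group grows. A minimal sketch (the 0.6 individual accuracy is an arbitrary assumption for illustration):

```python
from math import comb

def majority_correct(n: int, p: float) -> float:
    """Probability that a strict majority of n independent voters,
    each correct with probability p, reaches the correct decision."""
    k_min = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(k_min, n + 1))

p_individual = 0.6  # each limited agent is only slightly better than chance
print(majority_correct(1, p_individual))    # 0.6 - a lone agent
print(majority_correct(101, p_individual))  # far higher than any individual
```

The collective's decision-making capacity exceeds that of any constituent, without any individual becoming smarter.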

Observations and Thoughts

  1. Structuralism may provide a more uniform, fundamental explanation for functionalism.
  2. Network architecture and the concept of "wrapping" help to shed some light on emergent orders of sentient capacity in nature and the world.
  3. Arguably, any sufficient network wrapping (that is, comprising or having as a constituent) a sentient network or system would itself be sentient.

Call the last idea 'transitivity' - the thesis that constituent members of a collective sentience bestow their (potentially more limited) sentience upon the collective.
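The transitivity idea can be sketched as a toy graph model - again my own illustrative construction, with node names chosen purely for the example. Represent "wrapping" as a relation from networks to their constituents, and let sentience propagate upward from a set of base-level sentient nodes:

```python
def is_sentient(node, wraps, base_sentient):
    """True if `node` is sentient at the base level, or wraps
    (directly or indirectly) some sentient constituent."""
    if node in base_sentient:
        return True
    return any(is_sentient(child, wraps, base_sentient)
               for child in wraps.get(node, []))

# A colony wraps individual ants; the biosphere wraps the colony.
wraps = {
    "biosphere": ["colony", "ocean"],
    "colony": ["ant-1", "ant-2"],
}
base_sentient = {"ant-1", "ant-2"}

print(is_sentient("colony", wraps, base_sentient))     # True
print(is_sentient("biosphere", wraps, base_sentient))  # True
print(is_sentient("ocean", wraps, base_sentient))      # False
```

Under this toy rule, any network that wraps a sentient system inherits sentience - which is exactly why the principle is so striking (and contestable): it counts very large, loosely integrated wrappers as sentient too.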

The idea that humans have unwittingly been surrounded by advanced sentient beings for the entirety of their existence is a shocking one (in its own right).

Are there other, practical considerations that we might draw from the brief sketch above? I think yes, many:

  1. It might be worth thinking closely about machine learning and the Internet of Things - how do sentient machines influence the sentience of the network they are part of? I don't necessarily mean a TCP/IP network; rather, what about machine learning architectures whereby various sentient systems are combined (and presumably able to communicate directly through some protocol, using data that each may consume and use in decision-making)?
  2. Sentience may be the more relevant notion after all given the difficulties in defining intelligence, IQ, and demonstrating exactly what constitutes an intelligent system and how intelligence itself arises and works.
  3. Human sentience, within the context of augmentative cybernetics (devices or enhancements) as well as within the context of constantly connected human systems, will, I think, run right up against traditional conceptions of human identity, agency, and intelligence. For example, there are some rather obvious Ship of Theseus problems that arise upon considering cybernetic replacement (of organic parts). If humans are modified to a certain degree, do they retain psychological continuity, the same agency, identity, and so on? If people become permanently interconnected (say, by direct neural link), what then do we conclude about these traditional sorts of questions?