Adam I. Gerard

Posthumanism and Transhumanism

Much has been said about the threat posed by the arrival of human gene editing. Just as much has been said about the potential benefits such technologies may bring.

Here, I'd like to briefly discuss two interrelated positions and clarify some potential areas of confusion.


Francis Fukuyama writes that Transhumanism aims for:

"Nothing less than to liberate the human race from its biological constraints."

In fact, Fukuyama has called Transhumanism "the world's most dangerous idea," arguing that it threatens the egalitarian foundations of liberal democracy.

He later expanded and recalibrated his original (broadly Hegelian) thesis from The End of History and the Last Man to account for likely disruptions from biotechnology, identifying a posthuman future as a likely outcome.

Andy Miah argues that Fukuyama appears to conflate these two concepts (posthumanism and transhumanism) and that Fukuyama's expanded thesis is ultimately neither of these two.

The points above are largely disputes over terminology, so there's some footwork to do in clarifying the two isms!


As I have stated elsewhere (and as Miah correctly points out):

Posthumanism is typically seen as a negative thesis (one that defines by denying or delimiting some other claim): it asserts that morality, ethics, and politics do not end with humans.

For example:

  1. Human-only or human-centric legal norms are not only philosophically dubious but fly in the face of modern concerns regarding animal rights.

  2. Human-only or human-centric legal norms and ethics rely on metaphysically indefensible assumptions that our deeper kind (a) is clearly determinate and (b) exists at all.

  3. Human-only or human-centric legal norms and ethics ignore the eventual need to specify the legal status or rights of A.I. or other synthetic sentience.

    The assumption of some "human nature" is also criticized by Nick Bostrom.

Here, the "post" in "posthumanism" refers to moving beyond or past "humanism" which emerged in the secular intellectual climate of the Renaissance. This way of prepending "post-" to ideas has caused a lot of confusion and is increasingly annoying as an intellectual trope.

Transhumanism is typically seen as a positive thesis (one that defines by asserting some claim outright): it holds that human beings are, in some sense, a stepping-stone to another species, and it couples that claim with the active pursuit of such a transition via political and scientific means.

One can therefore be a posthumanist without being a transhumanist, and vice versa. I would argue (anecdotally) that most people are already posthumanists, though they would be wary of transhumanism.

Using the clarified terms above, I would call myself a posthumanist but not a transhumanist although I believe both posthumanism and transhumanism will prevail (mostly resulting from competition between nation-states).

Critique of Nietzsche

I rather enjoy Nietzsche's writing (and find many of his observations to be witty and insightful - cool aphorisms). However, I'm generally wary of those who espouse Nietzschean philosophy as a system of beliefs.

It's worth taking a moment to criticize the Nietzschean Übermensch concept here (which I think is best translated, following Graham Parkes, as Overhuman rather than as, say, Superman).

Nietzsche asserted (perhaps most prominently in Thus Spake Zarathustra) that:

  1. Human beings are not an evolutionary end (and shouldn't be treated as the ultimate or sole political or ethical end).
  2. Human beings are "superior" (in some sense) to other "lesser" animals ("worms", "apes") and represent a teleological progression - a step toward some higher state of being, evolution, or understanding.

We may reject the second component and assent to the first without conflict.

Modern biology suggests that an undirected, chaotic, and non-teleological picture squares more appropriately with our current understanding of nature and of organisms across time.

We know, for instance, that some species re-evolve traits they had previously lost - "going back", in effect, to an earlier evolutionary state (so-called "repeated evolution" or "recurrent evolution"). That alone should make us doubt the second component above.

Regarding Synthetic Sentience

In the spirit of posthumanism, I propose a multi-tier classification scheme that affords rights in proportion to the degree of sentience. Sentience is here defined as the capacity to make autonomous decisions and to experience pain, sensory information, and conscious phenomenology.

  1. Zero - the current status of all known devices. Such devices have no sentience and are therefore merely objects, treated as ordinary property.
  2. One - machines or synthetic intelligences with partial human sentience. Certain legal norms prohibit certain kinds of interactions with them, and they are held more accountable than status Zero devices.
  3. Two - machines or synthetic intelligences at the level of human sentience, which are treated as humans under the law.
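The tiered scheme above can be sketched in code. This is a minimal, purely illustrative sketch (the tier names and the particular bundle of legal statuses are my own hypothetical choices, not part of any existing legal framework):

```python
from enum import IntEnum

class SentienceTier(IntEnum):
    """Hypothetical tiers from the classification scheme above."""
    ZERO = 0  # no sentience: merely an object, ordinary property
    ONE = 1   # partial sentience: limited protections and accountability
    TWO = 2   # human-level sentience: treated as a human under the law

# Illustrative mapping from tier to an (assumed) bundle of legal statuses.
LEGAL_STATUS = {
    SentienceTier.ZERO: {"property": True,  "protected": False, "personhood": False},
    SentienceTier.ONE:  {"property": False, "protected": True,  "personhood": False},
    SentienceTier.TWO:  {"property": False, "protected": True,  "personhood": True},
}

def rights_for(tier: SentienceTier) -> dict:
    """Return the illustrative legal-status bundle for a given tier."""
    return LEGAL_STATUS[tier]
```

Using an ordered enum makes the "in proportion to the degree of sentience" idea explicit: higher tiers compare as greater, so protections can be granted monotonically with sentience.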

Note: Asimov's Three Laws of Robotics (while brilliant) are not equivalent to this scheme in any meaningful way (and it's doubtful they come out valid in any standard S4- or S5-based deontic logic).
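To see why the Laws sit uneasily in a deontic logic, consider one hypothetical formalization of their lexically ordered structure (my own sketch, writing O for obligation and V_i for "complying would violate Law i"):

```latex
\begin{align*}
L_1 &: O(\neg \text{harm}) \\
L_2 &: \neg V_1 \rightarrow O(\text{obey}) \\
L_3 &: (\neg V_1 \wedge \neg V_2) \rightarrow O(\text{protect})
\end{align*}
```

The rank-ordered, conditional obligations in L_2 and L_3 are precisely the kind of structure that produces contrary-to-duty puzzles (e.g., Chisholm's paradox) when embedded in a single standard deontic system, which is one reason to doubt their straightforward validity there.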

See Also

  1. Nick Bostrom comes down in favor of Transhumanism.

  2. Juan Enriquez notes that our children will likely be a different species from us.