Is there personhood, in this new paradigm?
If we start by asking what a person is in the old paradigm, the task becomes a bit easier.
The most important thing to understand is that ‘person’ means something different from ‘human.’ ‘Human’ is a biological taxon. Humanity is defined by genetics. A corpse is human, a finger is human, a developing zygote is human. A single blood cell can have human-ness. Dogs, bacteria, and Martians, on the other hand, are comfortably and universally described as not human.
A single blood cell does not have person-ness. Nor does a severed finger. There are some who believe a zygote can be or is always a person, and some who believe it of corpses. But I don’t think there is a single person who believes a severed finger can be a person. Opinions may vary on the personhood of dogs and Martians, but that they vary at all is sufficient evidence: ‘human’ and ‘person’ are distinct concepts. There is something more than simple human genetics that acts as the criterion for personhood.
Personhood means being understood to have agency, and being beholden to moral rules. People are generally expected to be protected by the law, and it’s generally considered an injustice when they’re not. It’s generally bad when a person is harmed, and less bad when non-persons are harmed. This is very consistent between belief systems. What varies between belief systems is what someone considers harm and injustice, what rules people are beholden to, as well as the criteria for being a person.
Legally and socially, personhood is an assigned category; the assignment criteria vary extensively between societies and jurisdictions. A human may be considered by local law a full person in one country and property in another. But appealing to authority on personhood is unsatisfactory and in some obvious cases extraordinarily immoral.
The three prominent views on more philosophical criteria for personhood are as follows. Cognition- or rationality-based personhood, usually downstream of Locke; this is the dominant view in academia, if not in society, though it’s growing in both. Humanity-based personhood, in which all people are humans and sometimes all humans are people; this is found mostly in religious and anti-abortion ethics and is the conservative (and shrinking) view. And lastly, experience-based personhood, a more recent framework found in animal ethics; this is the most radical of the three.
The conservative view is entirely arbitrary, and would have you believe a never-to-move-again human body with no brain but which still has a beating heart ought to be an object of premier moral concern equivalent to you or me, or that should aliens arrive in a spaceship we have full permission to do whatever we’d like to them and their children without moral concern. If you are comfortable believing that the members of your own species are the only beings that matter in the universe, I have little more to say to you than to suggest that you pray to the god you probably believe in that, should humanity ever encounter aliens, a god, or some kind of superintelligent AI, those other beings agree with me, rather than with you.
The two views that remain both posit that the bounds of personhood ought to be determined by some characteristic of a being’s mind; they only disagree on what characteristic.
We can now return to the new paradigm.
A mind is an information system, made up of particles that by nature of their arrangement exhibit emergent properties—a mind can learn, but electrons and quarks cannot; a mind can exhibit continuity even if its substrate changes in small or large ways. Thus, the question “What level of mind-complexity entails personhood?” is perfectly compatible with that view. It is necessary, then, to describe various levels of complexity in emergent information processing under this systems-view, to which the question of personhood can be applied.
The simplest information systems are constants. They do not change. The existence of things that fall into this category may be up for debate—does the speed of light ‘exist’?
Next up in complexity are simple, or first-level, information systems—no matter how many times it happens, an input will always result in the same output. These systems can be altered, but never alter themselves. This describes fundamental particles: a force applied to one side results in the same amount of opposite force, the same resultant motion, the same changes in energy levels. It also describes most things you would consider an object, such as tables—push a table and the table will respond the same way every time. And it includes most computer software: Siri, simple video games, and the website you’re reading this on (at time of publishing, anyway).
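A first-level system, in other words, behaves like a pure function. The sketch below is only an illustration under my own framing—the essay names no code, and the function here is a hypothetical stand-in for the table example:

```python
# A first-level information system as a pure function: the mapping from
# input to output is fixed, and the system never alters itself.
def table_response(push_force: float) -> float:
    """Push a table; get the same equal-and-opposite response every time."""
    return -push_force

# Identical input, identical output, no matter how often it happens.
assert table_response(5.0) == table_response(5.0)
assert table_response(5.0) == -5.0
```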
At the next level are evolutionary, or second-level, information systems. Random (or at least first-level, non-stimulus-dependent) information activity is assessed by another first-level system and refined towards some destination set by the evaluator system. These systems alter themselves, but cannot be said to choose; the precise path taken is random, and its general direction is bounded by a simple system. But perhaps it can be said they learn. Other than biological species evolution, this also describes some simple machine-learning software models, and some parts of plant growth.
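The structure of a second-level system can be sketched in a few lines: random variation, judged by a fixed first-level evaluator, drifts toward the evaluator’s destination. Everything here (the target, the names, the parameters) is an illustrative assumption of mine, not something from the text:

```python
import random

TARGET = 42.0

def fitness(x: float) -> float:
    # The evaluator is itself first-level: deterministic, never self-altering.
    return -abs(x - TARGET)

def evolve(generations: int = 500, seed: int = 0) -> float:
    rng = random.Random(seed)
    best = 0.0
    for _ in range(generations):
        # Random, non-stimulus-dependent variation...
        mutant = best + rng.uniform(-1.0, 1.0)
        # ...refined toward the destination set by the evaluator.
        if fitness(mutant) > fitness(best):
            best = mutant
    return best
```

The system alters itself (`best` changes), but the path is random and its direction is bounded entirely by the fixed evaluator—it learns, yet it never chooses.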
Third-level information systems are reactive, or associative, systems. These operate by relating simultaneous or temporally-adjacent information inputs, such that previous inputs affect output responses to the same input in the future. This is exhibited in single-stimulus recognition patterns, in Pavlovian (i.e., simultaneous-stimulus) association, and in response conditioning, in which future outputs are dependent on the consequences of previous responses to the same input. This is, to our best understanding, what describes most animals, as well as the current cutting-edge in AI models and a large portion of human cognition.
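The defining property—that prior inputs change future responses to the same input—can be shown with a toy conditioning model. This is a crude sketch under my own assumptions (the class, the update rule, and the threshold are all invented for illustration):

```python
class Conditioned:
    """A toy third-level system: consequences of past inputs reshape future outputs."""

    def __init__(self):
        self.weight = {}  # learned association strength per stimulus

    def respond(self, stimulus: str) -> bool:
        # Output depends on accumulated history with this stimulus.
        return self.weight.get(stimulus, 0.0) > 0.5

    def reinforce(self, stimulus: str, reward: float, rate: float = 0.3):
        # Each consequence nudges the association toward the reward value.
        w = self.weight.get(stimulus, 0.0)
        self.weight[stimulus] = w + rate * (reward - w)

dog = Conditioned()
assert not dog.respond("bell")         # before conditioning: no response
for _ in range(10):
    dog.reinforce("bell", reward=1.0)  # bell repeatedly paired with food
assert dog.respond("bell")             # the same input now yields a different output
```

Unlike the first-level table, the same input no longer guarantees the same output: the system’s history is now part of its state.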
At the fourth level are proactive information systems. These information systems are able to synthesize novel concepts out of accrued associative stimuli, and thus develop reactive strategies to inputs the system has not yet received. That proactive systems are capable of synthesis means they are also necessarily capable of language, of self-awareness and metacognition, of fact-checking—all staples of cognition-based personhood criteria.
This concept of proactive systems is the simplest and most comprehensive criterion for personhood that I’m aware of. Thus far, every conversation I’ve had on the subject has agreed that information systems that fit this criterion are unobjectionably people, and information systems excluded by it are unobjectionably not. It hews to reason and not to common sense, and is far more efficient at doing so than other frameworks with the same goals and results.
(However, it’s not terribly important to me where on this range you place the boundary of personhood, because ‘personhood’ is, even when applied by philosophers, just as lacking in privileged discreteness as ‘table-hood.’ What’s actually relevant to your future decisions is which levels of information system you see as morally responsible, which as morally protected, which as having agency, and so forth—that we’ve bundled those together under ‘personhood’ is not a necessary choice, nor is it even how societies tend to operate. Children are seen as people in some sense—they are protected by moral and legal rules—but whether they have agency or are bound by morality and law depends on context. This kind of separation can be seen in many other groups: prisoners, disabled people, racial minorities, soldiers, corporations. I could argue that almost all injustices in these cases would be remedied by a dogmatic application of proactive-system personhood, but I will not force this on you.)
There is a very significant implication resulting from this, though: that a person and a calculator they’re using together constitute a proactive system as well. Or that one could describe multiple, perhaps a near-infinite number of, proactive information systems, each varying in bounds by subtle degrees, within a single human brain. Or that two people, who consider themselves distinct but who are communicating, together constitute a proactive system as well.
That’s because those implications are true. There is no privileged discreteness for personhood. But you—however you define yourself—are (presumably) a proactive information system. That it may be possible for there to be others inside your brain doesn’t discount this, and that your brain is part of the cumulative information system that is the entire universe does not either. But this does not solve the ethical problem: if so many things are people, and if destroying your calculator brings about the nonexistence of a previously-describable proactive information system, is anything murder? Is everything murder? Does death exist?