An Asteroid, A Date in the Far Future.
Prisoner James A. Corry has been left stranded on an asteroid for four years as punishment after being convicted of murder. Other than James, there is no life on the asteroid. He is given a robot that possesses cognition, rationality, and emotion to assist him. His reality is composed only of the inescapable asteroid prison, the supply ships (his only source of human interaction and sustenance), and the 46 years remaining in his sentence. James is firmly convinced of his innocence, and thus, in his mind, he is being held unjustly and inhumanely.
Soon, in his isolation, James is no longer sure whether, or how, he can know his own existence. He asks himself, "Because what is there left that I can believe in? The desert and the wind? The silence? Or myself? Can I believe in myself anymore?" The first of these James answers affirmatively, as there is no experience of anything else. But he begins to wonder whether his own reality is just an illusion. Like Descartes at the beginning of his Meditations, James is truly alone.
James soon receives a present: a robot who looks, sounds, and acts like a real woman. In the words of the owner's manual:
You are now the proud possessor of a robot built in the form of a woman. To all intent and purpose, this creature is a woman. Physiologically and psychologically, she is a human being with a set of emotions and a memory track, the ability to reason, to think and to speak. She is beyond illness and, under normal circumstances, should have a lifespan similar to that of a normal human being.
(The Manual). This robot begins an eerily non-human exchange with the reluctant James: "My name's Alicia .... My name's Alicia." At first, James decides that Alicia, despite displaying humanity, is only, in fact, mimicking it. He does not recognize the humanity of the robot and considers it a mockery of the form of the human. He asks himself:
Why would-- Why didn't they build you to look like a machine? Why didn't they build you out of metal, with bolts and wires and electrodes and things like that? Why'd they turn you into a lie-- cover you with something that looks like flesh-- give you a face? A face that if I-- if I look at long enough, makes me think-- makes me believe that-- it's a lie! ... You mock me, you know that? When you look at me, when you talk to me, I'm being mocked.
James only begins to accept that Alicia possesses some humanity when he sees she is capable of the thing that would seem most futile to anything not human: emotion. Upon seeing the robot cry in the face of his abuses, he comes to the conclusion that this robot, whether or not it is human, does possess at least some humanity.
Eleven months later, Alicia has integrated into what little human culture James has been able to replicate: learning to play, to eat, and to enjoy the simple pleasures of nature (in this case, the stars). This robot, which before had merely the capacity for humanity, now embodies it as fully as is possible on an asteroid nine million miles from Earth.
When Captain Allenby arrives and delivers the news of James' pardon, the news, while joyous, creates a dilemma. James must decide whether to take Alicia. He wants to, of course, but she weighs more than the fifteen pounds James is permitted to carry aboard.
James, who believes the robot not only possesses humanity but actually is human, decides that it would be immoral not to bring it. (It does not help that he has also become emotionally attached to the robot.) Allenby, however, who has not experienced the humanity of the robot, holds James' original position: that the robot is not human and merely imitates human qualities without possessing any of them. James pleads with Alicia to speak to Allenby to convince him otherwise, but she is only able to muster the robotic "Corry."
Unconvinced of the robot's humanity, Allenby shoots it, revealing the wires and machine nature of the robot. With the "mask" of the robot unveiled, James realizes that "all [he is] leaving behind is loneliness."
This is "The Lonely," an episode of The Twilight Zone, a mind-bending sci-fi TV show that first aired in the late '50s. In "The Lonely," as in many episodes of The Twilight Zone, the main character undergoes a reality-altering twist that sets the stage for him to ask any number of philosophical questions. Specifically, James undergoes an ontological and epistemological crisis regarding the nature of humanity because of the unjust, forced isolation in which he finds himself.
James' idea of the form of woman (and of humanity as a whole) is put in crisis by this robot. His beliefs shift from holding that the robot possesses (some) humanity, to holding that the robot is fully human; by the end, James seems to have come full circle, believing it to have been a bucket of bolts all along. His epistemology is shaken, and he wonders whether the robot was ever really human. We should also observe the subtler epistemological crisis: what does it mean to be human, and can we even know humanity? Any answer to this question invariably carries endless political and ethical effects. "The Lonely" sets the stage nicely for our question of what it means to be human.
An almost cop-out answer to the question of what it means to be human is that humans are merely members of the species Homo sapiens. This answer is, despite its simplicity, very consistent: no matter your age, gender, or race, you are still a member of this species, and thus human. The dictionary would be on your side for this one. However, this answer just poses new questions in other areas.
The "humanities" are those areas of study which investigate human constructs, and this is often cited as what differentiates humans from animals. Examples include philosophy, history, religion, and art. If this is our difference, then what is to be made of extraterrestrial life that may have its own philosophical systems? If entities not of our species possess all the aspects that divide us from the animals, ought we to consider them of equal humanity (value) to ours?
In the context of Latin American philosophy, this question takes on a new significance. The colonizers may have considered the natives subhuman (because of their alienness to Christendom), and the natives (at first) considered the colonizers gods (superhuman). This continent-sized example can serve as a model for our "alien" question.
The fact that neither the colonizers nor the natives were able to immediately recognize the other's humanity should be sign enough that species is not a perfect determiner of humanity. Secondly, just as the colonizers thought of the natives as subhuman, so too might aliens think of us. As inventors of the very concept we ascribe to ourselves, I believe it is safe to say that if a definition can be flipped against us, it is unsuitable.
As such, it would seem that, at the very least to be consistent, we should also credit alien life with humanity. However, biology as the basis of humanity conflicts with this. While you could still argue that being human is simply a biological state, you would still have to attribute a certain value to sentient alien life, which, at its essence, seems no different to me than humanity. At the very least, it makes "humanity" a disingenuous word that should be replaced with one that can be applied regardless of species. It would seem that we are drawing a line arbitrarily.
However, even if you accept that possessing "humanities" makes people human, you must still ask what a humanity is, and whether there are humanities we have not yet invented or discovered. Since humanities are constructs, if we presuppose sentient alien life, there are certainly humanities we do not know about. We must also consider the reverse perspective: do we have humanity in the eyes of these aliens?
In philosophy, it is best not to presuppose anything, but for the purpose of answering this question, I will presuppose my (and your) humanity (I will question it later). If we possess humanity, yet we do not possess all the humanities, one of two things must hold: either the humanities are a symptom of what truly gives us humanity, or there is a scale of humanity weighed by which, or how many, humanities an entity or society has. I will set aside the second option because it seems impossible to objectively value and compare the humanities (especially if we might not know all of them), meaning that any determination is useless.
Let's address the first option, then. If there is an underlying condition to our humanity, it would be safe to assume that it is the backbone of our humanities. I propose that "reason" is this backbone. It seems the most evident candidate, and Kant would agree with me, for he believes that the human is "an animal endowed with the capacity of reason." Politics and ethics are the two fields most affected by the results of our inquiry: in society it seems pointless for a non-reasoning agent to be morally or legally culpable. Should reason be the definer, then humanity itself seems a scale on which more reasonable people are more human. We could always posit that there is a certain level of reasonableness that makes one human, and that any further reasonableness, while useful, is immaterial to this question. Kant writes on this:
[The human is] markedly distinguished from all other living beings by his technical predisposition for manipulating things (mechanically joined with consciousness), by his pragmatic predisposition (to use other human beings skillfully for his purposes), and by the moral predisposition in his being (to treat himself and others according to the principle of freedom under the laws).
That, however, is a controversial place to be. Firstly, there is the obvious question of who gets to decide whether an entity meets these standards. Is the standard objective, or relative to a society? If the former, it seems (ironically) that our reason is incapable of settling the matter. We also run into the issue of time: if a person was, or will be, reasonable, do they have human value? This question implicates those not yet born, young children, those in comas, the elderly, and others who fall into this category. We must also apply the perspective of the inferior: perhaps, in comparison to another entity, we are less reasonable, and thus not up to the standard ourselves. This argument applies ad infinitum, to all orders of beings (except infinite ones, meaning perhaps only an infinite being is a human being). It seems to me that this area is too murky to draw any conclusion upon which we can act.
If the standard is relative to a society, what of inter-societal relations? Referring back to the colonization of the Americas, Sepulveda justified conquest with his assessment that the Western world is more reasonable. This presents numerous problems for our discussion. Firstly, are the Europeans more reasonable, and if they are, are they so inherently, or by virtue of development? The first claim is at the very least egotistical, and both are completely unfounded: in many ways, Western society has been behind other societies (just as it has also been ahead). There is no way one can make an objective determination.
Since we cannot draw a line at which reason confers humanity, perhaps humanity is a scale with reasonableness as the measure. It does seem "reasonable" to suggest that a dog has more humanity than a rock, but what does this say of the pre-born, children, those in comas, the mentally challenged, and the elderly? Some may find it easy to suggest that "normal" people are more human. Once again, flipping the perspective is useful: what of every man who is smarter than we are? President Lincoln expressed this best:
You mean the whites are intellectually the superiors of the blacks, and, therefore have the right to enslave them? Take care again. By this rule, you are to be slave to the first man you meet, with an intellect superior to your own.
Furthermore, what does this say of artificial intelligence (such as Alicia, or something perhaps greater)? Should its reasonability outpace our own, we would soon find ourselves lesser in humanity than a creation of humans. While AI has some amazing accomplishments (it can already play some board games far better than any human), this is not currently an issue. However, computer technologies have made massive strides, and their growth seems only exponential; thus, this is a valid consideration.
The other could-be "determiner" of humanity is emotion. This is far more convoluted than reason (itself already a hard thing to pin down and wrestle with). What is emotion? Is it chemical signals sent to your brain? In order for any action to take place, emotion must work through cognition and reason (which, unlike emotion, is an act of synthesizing new ideas). If this is the case, emotion seems a sandy foundation for our answer.
I have thus far ignored one prominent theory: that humanity is God-given in the form of a soul. This raises its own questions. Firstly, can aliens have souls? In the Christian context, the jury is still out. For Christians, would aliens (and AI, for that matter) be made in the image of God (though technically, God is omnipresent, meaning it is impossible not to be in his image)? Are aliens fallen as we are? Is all intelligent life doomed to depravity?
The best answer I have read (a theory provided by the Catholic Church) is that intelligent life (like us) would have transphysical souls. Father Spitzer says on this:
Before that, from 300,000 to 75,000 years ago, our ancestors were basically eating bananas and cracking rocks. But, suddenly, they became civilized. My thought is that 70,000 years ago, our ancestors got a soul. The real Adam and Eve became ensouled. ... We have a transphysical soul. It causes us to be able to do math without algorithms. We have math intuitions that are free of rules. We have conceptual ideas that are not experienced in the outer world.
(National Catholic Register) This, however, just sounds like a funny way of saying monkeys became more reasonable, and a soul can be as easily eliminated from this picture as inserted. Many religious people suggest that life forming on any planet has a very low probability, and that it is thus unnecessary to come to any determination on this matter until it becomes actual. The science disagrees. Regardless, philosophy considers all cases and must still inquire. Bertrand Lloyd's quote best sums up this dilemma of reason and soul:
Deny reason to animals, and you must equally deny it to infants; affirm the existence of an immortal soul in your baby or yourself, and you must at least have the grace to allow something of the kind to your dog.
As my last proposition to inspect, one could assert that humanity does not exist or is simply meaningless: that it is just an adjective we give to those entities it suits us to give it to. While I am strongly against this argument, it does provoke the question of whether "humanity" is even the right word to use. Kant, for example, brings up autonomy, "an individual's capacity for self-determination or self-governance." This concept of autonomy, while not providing any intrinsic value to any being, can be used to argue for the blank cases (such as the person in a coma, or asleep). As an example, the moral law can be determined via the categorical imperative, which can only be used properly if one has, or can access, autonomy. However, maxims approved by the categorical imperative can still apply to non-autonomous entities, which matters because the sleeping person may have no autonomy insofar as he is sleeping.
I conclude that I am simply not knowledgeable enough, and perhaps not reasonable enough, to come to an answer. I believe that reason must be a determiner of some kind, but its provisions and limitations are presently unknown to me.