Blade Runner and Artificial Intelligence: Examining the Ethical Status of Thinking Machines

I actually wrote a scholarly paper on Blade Runner. Mull that over for twenty seconds, and thank whatever force or motivation led you away from Philosophy. This was for the legendary Peter Caws’ Philosophy and Film course. I’d like to think he saw my tongue firmly planted in my cheek here, but if you never take it out, well, something something “don’t make faces, it’ll get stuck.”

Or something. Anyway, here it is:

Intent: As presented in Blade Runner, the artificial intelligences are autonomous, have the capability for rationality, and demonstrate empathy for others of their kind. In fact, the characters of Roy, Pris, and Leon almost form a warped image of the traditional nuclear family. The actions of the “replicants,” as well as the existence of the memory-implanted replicant Rachel, pose a number of questions as to the status of these creatures. The object of this paper, therefore, is to analyze the conceptual existence of one of these “replicants” in terms of its ethical and metaphysical ramifications. In essence, this paper will answer one main question: what is the ethical status of an artificially intelligent replicant?

Presuppositions/Assumptions: This paper needs to assume only a few things in order to be properly argued. Firstly, this paper assumes that human society has reached a point in technological development where artificial intelligences are greater than or equal to humans in terms of their processing capabilities. Blade Runner places this period in 2019, whereas Ray Kurzweil postulates that it will occur roughly in the late 2030s (Kurzweil 324). For the purposes of this paper, the timing is unimportant; it is the condition of technological advancement itself that serves as the bedrock of the argument.

As well, this paper shall assume that, as near as science can tell, functionalism is the most accurate and up-to-date theory of the mind and brain. Though the specific reasoning behind the usage of functionalism in the context of thinking machines will be delineated later, it is essential to the argument as the theory puts organic and inorganic thinking subjects on equal cognitive footing–there is no flesh-based bias. 

Finally, this paper shall assume that, considering the potential real-world implications of the theories of Kurzweil and Moravec, “replicants” will be able to pass a real-life version of the Voight-Kampff test, or its philosophical analogue, the Turing Test. In other words, these artificial intelligences are completely indistinguishable from humans, save for the materials that comprise them.

In addition to the presuppositions, this paper shall, for the duration, refer to these “replicants” as artificial intelligences. This is done both to forestall definitional confusion and to allow the word “replicant” to be used to discuss the characters and technology of Blade Runner specifically, independently of the futurist predictions of Kurzweil et al.

Argument: In order to begin arguing any point regarding ethical or metaphysical considerations of artificial intelligences, the ontology of consciousness must first be ascertained. It is a function of both folk psychology and human arrogance that we hold it to be a common-sense truth that human consciousness is unique and special, but scientific fact–as near as it can convey truth up to this point–holds that consciousness is simply a complex schema of interacting mental states. This view, called functionalism, is the basis upon which any argument of the status of artificial intelligences must be made (Putnam 73). 

Popularized and largely codified by mind/brain theorist Hilary Putnam, functionalism is a theory of the mind which holds that consciousness is constituted by mental states (an umbrella term for beliefs, drives, emotions, and reactions to stimuli), and that these states are defined by their roles, which is to say by their causal relations to other mental states, to sensory inputs, and to behavioral outputs (Putnam 73). This philosophical conception of consciousness not only reflects the best possible philosophical analysis of current neuroscience, but also opens the door for artificially-forged intellects to have consciousness.

Putnam’s variant of functionalism, commonly called Turing Machine Functionalism, is, quite literally, grounded in the postulations of the first man to seriously ponder the existence of thinking machines. In his seminal paper, “Computing Machinery and Intelligence,” Alan Turing proposes that a structured exchange between an interrogator and a machine can be used to determine whether or not machines think (Turing 90). Working through a logical progression, Turing forces the conclusion that his test, the eponymous “Turing Test,” would be sufficient, as denying it leads to a position of solipsism (Turing 94).

In applying this to functionalism, Putnam takes the logical preconditions of Turing’s test and applies them to the functionalist postulation that consciousness is a function of corresponding mental states. In an ingenious synthesis of Turing’s theories, Putnam’s mental states can be defined by the rules in a Turing machine’s transition table (Putnam 77). If mental states can be defined and categorized in this Turing-style fashion, Putnam concludes, then an artificial construct, one of Turing’s thinking machines, would be able to possess mental states, and thus, consciousness.

So, if we are working under the presupposition that functionalism is the mind/brain theory with the greatest scientific and philosophical “correctness” of its time (as stated in the presuppositions section), then a few logical statements can be made. The logical extension of Turing Machine Functionalism is that the states that make up human consciousness can be replicated in other mediums–be they artificial or organic (Putnam 78). Further, this conclusion leads to another–the apex of functionalist theory: that a complete human brain, as well as its accompanying perception of its own existence, could be fabricated.
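To make the machine-table idea concrete, here is a minimal sketch, purely illustrative and not drawn from Putnam, Turing, or the film: a handful of hypothetical “mental states” (every state, stimulus, and output name is invented for the example) defined by nothing except where each stimulus sends them, realized twice on nominally different substrates.

```python
# Minimal sketch of the Turing Machine Functionalism idea: states are defined
# purely by their causal roles (the transitions), never by what realizes them.
# All state, stimulus, and output names below are hypothetical examples.
from dataclasses import dataclass

# (current state, stimulus) -> (next state, output)
MACHINE_TABLE = {
    ("content", "pinprick"): ("pain", "wince"),
    ("content", "food"):     ("content", "smile"),
    ("pain", "relief"):      ("content", "sigh"),
    ("pain", "pinprick"):    ("pain", "wince"),
}

@dataclass
class Realization:
    """Anything that tracks a state and follows the table realizes the same
    functional organization (the multiple-realizability claim)."""
    substrate: str          # "carbon", "silicon", etc.; purely descriptive
    state: str = "content"

    def step(self, stimulus: str) -> str:
        self.state, output = MACHINE_TABLE[(self.state, stimulus)]
        return output

if __name__ == "__main__":
    human = Realization("carbon")
    replicant = Realization("silicon")
    for stimulus in ["pinprick", "relief", "food"]:
        # Identical stimuli yield identical transitions and outputs,
        # regardless of substrate.
        print(stimulus, human.step(stimulus), replicant.step(stimulus))
```

The toy proves nothing on its own; its only purpose is to show that the table makes no reference to neurons or circuits, which is precisely the sense in which functionalism puts organic and inorganic thinkers on equal footing.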

Now that the logical basis for argument, that an artificial construct could be functionally equal to or greater than a human in terms of its consciousness, has been established, the other two questions of importance can be addressed. The first question of note regards our relationship with the constructs: what is the ethical status of the artificial intelligences? This question, unfortunately, is contingent upon the ontology of subjective consciousnesses. A super-intelligent computer may be capable of rational thought, but should it be grouped in with the artificial intellects under discussion here?

In this case, in order to answer the question regarding their ethical status, we must first ascertain their metaphysical status: specifically, whether or not we can group these creations of man in the same category as their creators. 

This question first requires that we define what “human” is, in this context. A number of arguments have been made as to the nature of man over the course of history, from Plato to Freud, but none has ever managed to stay completely true for the duration of its existence. In this case, the most philosophically sound definition of man’s nature comes from Jean-Jacques Rousseau in Emile.

In the text, Rousseau purports to define the best way to raise a child; in the process, however, he states that “We do not know what our nature permits us to be” (Rousseau Book I). In this, Rousseau presents the modernist view of humanity as a malleable social construct. By Rousseau’s estimation, the conception of what “human” is did not exist in the far-back stretches of our shared genetic history, and it has certainly shifted from society to society. Therefore, Rousseau concludes that an overarching, absolute definition of the human is philosophically moot. Humanity, then, is whatever a society deems it to be at that point in time. Though seemingly relativistic, this view has never seen fit to include pigs, rocks, trees, or other non-rational “subjects,” so it remains an acceptable definition of humanity against which to measure the artificial intelligences.

This position, that humanity has no real inherent nature, is integral to determining the “human-ness” of the artificial intelligences, as well as of Blade Runner’s replicants. Eldon Tyrell, in a moment of self-satisfied gloating, reveals to Deckard that “[c]ommerce is our goal here at Tyrell. ‘More human than human’ is our motto” (Scott). This statement, while logically paradoxical, is actually a fascinating look into Tyrell’s intent regarding the functional capabilities of replicants. Tyrell wants them to be completely indistinguishable from humans, and without the aid of the Voight-Kampff test, they would be. He even goes so far as to implant memories into the Rachel-model replicant, causing her to believe (contrary to Asimov’s laws) that she is an autonomous, flesh-and-blood human.

In fact, Deckard requires an upgraded and more complex variant of the Voight-Kampff test when confronted with the aberration that Rachel presents. Probing Rachel with the following query: “One more question. You’re watching a stage play. A banquet is in progress. The guests are enjoying an appetizer of raw oysters. The entree consists of boiled dog,” Deckard is only able to pick her apart when she displays more empathy for the oysters than for the dog (Scott). This, according to Deckard, proves that she is faking her emotional response. At first glance, this is a simple case of a more complex replicant requiring a more complex test, but the criterion on which the test hinges, that she shows more empathy for one creature than for another, forces a philosophically troubling conclusion: the only distinction between humans and replicants, in a functional sense, is a difference in empathy for non-rational organisms. If this is the only true difference, then why even make the distinction in the first place?

Rachel, as opposed to the trifecta that invades Earth, is the replicant who raises the most intriguing questions about the nature of a human. She wholly and fully believes herself to be a human, to the point that she is shocked by the revelation of her true identity. At the point at which we, as humans, have created an artificial intelligence that can pass as human and believes itself to be such, is it even philosophically important to distinguish between the two? In this case, it is a fool’s errand to find a meaningful distinction between the two: while we cannot distinguish them on the basis of cognitive capability, we can in terms of their organic nature–a distinction that matters little when dealing with autonomous conscious subjects.

When translated to the artificial intelligences in general, this particular quandary becomes even more convoluted, as the computer-minds could be either completely fabricated or, as postulated by Hans Moravec, complete artificial reconstructions of a human consciousness–a posthuman being (Moravec 13). Moravec holds that once neuroscience has progressed to the point where mapping every impulse and mental state in the brain is possible, a brain could be scanned via its chemical and electrical impulses (Moravec 13). This scan would then be transferred to some form of storage hardware. Consciousness continues, and the autonomous subject experiences only a shift in perspective and a change in form (Moravec 13). In the case of an inorganic continuation of a past human consciousness, how can we fairly say that this being does not warrant definitional status as a human? This is why, simply and flatly, an all-encompassing definition of humanity cannot be applied–artificial intelligences in the real world, and replicants in Blade Runner, present too many meaningful aberrations for an absolute definition of man’s nature.

Now that the conventional definition of “human,” as well as that definition’s bearing on the status of artificial intellects, has been called a wash, the only question that remains is the one of ethical consideration. As asked earlier, the ethical question regards our treatment of them vis-à-vis our own ethical schemas. If the replicants/artificial intelligences are viewed as an emergent and divergent social class, one that suddenly appears and begins to clamor for ethical consideration within society, then we can assess the course of action that society must take in regarding artificial intelligences as moral agents. To do this, we must first lay out the preconditions for ethical interaction between moral agents as they currently stand, and then apply the ontological conclusions made above to a hypothetical ethical exchange between a conventional human and an artificial intelligence.

The first step in this process is determining the relative rationality of artificial intelligences. This must be done because most ethical standards begin with a presupposition of rational agents interacting. In this case, we can clearly see that all of the proposed artificial intelligences meet the base criterion of rationality. They are, by definition, rational–even coldly so. It is tautological, therefore, to even ask after the rational status of thinking machines.

The next benchmark that must be met in an ethical interaction is the potential for deprivation of resources or life. Since the advent of the first codified social contract in Hobbes’ Leviathan, all ethical interactions in a civilized society revolve around resources and life, and the protection thereof on a personal level. As stated in the Leviathan: “So that in the nature of man, we find three principal causes of quarrel. First, competition; secondly, diffidence; thirdly, glory” (Hobbes 37). Translating this idea to the current argument, Hobbes’ statement holds that ethical interactions are initiated in order to curb the three traits: competition, diffidence, and glory. The three traits, according to Hobbes’ social theory, arise when there is scarcity of resources and a requirement that personal property and autonomy are upheld (Hobbes 42). Therefore, in order to qualify for an ethical interaction, the artificial consciousnesses must display a propensity for loss or for wanting. 

In Blade Runner, this is clearly laid out by the odyssey that Roy leads his non-traditional family on. Roy’s journey is for an extended life, and the very fact that he seeks more of his life, such as it is, displays his ability to fear and want for his own existence. In the simplest terms, the organism that feels the pain and fear of its own end is one meriting ethical consideration, and in this case, Roy clearly displays such mental states. In Rutger Hauer’s ad-libbed final monologue, he lays bare the reasoning behind a machine’s craving for guaranteed life: “I’ve seen things you people wouldn’t believe. Attack ships on fire off the shoulder of Orion. I’ve watched c-beams glitter in the dark near the Tannhäuser Gate. All those moments will be lost in time, like tears in rain. Time to die” (Scott).

Counter-arguments: On the other hand, the artificial intelligences proposed in the futures envisioned by Moravec and Kurzweil are slightly more problematic. One of Kurzweil’s postulations regards the potential “immortality” of human consciousness: that the thinking subject, regardless of its corporeal form, will be able to persist through technological advancement. The supposed immortality of AIs throws a wrench into the idea that ethical consideration requires the propensity for loss. A perfectly independent, self-sustaining artificial consciousness or continued human consciousness would not be subject to mortality, and mortality is the basis of ethical consideration. The machines, in this case, would have to operate on a completely different ethical schema, a meta-ethics, even. Machines of such status would probably not view humans as equals, and would therefore knock down another precondition for ethical interaction.

Indeed, the thinking machines might be an aberration that traditional ethics simply cannot deal with. If the basis for ethics is interaction, and the machines view humanity as simply unworthy of said interaction, then ethics, as a concept, would completely disintegrate. 

Conclusion: In the end, it is folly to attempt to guess at the potential actions of hypothetical machines in the future. Though there is a very real chance that the machines will refuse to regard mankind with the level of ethical regard that we extend to one another, it is more rational, at this juncture, to assume that artificial intelligences, replicant or no, will be programmed with the capability for empathy, and that the artificial continuations of human consciousness–the posthumans–will possess, if nothing else, a nostalgic regard for their body-bound compatriots. At the point at which the machines are imbued with a regard for life (even if they have no need to regard their own), they are worthy of ethical interaction on the level that humans enjoy. There is no ethically correct reason, if functionalism is accepted and the artificial intellects have been shown to be ontologically equal to their organic counterparts, to deprive them of consideration. In terms of the application to Blade Runner, this means that Roy is perfectly justified in seeking an extension to his pale imitation of a human life. Fittingly, it is the minor character of Gaff, played by a subdued Edward James Olmos, who asks the pertinent question to conclude such a discussion. Looking over the situation that Deckard has gotten himself into with Rachel, Gaff remarks, “It’s too bad she won’t live. But then again, who does?” (Scott). What is remarkable is just how prescient a throwaway line can be in the context of this issue.