Untitled Mind, Brain, AI Final

Wherein Trevor attempts to convince the reader to grant an AI construct ethical consideration. Sophomore year; reads like it. Laughs abound. 7/10.

As humanity marches headlong into the future, problems sometimes surface that seem at first glance to be absolutely insane. An excellent example of this in contemporary computer science is the prospect of artificial consciousnesses. As humans advance in neuroscience and computer science, the potential of recreating, completely from scratch, a working human consciousness is no longer the domain of science fiction.

In 1997, world chess champion Garry Kasparov was defeated by an IBM computer nicknamed “Deep Blue.” Futurist Ray Kurzweil predicted this, as well as many other relevant benchmarks on the way to creating a human consciousness (Kurzweil 123-7). Accepting at face value the veracity of his past predictions, Kurzweil postulates that “[by 2045] The nonbiological intelligence created in that year will be one billion times more powerful than all human intelligence today” (Kurzweil 136). This, whether one finds it believable or not, is an issue that heavily warrants philosophical analysis.

In essence, this future reality, the one proposed by Kurzweil’s mathematical prognostications, will be one wherein man and machine can, and must, meet each other in some degree of personal interaction. The fact that this reality is possible forces us, as a race, to ask ourselves how we shall and should treat these children of humanity. The boiled-down version of the question, then, is this: how shall we treat and regard these artificial consciousnesses?

In order to begin arguing any point regarding ethical considerations of artificial intelligences, the ontology of consciousness must first be ascertained. It is a function of both folk psychology and human arrogance that we hold it to be a common-sense truth that human consciousness is unique and special, but scientific fact, as near as it can convey truth up to this point, holds that consciousness is simply a complex schema of interacting mental states. This view, called functionalism, is the basis upon which any argument about the status of artificial intelligences must be made (Putnam 73).

Functionalism must be utilized in this case for two reasons. The first is a simple matter of pragmatic argumentation. Without the bedrock of functionalism, the conclusion that consciousnesses can be artificially replicated or created becomes tenuous. Other theories of consciousness hold that humans possess certain “qualia,” or subjective experiences, that make complete replication impossible. Plainly, any mind/brain theory that discounts the possibility of artificial consciousness cannot be addressed within the parameters of this argument without negating it in its entirety. Luckily, however, the second reason for utilizing functionalism is its broad acceptance in the scientific community. Though it is intellectually dishonest to make sweeping assertions about entire schools of thought, as a demonstration, a simple internet search using the keywords “functionalism” and “cognitive science” shows that functionalism and its tenets provide much of the scientific and experimental basis for modern advancement in the field. Functionalism is, then, the most scientific theory on offer, and, when dealing with a topic of scientific futurism, the most pertinent one to apply to the thought experiment.

Popularized and mainly codified by mind/brain theorist Hilary Putnam, functionalism is a theory of mind which holds that consciousness is defined by mental states (an umbrella term for beliefs, drives, emotions, and reactions to stimuli), and that these states are equivalent to their functional roles, that is, to their causal relations to other mental states (Putnam 73). To put the idea in a pertinent metaphor: by Putnam’s reckoning, the mind/brain problem is really a software/hardware problem. The mind is a program that, independently of the hardware, can be run on differing media. This philosophical conception of consciousness not only reflects the best possible philosophical analysis of current neuroscience, but also opens the door for artificially forged intellects to have consciousness.
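
For the technically inclined reader, the software/hardware metaphor can be made concrete with a short sketch. The Python snippet below is purely illustrative, with every name invented rather than drawn from Putnam: it runs one and the same “mind,” a bare mapping from stimulus to response, on two pretend substrates, and checks that the behavior is identical on both media.

```python
# A minimal sketch of the software/hardware metaphor. All names here
# are illustrative assumptions, not anything from Putnam's writing.

def mind(percept: str) -> str:
    """The 'program': a substrate-independent mapping from stimulus to response."""
    responses = {"greeting": "wave back", "threat": "flee"}
    return responses.get(percept, "observe")

class OrganicSubstrate:
    """Stand-in for biological hardware."""
    def run(self, program, percept: str) -> str:
        return program(percept)

class SiliconSubstrate:
    """Stand-in for digital hardware: different internals, same behavior."""
    def run(self, program, percept: str) -> str:
        result = program(percept)
        print(f"[silicon] {percept} -> {result}")
        return result

# Identical responses on both media: the mind is the function, not the medium.
for substrate in (OrganicSubstrate(), SiliconSubstrate()):
    assert substrate.run(mind, "threat") == "flee"
```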

Putnam’s variant of functionalism, commonly called Turing Machine Functionalism, is, quite literally, based on the postulations of the first man to ponder the existence of thinking machines. In Alan Turing’s seminal paper, “Computing Machinery and Intelligence,” Turing postulates that a reduction of an interaction between an interrogator and a machine can be used to determine whether or not machines think (Turing 90). Going through a logical progression, Turing forces the conclusion that his test, the eponymous “Turing Test,” would be sufficient, as denying it leads to a position of solipsism (Turing 94).

In terms of applying this to functionalism, Putnam takes the logical preconditions of Turing’s test and applies them to the functionalist postulation that consciousness is a function of corresponding mental states. In an ingenious synthesis of Turing’s theories, Putnam’s mental states can be defined by the rules in a Turing machine’s transition table (Putnam 77). Again referring to the metaphor of the software program, the mind is simply a series of instructions to be executed by the responding hardware. If mental states can be accurately defined and categorized through Turing-style testing, Putnam concludes, an artificial construct, one of Turing’s thinking machines, would be able to possess mental states, and thus, consciousness.
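
What such a transition table amounts to can likewise be sketched in a few lines of code. In the toy model below, a mental state is exhausted by its role: for each stimulus, the table yields a behavior and a successor state. The state and stimulus names are invented placeholders for illustration, not anything drawn from Putnam’s paper.

```python
# A toy "machine table" in the spirit of Turing machine functionalism.
# Each mental state is identified entirely by its role: what it outputs
# and which state it transitions to, for every possible input.

MACHINE_TABLE = {
    # (current state, stimulus) -> (behavior, next state)
    ("content", "food present"): ("ignore food", "content"),
    ("content", "no food"):      ("do nothing",  "hungry"),
    ("hungry",  "food present"): ("eat",         "content"),
    ("hungry",  "no food"):      ("seek food",   "hungry"),
}

def step(state: str, stimulus: str) -> tuple[str, str]:
    """One transition: the state's identity just *is* this row of the table."""
    return MACHINE_TABLE[(state, stimulus)]

state = "content"
for stimulus in ("no food", "no food", "food present"):
    behavior, state = step(state, stimulus)
    print(f"{stimulus!r} -> behavior={behavior!r}, now {state!r}")
```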

So, if we are working under the presupposition that functionalism is the most scientifically and philosophically “correct” mind/brain theory of its time, then a few logical statements can be made. The logical extension of Turing Machine Functionalism is that the states that make up human consciousness can be replicated in other media, be they artificial or organic (Putnam 78). Further, this conclusion leads to another, the apex of functionalist theory: that a complete human brain, as well as its accompanying perception of its own existence, could be fabricated.

Now that the logical basis for argument has been established, namely that an artificial construct could be functionally equal to or greater than a human in terms of its consciousness, the other two questions of importance can be addressed. The first question of note regards our relationships with the constructs: what is the ethical status of the artificial intelligences? This question, unfortunately, is contingent upon the ontology of subjective consciousnesses. A super-intelligent computer may be capable of rational thought, but should it be grouped in with the conscious artificial intellects?

In this case, in order to answer the question regarding their ethical status, we must first ascertain their metaphysical status: specifically, whether or not we can group these creations of man in the same category as their creators. 

This question first requires that we define what “human” means in this context. A number of arguments have been made as to the nature of man in the history of the world, from Plato to Freud, but none have ever managed to stay completely true for the duration of their existence. In this case, the most philosophically correct definition of man’s nature comes from Jean-Jacques Rousseau in Emile.

In the text, Rousseau purports to define the best way to raise a child; in the process, however, he states that “We do not know what our nature permits us to be” (Rousseau Book I). In this, Rousseau presents the modernist view of the definition of humanity as a malleable social construct. By Rousseau’s estimation, the conception of what “human” is did not exist in the far-back stretches of our shared genetic history, and it has certainly shifted from society to society. Therefore, Rousseau concludes that an overarching, absolute definition of “human” is philosophically moot. Humanity, then, is whatever a society deems it to be at a given point in time. Though seemingly relativistic, this view has never seen fit to include pigs, rocks, trees, or other unconscious “subjects,” so it is still an acceptable definition of humanity against which to measure the artificial intelligences.

At the point at which we, as humans, have created an artificial intelligence that can pass as human and believes itself to be such, is it even philosophically important to distinguish between the two? In this case, it is a fool’s errand to find a meaningful distinction between the two: while we cannot distinguish them based on cognitive capability, we can in terms of their organic nature, a distinction that matters little when dealing with autonomous conscious subjects.

This, of course, deals with the artificial consciousnesses in the abstract. When translated to potential artificial consciousnesses in the specific, this particular quandary becomes even more convoluted, as the computer-minds could be either completely fabricated or, as postulated by Hans Moravec, complete artificial reconstructions of a human consciousness: posthuman beings (Moravec 13). Moravec holds that, once neuroscience has progressed to the point where mapping every impulse and mental state in the brain is possible, a brain could be scanned via its chemical and electrical impulses (Moravec 13). This scan would then be transferred onto some form of storage medium. Consciousness is continued, and the autonomous subject truly experiences only a shift in perspective and a change in form (Moravec 13). In the case of an inorganic continuation of a past human consciousness, how can we fairly say that this being does not warrant definitional status as a human? This is why, simply and flatly, an all-encompassing definition of humanity cannot be applied: artificial intelligence presents too many meaningful and considerable aberrations to an absolute definition of man’s nature.

Now that the conventional definition of “human,” as well as that definition’s effect on the status of artificial intellects, has been called a wash, the only question that remains is the one of ethical consideration. As asked earlier, the ethical question regards our treatment of them vis-à-vis our own ethical schemas. If the replicants/artificial intelligences are to be viewed as an emergent and divergent social class, one that suddenly appears and begins to clamor for ethical consideration within the society, then we can accurately assess the course of action that society must take in regarding artificial intelligences as moral agents. To do this, we must first lay out the preconditions for ethical interaction between moral agents as they currently stand, and then apply the ontological conclusions that have been made regarding the artificial intelligences to a hypothetical ethical exchange between a conventional human and an artificial intelligence.

The first step in this process is determining the relative rationality of artificial intelligences. This must be done because most ethical standards begin with a presupposition of rational agents interacting. In this case, we can clearly see that any of the proposed artificial intelligences would meet the base criterion of rationality. They are, by definition as computing machines, rational, even coldly so. It is nearly tautological, therefore, to refer to the rational status of thinking machines.

The next benchmark that must be met in an ethical interaction is the potential for deprivation of resources or life. Since the advent of the first codified social contract in Hobbes’ Leviathan, all ethical interactions in a civilized society have revolved around resources and life, and the protection thereof on a personal level. As stated in the Leviathan: “So that in the nature of man, we find three principal causes of quarrel. First, competition; secondly, diffidence; thirdly, glory” (Hobbes 37). Translating this idea to the current argument, Hobbes’ statement holds that ethical interactions are initiated in order to curb those three traits: competition, diffidence, and glory. The three traits, according to Hobbes’ social theory, arise when there is scarcity of resources and a requirement that personal property and autonomy be upheld (Hobbes 42). Therefore, in order to qualify for an ethical interaction, the artificial consciousnesses must display a propensity for loss or for wanting.

In this regard, the artificial intelligences proposed in the futures of Moravec and Kurzweil are slightly more problematic for humanity. One of Kurzweil’s postulations regards the potential “immortality” of human consciousness: that the thinking subject, regardless of its corporeal form, will be able to persist through technological advancement (Kurzweil 280). The supposed immortality of AIs throws a wrench into the idea that ethical consideration requires the propensity for loss. A perfectly independent, self-sustaining artificial consciousness or continued human consciousness would not be subject to mortality, and mortality underwrites that precondition of ethical consideration. The machines, in this case, would have to operate on a completely different ethical schema, a meta-ethics, even. Machines of such status would probably not view humans as equals, thereby knocking down another precondition for ethical interaction.

Indeed, the thinking machines might be an aberration that traditional ethics simply cannot deal with. If the basis for ethics is interaction, and the machines view humanity as simply unworthy of said interaction, then ethics, as a concept, would completely disintegrate. 

However, accepting this claim, that the artificial consciousnesses would prima facie view humanity with disdain, is stifling to discussion. Because there is no particular reason to believe that they would see fit to transcend the dominant race of Earth, it is more rational to assume that they would attempt to interact ethically with humanity. This question, that of how we deal with machines once we add the precondition that they will actually interact with us, is far easier to answer.

Considering the potential for discrimination based not on the content of one’s character but on the materials of one’s brain, a modern political theory can be looked to as a model for an ethical construct. In his modern classic A Theory of Justice, John Rawls provides the most compelling potential ethical framework for artificial consciousnesses. In the book, he lays out the revolutionary ideas of the original position and the veil of ignorance. The two interrelated ideas essentially boil down to this: when creating any political system, the creators must operate from an original position of absolute anonymity and standardization. The subject must then decide, free of class, race, or gender bias, how to structure the society from behind a veil of ignorance, the veil obstructing his view of others and their social standing (Rawls 11, 47). In this conception of a state, the best possible distribution of liberty and equality is achieved because every decision was made from the original position, a position inherently lacking in any personal, subjective bias.

Applying this to the artificial consciousnesses, any ethical interaction in the posthuman world must be conducted behind a similar veil of ignorance. Accepting the already established postulations, that the machines have ontologically equal or greater conscious capacities, that they have the capacity for mental states of empathy and similar emotions, and that at least some of them could be posthumans who used the Moravec method to shed their mortal coil, we must now create a system that eliminates the possibility that a conscious subject, whether human, posthuman, or completely artificial, would be denied ethical regard.

Behind the veil of ignorance, an interaction between conscious subjects would be devoid of any telling clues as to the identity of a given subject, as per Rawls’ thought experiment. At the point at which ontologically equal conscious subjects are meeting to decide whether or not to treat one another with the ethical regard normally extended to humans, the aforementioned lack of differences means that they would, for fear of depriving themselves of ethical regard, choose to extend it to all conscious subjects. In the original position, their rational, if selfish, want for universal regard would force them into a position of ethical regard for artificial consciousnesses. Because of this conclusion, it is perfectly acceptable to assert that such hypothetical artificial moral agents would be deemed worthy of our ethical regard.
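
For readers who prefer the argument in formal dress, the same reasoning can be rendered as a small decision-theoretic toy model. The sketch below assumes Rawls’ maximin rule, whereby an agent in the original position picks the policy whose worst-case outcome is best, and uses invented payoff numbers purely for illustration; nothing in it is drawn from Rawls’ text.

```python
# A toy formalization of the veil-of-ignorance argument: the agent does
# not know whether it will turn out to be human or artificial, and
# chooses by maximin. Payoff values are illustrative assumptions.

POSSIBLE_SELVES = ("human", "artificial")
POLICIES = ("extend regard to all conscious subjects", "regard humans only")

# Payoff for (policy, what you turn out to be): regarded subjects fare
# well (1); disregarded subjects fare poorly (0).
PAYOFF = {
    ("extend regard to all conscious subjects", "human"):      1,
    ("extend regard to all conscious subjects", "artificial"): 1,
    ("regard humans only",                      "human"):      1,
    ("regard humans only",                      "artificial"): 0,
}

def maximin(policies, selves):
    """Pick the policy that maximizes the worst-case payoff."""
    return max(policies, key=lambda p: min(PAYOFF[(p, s)] for s in selves))

print(maximin(POLICIES, POSSIBLE_SELVES))
# -> "extend regard to all conscious subjects"
```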

In the end, the original question of how we should treat artificial consciousnesses is answered with another question: how would we want to be treated? In applying Rawls’ schema for creating a just government to basic ethical interaction, as well as applying this hypothetical schema to the previously ascertained conclusions regarding the nature of these hypothetical beings, the only rational and ethical position to take on the matter is one of relative liberalism. The word hypothetical is heavily emphasized in the end here, because it is important to note that the entirety of this paper is based on the presupposition of one man’s (albeit thus far very accurate) system. Though this may end up not being a problem that humanity must deal with, speculation on the subject is absolutely necessary to the continued well-being of our race, and potentially, our children’s.

Works Cited

Hobbes, Thomas. Leviathan. A Philosophical Reader. Comp. Peter Caws. 2012. 36-42. Print.

Kurzweil, Ray. The Singularity Is Near: When Humans Transcend Biology. New York: Viking, 2005. Print.

Moravec, Hans P. Robot: Mere Machine to Transcendent Mind. New York: Oxford UP, 1999. Print.

Putnam, Hilary. “The Nature of Mental States.” Philosophy of Mind: Classical and Contemporary Readings. Ed. David J. Chalmers. New York: Oxford UP, 2002. 73-79. Print.

Rawls, John. A Theory of Justice. Cambridge, MA: Belknap of Harvard UP, 1971. Print. 

Rousseau, Jean-Jacques. Emile: Or, On Education. New York: Basic, 1979. Print.

Turing, Alan M. “Computing Machinery and Intelligence.” A Philosophical Reader. Comp. Peter Caws. 2012. 89-95. Print.