Magnum Opus (Prosem Final 2/2)

This is 4 years of consideration, thought, research, and–ultimately–fear. I willingly abandon all that I say here, but do not discount any of it. (Sidenote: it’s funny to watch the techno-optimism re: The Singularity slowly erode over my college experience, until you get here, where I literally use the anthill as a metaphor for the best-case scenario for humans once an ASI is created.)

Reconciling Human Rights with Transhumans:

An assessment of the effect of transhuman organisms on the concept of human rights, and correspondingly, human beings.

 

Introduction

Human society, despite experiencing bumps along the way, has been on a consistent trajectory toward providing more and more basic rights to humans on the simple basis of their humanity. Since the Enlightenment, the value placed on human dignity has skyrocketed, to the point where, today, “human rights” are privileged legal, social, and philosophical concepts.

This development, whether one subscribes to a constructionist or naturalist conception of human rights, has been remarkable to track along the trajectory of history. Indeed, echoing Fukuyama’s too-early pronouncement that society had reached the “end of history,” it appears as though the concept of “human rights” is reaching its own end. This is not to say that human rights have stopped being revolutionized or developed, but rather that society has progressed to the point where the so-called end of the project of human rights has a visible goal on the horizon. In the last 50 years alone, the status of ethnic minorities, homosexuals, and women has been categorically brought to “human” level in the western world, and the rest of the world seems destined to follow suit. The list of rights that we conceive of as given to us by dint of our humanity grows greater and greater and is applied to more and more people over the arc of history. In this way, we are reaching the “end” of human rights: we know what we are striving for now.

Of course, upon reaching the end, or at least coming within view of it, something becomes apparent: the overall scope of the concept of human rights is growing, and growing ever more rapidly. If we are reaching the end, and reaching it more and more quickly, one is forced to ask, “what comes next?”

The answer to that question is probably contained in the theoretical underpinnings of transhuman theory. Though the metaphysical and epistemological underpinnings of that theory will be developed later on, it is useful to give a preview of its conclusion, considering its seemingly outlandish quality.

This conclusion—stated simply for brevity’s sake—is that by roughly 2045, humans will create an artificial intelligence construct that, through its achievement of reflexive identity (i.e. “I” statements) and its ability to self-replicate and generate unique knowledge, will attain human or near-human status. This is to say that, by our traditional conceptions of intrinsic human rights stemming from conscious agency, this transhuman will qualify for the protections and responsibilities thereof.[1] In addition, the transhuman is functionally immortal, and that status places a magnet on the ethical compass—after all, all ethical interaction requires the concept of potential harm. These two existential facts, coupled with the necessity of the concept of harm for human rights to meaningfully exist, pose a jarring question to human rights: how do we account for the transhuman?

This paper shall attempt to answer that question, first by describing the existential status of the transhuman. Once that is established, the transhuman and the human shall be compared within the schemas of James Griffin’s and Charles Beitz’s human rights theories, to see whether the theoretical character of the transhuman disrupts traditional human rights theory. Finally, in the case that Griffin’s and Beitz’s theories cannot be salvaged, the paper shall investigate a number of alternative schemas that could replace traditional human rights altogether.

The Ontological Defense of Transhumans as Near-Human

Before actually describing the theoretical scientific character of the transhuman, a complete ontology of consciousness must first be ascertained. This is due to the sheer difference between the transhuman and the human, but it is also important in establishing the baseline similarities that make ethical interaction—and therefore the importance of human rights—between human and transhuman possible in the first place.

That said, it is a function of both folk psychology and human arrogance that we hold it to be a common-sense truth that human consciousness is unique and special. The best current science—as near as it can convey truth up to this point—holds instead that consciousness is simply a complex schema of interacting mental states. This view, called functionalism, is the basis upon which any argument about the status of artificial intelligences must be made.[2]

Functionalism must be utilized in this case as a simple matter of pragmatic argumentation. Without the bedrock of functionalism, the conclusion that consciousness can be artificially replicated or created becomes tenuous. And though functionalist thought appears dominant in neurophysiological fields, it would be inaccurate to assert that it is taken as dogmatic fact. Other theories of consciousness hold that humans possess certain “qualia,” or subjective experiences, that make complete replication impossible. Unfortunately, within the context of this paper, qualia-based objections must be dismissed categorically. This is not to say that such theories lack merit, but the baseline condition for the existence of artificial intelligence relies on functionalism; by extension, this speculative hypothesis relies on it as well. Plainly, any mind/brain theory that discounts the possibility of artificial consciousness cannot be addressed within the parameters of this argument without negating it in its entirety.

Popularized and largely codified by mind/brain theorist Hilary Putnam, functionalism is a theory of mind which holds that consciousness is constituted by mental states (an umbrella term for beliefs, drives, emotions, and reactions to stimuli), and that those states are defined by their roles—that is, by their causal relations to other mental states.[3] To put it in terms of a pertinent metaphor, by Putnam’s reckoning the mind/brain problem is more an issue of software/hardware: the mind is a program that, independently of the hardware, can be run on differing media—which is to say that a brain could, in this case, be created artificially from silicon. This philosophical conception of consciousness not only reflects the best possible philosophical analysis of current neuroscience, but also opens the door for artificially forged intellects to possess consciousness.

Putnam’s variant of functionalism, commonly called Turing Machine Functionalism, is quite literally based on the postulations of the first man to ponder the existence of thinking machines. In his seminal paper, “Computing Machinery and Intelligence,” Alan Turing postulates that the question of whether machines can think can be reduced to an imitation game played between an interrogator and a machine.[4] Working through a logical progression, Turing forces the conclusion that his test, the eponymous “Turing Test,” is sufficient, since demanding anything more ends in solipsism.[5]

In applying this to functionalism, Putnam takes the logical preconditions of Turing’s test and applies them to the functionalist postulation that consciousness is a function of corresponding mental states. In an ingenious synthesis of Turing’s theories, Putnam’s mental states can be defined by the rules in a Turing machine’s transition table.[6] Again referring to the metaphor of the software program, the mind is simply a series of commands to be entered into the responding hardware. If mental states can be accurately defined and categorized in this Turing-machine fashion, Putnam concludes that an artificial construct, one of Turing’s thinking machines, would be able to possess mental states, and thus consciousness.
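To make the machine-table idea concrete, consider a minimal sketch—not drawn from Putnam, with toy state names invented purely for illustration—in which a “mental state” is individuated entirely by its row in a transition table, that is, by what it outputs and which state it moves to given an input, rather than by the material that realizes it:

```python
# Toy illustration of a machine table: (current state, input) -> (output, next state).
# The states and inputs are invented stand-ins, not anything from Putnam or Turing.
MACHINE_TABLE = {
    ("content", "sees_food"):   ("ignore_food", "content"),
    ("content", "time_passes"): ("do_nothing",  "hungry"),
    ("hungry",  "sees_food"):   ("eat_food",    "content"),
    ("hungry",  "time_passes"): ("seek_food",   "hungry"),
}

def run(state, stimuli):
    """Step through a sequence of inputs; each state is defined only by this table."""
    for stimulus in stimuli:
        output, next_state = MACHINE_TABLE[(state, stimulus)]
        print(f"state={state!r}, input={stimulus!r} -> output={output!r}, next={next_state!r}")
        state = next_state
    return state

if __name__ == "__main__":
    run("content", ["time_passes", "sees_food", "time_passes"])
```

On the functionalist picture, anything that realizes this table—neurons, silicon, or something else entirely—counts as occupying the “same” states, which is precisely the door through which artificial consciousness enters.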

So, if we are working under the presupposition that functionalism is the mind/brain theory of the greatest scientific and philosophical “correctness” of its time (or at least, of correctness within the boundaries of this topic), then a few logical statements can be made. The logical extension of Turing Machine Functionalism is that the states that make up human consciousness can be replicated in other media—be they artificial or organic.[7] Further, this conclusion leads to another—the apex of functionalist theory: that a complete human brain, along with its accompanying perception of its own existence, could be fabricated.

Scientific “Realities” of the Transhuman: What is a Transhuman?

Now that the ontological justification for differing matter possessing consciousness has been established, the technological bare bones of the transhuman must be described. The majority of scholarship on this matter has come from two men: Ray Kurzweil, a futurist in the employ of Google, and Hans Moravec, a professor of robotics at Carnegie Mellon University.

Though the two men have competing conceptions of what the transhuman will “look” like, the essential conclusion they share is that the existence of transhuman technology doesn’t just define the ethical status of transhumans; it actively changes the humans interacting with the technology.

Referring first to Kurzweil’s conception, the basic idea is that, aside from completely artificial transhuman “children of humanity,” humans will be able to scan their own brains into some form of external storage. Though this very concept sounds like science fiction, recall that the underpinnings of functionalism make it theoretically possible. According to Kurzweil, “Uploading a human brain means scanning all of its salient details and then re-instantiating those details into a suitably powerful computational substrate. This process would capture a person’s entire personality, memory, skills, and history.”[8]

The process by which this transfer of consciousness is achieved is described in gory detail by Moravec:

 “Your skull, but not your brain, is anesthetized. You are fully conscious. The robot surgeon opens your brain case and places a hand on the brain’s surface. This unusual hand bristles with microscopic machinery, and a cable connects it to the computer at your side. Instruments in the hand scan the first few millimeters of brain surface. These measurements, and a comprehensive understanding of human neural architecture, allow the surgeon to write a program that models the behavior of the uppermost layer of the scanned brain tissue. This program is installed in a small portion of the waiting computer and activated. Electrodes in the hand supply the simulation with the appropriate inputs from your brain, and can inject signals from the simulation. You and the surgeon compare the signals it produces with the original ones. They flash by very fast, but any discrepancies are highlighted on a display screen. The surgeon fine-tunes the simulation until the correspondence is nearly perfect. As soon as you are satisfied, the simulation output is activated. The brain layer is now impotent–it receives inputs and reacts as before, but its output is ignored. Microscopic manipulators on the hand’s surface excise this superfluous tissue and pass them to an aspirator, where they are drawn away….”

“…Then, once again, you can open your eyes. Your perspective has shifted. The computer simulation has been disconnected from the cable leading to the surgeon’s hand and reconnected to a shiny new body of the style, color, and material of your choice. Your metamorphosis is complete. Your new mind has a control labeled “speed.” It had been set at 1, to keep the simulations synchronized with the old brain, but now you change it to 10,000, allowing you to communicate, react, and think ten thousand times faster.”[9]
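Purely to make the structure of this thought experiment explicit, the procedure Moravec describes can be rendered as a loop: scan a layer, model it, fine-tune until the simulation’s output matches the original, hand control to the simulation, and move one layer deeper. The following toy sketch is an assumption-laden illustration—the “brain” is just a list of numbers, and every name, value, and tolerance is invented; it implements nothing like real neuroscience:

```python
import random

TOLERANCE = 1e-3  # arbitrary stand-in for "nearly perfect" correspondence

def make_brain(n_layers=3):
    # Each "layer" is reduced to a single linear response y = w * x,
    # a deliberately crude stand-in for a sheet of brain tissue.
    return [random.uniform(0.5, 2.0) for _ in range(n_layers)]

def transfer(brain):
    """Replace each layer with a simulation verified against the original."""
    simulated = []
    for depth, true_w in enumerate(brain):
        estimate = true_w + random.uniform(-0.2, 0.2)  # imperfect first scan
        probe = 1.0                                    # test input signal
        # Fine-tune the model until its output matches the layer it replaces.
        while abs(estimate * probe - true_w * probe) > TOLERANCE:
            estimate += 0.5 * (true_w * probe - estimate * probe)
        simulated.append(estimate)  # the original layer is now "superfluous"
        print(f"layer {depth}: tissue output {true_w * probe:.4f}, "
              f"simulation output {estimate * probe:.4f}")
    return simulated  # on this toy picture, the mind now runs on the new substrate

if __name__ == "__main__":
    transfer(make_brain())
```

The philosophical weight of the passage lies not in the loop itself but in the claim that continuity of function is continuity of the person—the functionalist premise defended above.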

Though this reality seems bleak and beyond the pale of a science-fiction nightmare, both Kurzweil and Moravec believe that there are few other options facing humanity. Speaking to the inherent limits of humans conversing with the super-advanced transhumans, Kurzweil asserts that “Even among those human intelligences still using carbon-based neurons there [will be] ubiquitous use of neural-implant technology, which provides enormous augmentation of human perceptual and cognitive abilities. Humans who do not utilize such implants [will be] unable to meaningfully participate in dialogues with those who do.”[10] Moravec adds that “Long life loses much of its point if we are fated to spend it staring stupidly at our ultra-intelligent machines as they try to describe their ever more spectacular discoveries in baby-talk that we can understand. We want to become full, unfettered players in this new superintelligent game.”[11]

In addition to this super-intelligence being centered in the subjective single agent, the two men believe that each transhuman, owing to the functionalist theory of multiple differing and corresponding mental states, will be able to “network out” a part of its consciousness. In this state, the transhuman can be imagined as retaining a degree of individuality while more or less plugging its “brain” into a network of functionally infinite size, computing ability, and speed. It is in this state that transhumans bring about the “Singularity,” a phase of human scientific development so exponentially rapid that standard human brains can no longer comprehend it; this is the “superintelligent game” and the “10,000 speed” that Moravec references regarding transhuman consciousness.

The two men seem in complete agreement with one another until the issue of the “shape” of transhumans comes up. With regard to physical form, Moravec believes that the transhuman race would represent a complete paradigm shift, eschewing many of the formerly human characteristics inherent to having a physical body. Kurzweil, on the other hand, believes that there is some importance to the human form that supersedes the cold efficiency of machine life: “Even with our mostly nonbiological brains we’re likely to keep the aesthetics and emotional import of human bodies, given the influence this aesthetic has on the human brain.”[12]

Generally speaking, a picture of the character of the theoretical transhuman has now been sketched: formerly human, potentially still looking like one, imbued with all of the memories and emotional states that we attribute to personhood, but distinct nonetheless—immortal and incomprehensibly intelligent. What does this immortal, near-omniscient god-ling do to the fabric of human rights?

Applying the Transhuman to Traditional Human Rights

Human rights have, thus far, been referenced only in the introduction. Though it seems strange to note, meta-textually, the distribution of disciplines within a given argument, this human rights-oriented argument leans far more heavily on metaphysics, ontology, and science than on codified human rights theory. In fact, this argument shall use only two conceptions of human rights: the schemas of James Griffin and Charles Beitz.

Griffin’s conception of human rights is essentially derived from two grounds: personhood and practicalities. Personhood, to Griffin, is a state characterized by “deliberating, assessing, choosing and acting to make what we see as a good life for ourselves.”[13] More directly addressed in his chapter “The Metaphysics of Human Rights,” personhood as a condition is marked by essential “human-ness”; the rest of Griffin’s metaphysical dialectic justification does not follow without the basis of the existence of a single human agent.[14] Practicalities, according to Griffin, are “empirical information about, as I say, human nature and human societies, prominently about the limits of human understanding and motivation.”[15] Though there are more nuances to his conception—as well as positions that he considers but elects not to pursue—that may be altered by the reality of transhumans, the basic two-pronged thrust of his conception rests on personhood and practicalities. The actual interaction of the transhuman and human rights theory shall be assessed after describing Beitz’s system.

Beitz, differing from Griffin’s conception, provides an account of human rights that is focused more on function than on form. In the introduction of the book arguing for his position, The Idea of Human Rights, Beitz defines human rights through their function, asserting that they serve two roles: they establish norms for state behavior, and—though admittedly tied to a specific spatio-temporal situation—they provide the international community (through the UN, in Beitz’s argument)[16] a lattice against which certain acts can be measured. Continuing with this evolving, “constructivist” notion, Beitz asserts that the foundational documents of human rights are not in the business of outlining rights and guaranteeing them, but rather of revealing and codifying social norms at an international level.[17] In the simplest sense, Beitz’s rights are conceived of as benchmarks for international criticism in public spheres. These norms become observable precisely when deviations from them occur and become matters of international concern.

Another conceptual basis of human rights, the needs-based notion, deserves a small mention in this argument, as it will be addressed later on. Though the conception is not one that Griffin personally subscribes to, the transhuman paradigm shift makes the schema worthy of analysis. Within his book, Griffin attacks this basis for human rights, one whose “central notion is one of normal functioning.”[18] Griffin’s objection concerns the “implausibly lavish” list of rights that this schema could potentially generate.

Now that Beitz’s and Griffin’s schemas have been elucidated, it is time to see what the existence of the transhuman does to them. For Griffin, an immediate snag is hit in his “Practicalities” section when he states that the practicalities ground “gives us a further reason to confine human rights to normal human agents, not agents generally.”[19] To this end, it appears as though Griffin is responding directly to the theoretical character of the transhuman; at least in the case of the non-scanned, completely artificial intelligent construct, it is an agent, and only an agent. In this sense, and taking into account that the personhood criterion defines a stakeholder group that truly needs the concept of human rights as protection, Griffin’s schema seems to leave the transhumans out—their functional immortality disqualifies them.

However, recalling another portion of Griffin’s practicalities, there is perhaps some room for alteration to make room for the transhuman. When Griffin speaks to the “limits of human understanding and motivation,” he is speaking from the position of a human agent writing in the year 2007, and he acknowledges this spatio-temporal handicap. Since these limits change along with the empirical information that provides their foundation—and practicalities are therefore not static—it may be that the reality of transhumans, especially through the paradigm shift of the Singularity, alters the set of practicalities for a given society.

Besides Griffin’s two-pronged approach, his critique of the needs-based notion is also altered by transhumans. Again taking into account Griffin’s own practicalities ground, the realities of transhuman scientific (and therefore economic) development may render the objection of “implausible lavish[ness]” inert. The application to transhumans is self-evident; the transhuman’s needs rest on a premise of functional immortality and are, therefore, minimal. Beyond the transhumans themselves, the transhuman economy may be applicable to humans as well. Griffin’s commentary on that issue is beholden to pre-Singularity economics and science. It is reasonable to assert that, owing to the exponential development that the Singularity represents, scientific advances such as stem-cell printing for literally printable food, various DNA-based cures for disease, and other achievements formerly the province of pseudoscience and science fiction actually make a “needs-based” schema possible for human rights conceptions. Though most of the correspondence between Griffin’s schema(s) and transhumans is speculative, it appears as though there is a small amount of daylight for its continuation—at least for now.

A far less speculative take on the correspondence between a given human rights schema and transhumans is found in Beitz. The advantage of his publicly determined schema is that it depicts the world up to a given point. This is to say that, according to Beitz’s conception, current international and domestic actors correspond—and frequently disagree—on fundamental issues of human rights. This is currently applicable to situations wherein the stakeholders are radically different: for instance, developing nations with relatively repressive governments and developed nations with notions of baseline human rights being forced to come to consensus. This schema also recognizes that human rights are inherently discursive. Because of this active, ongoing human rights discourse, Beitz’s model can survive a radical change in the discussion—a change as radical as the existence of transhumans. This is because he understands the functional use of human rights within society as an evolving feature—a tool that adapts to the norms of society. He includes both agreements and conflicts within societies’ uses of human rights, making his examination an ongoing dialogue: as societal attitudes toward and uses of rights change, so too does the definition of human rights.

Beyond this very basic analysis of Beitz’s discursive schema as being compatible with the theoretical transhuman, it is worth noting that the divide between the two conceptions—Beitz’s and Griffin’s—is that of a historical construction versus a naturalistic one. Griffin’s personhood derivation of human rights presents a markedly more naturalistic theory than Beitz’s, and on the simple level of requiring intrinsic “human-ness” for the justification to suffice, it seems to hold that there is something special or unique to human existence—a throwback to the previously mentioned philosophical “qualia” that seek to undermine functionalism in general.

It is in this divide that we can see that the near-naturalistic, dialectical justification of human rights found in Griffin holds counting the transhuman to be a category mistake—an anomaly of reality that any universalizing human rights thesis would have to ignore in order to maintain the integrity of such a universal schema. The metaphysics that back Griffin do not agree with the metaphysics that back Kurzweil and Moravec, and on that basis alone, the historically derived, function-over-form schema of human rights proposed by Beitz can be seen to provide the best continuation of a traditional human rights theory moving forward.

Development of an Alternative Schema

Though it is clear at this point that Beitz’s schema can survive the onslaught of the transhuman’s existence, it is a nearly empty conclusion to simply state that history will adjust itself to the technological paradigm shift of transhumans and that, as a result, human/transhuman rights will be fine—if unrecognizable to the contemporary human agent. Rather, at this point, it would be instructive to look at a few competing conceptions of a social ordering schema that could potentially replace our traditional conception of human rights altogether. That said, these social ordering schemas are so radically different, and perhaps so offensive to the modern human, that it is necessary to note that they are proposed only in the case of a complete abrogation of traditional human rights—conceptually and practically.

Moving directly to an outwardly absurd potential schema, it is possible that the radical difference in agency between humans and transhumans will create a consciousness-based caste system. It is not completely unreasonable to believe that this could occur; recall that both Moravec and Kurzweil believe that, regardless of the physical form of transhumans, traditional flesh-and-blood humans will find it difficult—if not impossible—to interact normally with them. From this premise, a social ordering schema akin to an anthill may be an instructive heuristic.

Though it perhaps offends the delicate sensibilities of top-of-the-food-chain humans, the real possibility of the end of Homo sapiens dominance would logically entail a diminishment of our dominion. In this case, the anthill, with its colony-above-all “ethical” directive, could be humanity’s salvation. In this theoretical framework, the transhuman “overmind” created by the networking of consciousnesses would act as the colony’s queen. Much as the queen regulates movement and expansion for the amelioration of the colony through a system of pheromones and sprayed scents, the “overmind” would regulate human action for the betterment of humanity-as-organism, as opposed to action for the betterment of humanity-as-end-in-itself. It would be a completely matriarchal existence, frankly, one built on the maternal, restrictive, and hardcore utilitarian character of a collection of super-genius software attempting to govern the totality of planet Earth. Optimum functionality—as in computers, the environment, and most other complex systems—would be the goal, and human action would be prodded in that direction.

This conclusion is obviously troubling, especially to western thinkers used to being treated as unique and special since the reverse Copernican revolution of Kant, but it must be asked whether this sort of existence is preferable. First of all, in this conception of an alternative schema, it is not guaranteed that subservience is equivalent to extinction. (This is important to note, as one alternative schema actually flirts with Homo sapiens extinction.) While a citation of Schopenhauer in a paper regarding human rights and futuristic robots seems aberrant, the “anthill schema” can actually draw great benefit from Schopenhauer’s conception of the irrational will. In The World as Will and Representation, Schopenhauer outlines human existence as two-fold: the body as a representation of will, and the irrational will of the universe acting through this bodied manifestation of itself. This irrational will is insatiable, constantly forcing the bodied will to strive for more and more—which is to say that the irrational will causes suffering, a state characterized by the distance between the goal of the irrational will and the current progress toward it. Life, then, to Schopenhauer, is suffering, and it is all due to the irrational will’s striving.[20]

If this premise can be accepted, then the direction from a being that actually does know better could perhaps be preferable. Again, while it is pure sophistry to speculate on the intentions of a being that can barely be described or conceived of by traditional human brains, it is not unreasonable to believe that a super-powered supercomputer with the self-reflective agency of a human would tend to act like a utilitarian demi-god: acting with impunity to heal a worldwide ecosystem torn and hewn by humans. The optimum functionality schema of the anthill is one alternative social ordering system that is not absurd to consider in a transhuman reality.

Of course, this is the more optimistic of the potential schemas resulting from transhuman paradigm shifts. The other alternative, which shall (with tongue somewhat in cheek) be called the “SkyNet” alternative, works from the premise that the transhuman “overmind” will decide that humanity represents a blight on the earth’s optimum functioning. Acting logically on that premise, the transhumans would begin to cull humanity’s numbers, or potentially exterminate humanity altogether. This conclusion, while grim, is worth stating in the context of alternative schemas, owing to the possibility that these beings would be so advanced that they simply transcend our definitions and are, therefore, completely unintelligible. If the theorized “overmind” decides to act like a god, there is little that traditional humanity can do to convince it that we are worthwhile moral agents. Still, it would be a fitting end, in a way, if a super-intelligent hivemind of our own invention decided that humanity was not good for the earth’s optimum functioning.

 

Conclusion

In the end, it can be seen that if one subscribes to Beitz’s schema for human rights, the transhuman hypothesis does nothing to damage it. Rather, due to its discursive nature, the existence of many super-brilliant minds acting as self-directing agents would potentially strengthen and propel whatever functional form human rights take moving forward from the transhuman hypothesis. Griffin’s account, which leads to a metaphysical difference between human and transhuman, is nearly incompatible with the transhuman hypothesis. That said, his treatment of the needs-based notion does raise interesting questions regarding the Singularity and the potential end of scarcity for remaining flesh-and-blood humans.

If it is the case that human rights are unsalvageable in the transhuman reality, then humans should perhaps heed the exhortations of Kurzweil and Moravec—join the Leviathan of human mind lest you be wiped out by it.

 

Works Cited

Beitz, Charles R. The Idea of Human Rights. Oxford: Oxford UP, 2009. Print.

Griffin, James. On Human Rights. Oxford: Oxford UP, 2008. Print.

Kurzweil, Ray. The Singularity Is Near: When Humans Transcend Biology. New York: Viking, 2005. Print.

Moravec, Hans P. Mind Children: The Future of Robot and Human Intelligence. Cambridge, MA: Harvard UP, 1988. Print.

Moravec, Hans P. “Robotics and Artificial Intelligence.” The World of 2044: Technological Development and the Future of Society. Ed. Charles Sheffield, Marcelo Alonso, and Morton A. Kaplan. St. Paul, MN: Paragon House, 1994. Print.

Putnam, Hilary. “The Nature of Mental States.” Philosophy of Mind: Classical and Contemporary Readings. By David John Chalmers. New York: Oxford UP, 2002. 73-79. Print.

Schopenhauer, Arthur, and E. F. J. Payne. The World as Will and Representation. New York: Dover Publications, 1966. Print.

Turing, Alan M. “Computing Machinery and Intelligence.” A Philosophical Reader. Comp. Peter Caws, 2012. 89-95. Print.