The astounding growth of the Internet, computer technology, and artificial intelligence is a commonplace of our time; so is the challenge each poses to familiar ways of commerce and culture, and even to our basic understandings of humanity. Some of the farthest reaches of these developments are expressed in the "singularity" envisioned by the futurologist Raymond Kurzweil: a dazzling world in which, by the end of this century, humans will have so thoroughly merged with fog-like nano-computers that our bodies will no longer have a fixed form and we will, at long last, wield total control over—or be wholly at the mercy of?—an utterly computerized universe.
One need not go so far as Kurzweil to acknowledge that advances in artificial intelligence, and especially in robot technology, are compelling us to define more clearly than before what, if anything, makes us human. In a powerful essay, the distinguished computer scientist David Gelernter has argued that consciousness—the mysterious result of human chemistry that we still do not understand—and the emotions it produces are indissoluble, and incapable of arising artificially or virtually.
In another recent essay, Gelernter translates this insight into the terms of Jewish moral philosophy. Sooner or later, he points out, we may be tempted to accord human-like rights to "thinking machines," just as many now do to animals. Where to draw the line? For Gelernter, one useful basis lies in Martin Buber's distinction between "I-Thou" relationships, in which we locate ourselves in relation to one another and to God, and "I-It" relationships devoid of true consciousness and thus of moral responsibility.
Still, the moral question persists. For better or worse, we already live much of our lives in the realm of I-It, making our way through the worlds not just of technology but of rationalized institutions (such as law and bureaucracy) and of nature. Do these various life-worlds generate any moral or spiritual principles of their own?
A direction is suggested by the great Orthodox halakhist and thinker Joseph Soloveitchik (1903-1993). In his classic work, The Lonely Man of Faith, Soloveitchik notes the two accounts of Creation recorded in Genesis. In one, humanity is placed at the pinnacle of creation as the climax of a great procession of divine formation. In the other, humans are but so many handfuls of dust, sustained by a breath and yearning for company. Both visions, he argues, are true: God has granted and licensed our awesome creative powers, and has also ordained our unalterable finitude.
In a subsequent essay, "Majesty and Humility," Soloveitchik carries the moral argument forward. Just as dynamic human initiative reflects the divine act of creativity, so a proper humility reflects the divine act of withdrawal that itself made possible the flawed human world. In this understanding, our drive for control over the natural world can proceed, complemented by the constant awareness of our limits.
The experience and meaning of being human have been transformed repeatedly over the course of history—think of the invention of writing, or of settled communities—and they may well be in the process of being transformed again. Technology, however, has repeatedly proved itself to be morally neutral, its worth determined by the uses to which it is put. It will not deliver us either from the excitements or from the limitations of being human—or, when it comes to our moral obligations, from the need to choose.
You can find this online at: http://www.jidaily.com/robot