In Our Image

a review of Noreen Herzfeld, In Our Image: Artificial Intelligence and the Human Spirit. Fortress Press, 2002. 135 pp. $16.00. ISBN: 0800634764


Dennis Weiss
York College of Pennsylvania


    In her 1997 sociological study of cyberculture and identity formation entitled Life on the Screen: Identity in the Age of the Internet, Sherry Turkle reports on her visit to MIT's Artificial Intelligence Laboratory where its Director, Rodney Brooks, has been building Cog, an "artificial two-year-old." Turkle, skeptical regarding the possibility of building an artificially intelligent mechanism capable of learning from its interaction with the environment, including its interaction with the researchers building it, describes her meeting with Cog:

    ...Cog "noticed" me soon after I entered its room. Its head turned to follow me and I was embarrassed to note that this made me happy. I found myself competing with another visitor for its attention. At one point, I felt sure that Cog's eyes had "caught" my own. My visit left me shaken—not by anything that Cog was able to accomplish but by my own reaction to "him." For years whenever I had heard Rodney Brooks speak about his robotic "creatures," I had always been careful to mentally put quotation marks around the word. But now, with Cog, I had found the quotation marks had disappeared. Despite myself and despite my continuing skepticism about this research project, I had behaved as though in the presence of another being. (266)
  1. In Life on the Screen and the earlier The Second Self: Computers and the Human Spirit, Turkle reports on the many ways in which people enter into relationships with computers and the ways in which these relationships are impacting our understanding of mind, self, and human nature. Brooks' Humanoid Robotics Group is in fact premised on this intertwining of "flesh and machines," the title of his recent book exploring the impact of robotics on human beings and how we understand ourselves. Cog and Brooks' other creations, such as Kismet, are being created in our own image and in turn are altering that image. What are the implications of all this? What does it mean to create in one's own image? What do Cog, Kismet, and other robotic and computer creations tell us about ourselves? What are the implications for our self-image in attempting to reproduce an image of ourselves in our machines?

  2. These are the issues that are central to Noreen L. Herzfeld's In Our Image: Artificial Intelligence and the Human Spirit. Beginning with the central insight that there are analogous anthropological frameworks at work in the fields of theology and artificial intelligence, Herzfeld, an Associate Professor of Computer Science at St. John's University who holds a doctorate in theology and advanced degrees in computer science and mathematics, seeks to explore what theologians can learn from examining artificial intelligence's efforts to create artificial minds. Her goal is "to see whether these understandings of the human condition, from two such disparate fields, are at all commensurable and to explore what implications our concept of being human might have, both for the project of creating and coexisting with artificially intelligent creatures and for the project of creating a Christian spirituality that is relevant in a technological age" (x). Herzfeld's text will be of interest to anyone wanting to explore in an introductory fashion the overlapping anthropological frameworks in these two disciplines and critically examine contemporary paradigms in artificial intelligence. Less fully realized are her own prescriptions for dealing with the continued presence of technology in our lives and its impact on shaping what it means to be human.

  3. Herzfeld's analysis begins with those few passages in the Hebrew Scriptures that recount God's creation of humankind in His image. While there have been various accounts of how to interpret God's image, traditionally referred to by the Latin, imago Dei, Herzfeld suggests that there have been some common points of agreement, including a recognition of the special dignity and value of human beings, an emphasis on the essential difference between humankind and animals, and the existence of a litmus test for humanity.

  4. In Our Image explores what it means to create in one's own image. The basic framework of the text is defined by an analogy Herzfeld locates between the image of God in human beings and the image of humanity in artificial intelligence. Central to both theology and artificial intelligence is the question of creating in one's own image. And this question raises the more basic issue of what is being replicated: what in humans is created in the image of God, and what in artificial intelligence is created in the image of human beings. It is Herzfeld's key contention that one can identify three parallel anthropological frameworks in theology and artificial intelligence for understanding the significance of creating in one's image.

  5. These parallels and the central argument of the text are spelled out in chapters two through four. In these chapters, Herzfeld considers three accounts of what it could mean to create in one's own image: a substantive account, a functional account, and a relational account. In asking, for instance, what constitutes the image of God in human beings, a substantive interpretation of imago Dei would locate the intersection between humanity and God in a property or set of properties intrinsic to each of us as individuals, with "reason" as a frequently cited property. A functional interpretation, such as Gerhard von Rad's account of the image as regency, identifies the divine likeness not as a property but as a title given to humans by virtue of what we do. As Herzfeld puts it, "human beings image God when they function in God's stead, as God's representative on earth" (21). Drawing on the work of Karl Barth, Herzfeld argues that a relational account of imago Dei stresses that the image of God in humankind is manifested in human-divine and human-human relationship, arising in interaction. Each of these three interpretations of imago Dei, according to Herzfeld, locates the core of humanity in a radically different sphere—in human nature, human action, or human relationship (31). In her account of these multiple interpretations, Herzfeld is initially noncommittal about which interpretation she prefers. As she points out, "I have not attempted to establish preeminence among them nor to assess their relative merits, but merely to delineate the primary categories of interpretation among twentieth-century theologians" (32).

  6. In chapter three, Herzfeld reviews contemporary paradigms in artificial intelligence and finds in the attempt to model human intelligence three interpretations of intelligence that parallel the three interpretations of imago Dei. Early work in artificial intelligence, such as that associated with the symbolic approach to AI of Herbert Simon and Allen Newell, favored a substantive view of human intelligence, in which reason was a quality that could be isolated from the physical body and captured in an abstract, formal system (35). The contemporary weak AI approach favors a more functional view of intelligence, viewing it simply as a label that we place on certain activities. "Just as a functional interpretation of the imago Dei viewed humans as acting on God's behalf through their dominion over creation, so a functional imago hominis (image of the human) sees AI as appearing when computers engage in tasks in the real world that humans would normally do" (42). Finally, in the more recent work of Brooks, Winograd and Flores, and others, we see a view of intelligence as emerging from relation. Understanding, as Herzfeld suggests, is predicated on and productive of social ties, acquired and demonstrated through relationship (49). It is partly the current ascendancy of this model of intelligence in AI that leads Herzfeld to conclude that a relational understanding of human nature is key.
    Thus we find in the quest for AI support for the view that it is in our relationships that we find the center of the human, and, this, the image of God....We tend to identify with our minds, but we have come to recognize that those minds, visible in what we say or do, are formed in community and expressed in community. Rationality or intelligence, by itself, is not the defining characteristic of being human. It cannot, in fact, be captured as an isolated quality. We are relational beings; we give expression to our recognition of that fact in our search for AI. (51-52)
  7. Herzfeld's preference for a relational anthropology gains further support in her discussion, in chapter four, of images of artificial intelligence in science fiction film. When we look at AI in film, we see the danger of the substantive and functional interpretations of AI and support for the relational view. In movies such as 2001: A Space Odyssey and Colossus: The Forbin Project, we see images of computers reflecting the substantive and functional accounts of AI, projections of human reason acting in our stead. The results are disastrous. "We fear, not without reason, any machine that has no inherent necessity to stand in authentic relationship with the human beings it encounters" (60). We are much more comfortable with the human-like computers that enter into relationships with us, such as Robby the robot from Forbidden Planet or R2-D2 and C-3PO, the genial robotic companions in Star Wars. "Most notable about these robots is their deep affectionate bonds with each other and with the human characters" (62). From this consideration, Herzfeld concludes:

    If the stories we tell in science fiction are an accurate indication of the general public's hopes and fears for artificial intelligence, they tell us that we seek artificial intelligence for its relational potential rather than merely its rational or functional potentials. This is congruent with Barth's interpretation of humans as relational beings...We are most human when we are engaged in encounter with an other. (66-67)
  8. Having addressed the issue of what it means to create in one's own image, Herzfeld turns in chapter five to the issue of why we wish to create AI. What are some of the interests and concerns that drive AI? By once again examining AI through the prism of interpretations of imago Dei, Herzfeld offers some interesting insights and critiques of the motivations behind AI. Herzfeld identifies three separate motivations driving AI: the desire for immortality, the desire to expand our dominion over the natural world, and the desire for a co-respondent with which to relate. Herzfeld is critical of each of these motivating factors.

  9. Herzfeld criticizes proponents of cybernetic immortality, such as Hans Moravec and Ray Kurzweil, both of whom look forward to a time when human beings can download their consciousness into cyberspace, for their inadequate conceptions of human nature and immortality. Visions of downloaded minds living an eternity in cyberspace are dualistic and dismissive of the importance of the human body. Yet Herzfeld argues that our finite bodies are an integral part of who we are. "The essential nature of the human being always contains two inseparable elements, self-transcending mind and finite creaturely being" (74). Furthermore, such visions of immortality define the everlasting as simply more time on this earth, rather than as an eternity outside of the spatiotemporal framework. Herzfeld is also critical of our growing reliance on technology and our willingness to cede our responsibility to computers. "When we remove ourselves from the loop, we become slaves to our machines, acting on their behest and not our own. Yet, as the story of the fall makes clear, we remain responsible before God for our decisions, even when they were suggested to us by another. If dominion over nature is a part of our imaging of God, we must ensure that we remain in control of and take responsibility for that dominion" (79). Finally, the desire to create something nonhuman with which we can relate, to the extent that such a desire is driven by feelings of isolation and solitude, fails to take seriously the relational nature of human beings and our relation to God, and replaces relationship with God with relationship with our own artifacts, a form of idolatry (83).

  10. The final chapter of In Our Image builds on the relational view of imago Dei to pose the question of how we ought to understand our relationship to computers. To address this issue, Herzfeld develops several implications of the relational understanding of the image of God. First, there is the recognition that Being is at root relational and that relationship with God is the sine qua non of our very being (86). Secondly, if Being is at root relational, then there is no such thing as a self-sufficient individual. The autonomous self is an illusion. "...[O]ur potential for growth and wisdom, even in spiritual life, comes not from some latent power within ourselves, but from the life constituted by relationship with God and with others" (88). Finally, there is a recognition of the interdependence of all living beings. There is no essential difference between humans and other creatures and no separation of humans from nature.

  11. What are the implications of a relational understanding of imago Dei for our relationship with computers? Herzfeld suggests that computers cannot enter into authentic relationship with human beings, for they lack a common ground on which to meet. Herzfeld characterizes authentic encounter between humans as occurring when we "speak to that of God" in the other, which provides the "basis for a mutual self-disclosure and aid that can be understood and accepted. Without this ground, words will not reach their goal; actions, even aid, will be ultimately egoistic" (91). Herzfeld does borrow from the Rule of St. Benedict to suggest some guidelines for interacting with computers, including not subordinating human-human relationships to human-material relationships, recognizing that human-computer relationships are a poor substitute for more authentic relationships, and treating all tools and goods with respect.

  12. There are a number of fine points in Herzfeld's analysis of artificial intelligence and her emphasis on a relational anthropological framework. Readers unfamiliar with either topic will find her text to be a clear, well-written, concise introduction to these themes. Her relational ontology, defense of the centrality of relationship to human nature, and critique of the isolated, autonomous self are echoed in other contemporary approaches to understanding human nature, including Kenneth Gergen's relational social psychology, Mary Midgley's philosophical anthropology, and feminist analyses of self and human nature generally. Nonetheless, Herzfeld's analysis raises several troubling issues.

  13. First, we must consider Herzfeld's characterization of the field of artificial intelligence. We have seen that she approaches AI through an anthropological lens, and she is right to note that many facets of the field suggest insights into the question of what it means to be human; this serves to ground the analogy to theological interests in imago Dei. Less clear, though, is whether the field is sufficiently unified to be treated from this one perspective. Does the field of artificial intelligence deal with the question of what it means to be human? Herzfeld's characterization of it suggests not, as indicated by several points. First, Herzfeld is ambiguous about the precise focus of her characterization of artificial intelligence. She alternates between talking about human cognition, intelligence, strong intelligence, understanding, having a mind, being a person, and the human condition, treating these distinct concepts as synonymous. Further muddying the waters, she points to a distinction between strong AI and weak AI. "The goal of strong AI is to build something like ourselves, to create in our own image. Intelligence is simply a label for that in us that is essential, that which stands at our center, necessary rather than contingent" (50). Strong AI has as its goal the production of full human-like intelligence in a computer. And yet Herzfeld recognizes that many AI researchers doubt the feasibility of strong AI and have adopted weaker goals. "Weak AI is content with using the computer to model only a portion of human intelligence, to mimic a single human function or capacity" (42). Herzfeld recognizes that ultimately there is little to distinguish weak AI from normal computer applications. From these points, we might conclude that there is no clearly demarcated field of AI that studies human intelligence. These issues complicate the analogy central to the text and ultimately weaken her argument.

  14. Herzfeld's account of the relational model of artificial intelligence is also problematic, and it is not clear what constitutes such a model. In her discussion of this model, she includes Brooks but also critics of AI such as Winograd and Flores, who conclude that AI is an impossible dream and that computers will never have understanding. Herzfeld introduces the relational approach to AI with a discussion of the Turing Test. She writes: "If we accept the Turing Test as the ultimate arbiter of intelligence, then we have defined intelligence relationally. Intelligence is a quality only observable in relational discourse. The Turing Test uses relationality to determine intelligence" (46). But while the Turing Test may use interaction with a computer to determine intelligence, that doesn't necessarily mean that it defines intelligence relationally. Doug Lenat's CYC project, which is attempting to build an artificially intelligent computer capable of engaging in everyday discourse, is cited by Herzfeld as one of the few ongoing projects indicative of a substantive approach to intelligence, not a relational approach. Engaging in discourse and passing the Turing Test doesn't require that one approach understanding relationally. Indeed, Turing's imitation game has been central to a number of schools of thought, including symbolic AI and the functionalists in philosophy of mind. These points suggest that a key element of Herzfeld's analogy, the relational model in AI, is poorly characterized.

  15. Finally, and more briefly, Herzfeld leaves out of her consideration the currently dominant paradigms in AI (parallel distributed processing, connectionism, and neural networks) as well as the interest in artificial life. Brooks' own work on Cog and Kismet grows out of a shift from information-processing models of AI to more biologically rich emergent and distributed processing models. It is precisely this shift from information processing to emergent AI that Turkle has found so evocative of the relationship between machines and humans. Emergent systems, Turkle argues, provide a new sustaining myth suggesting a continuity between computers and people (134). It is surely an oversight to omit this work when considering artificial intelligence and the human spirit.

  16. These points suggest that Herzfeld has had to warp the field of artificial intelligence somewhat to get it to fit her analogical structure. While one can recognize the need to analyze some aspects of AI from an anthropological perspective, Herzfeld's framework requires a degree of distortion in order for her analogy between imago Dei and imago hominis to work.

  17. A second series of problems converges on the anthropological framework central to Herzfeld's analysis, which is complicated unnecessarily by her seeming acceptance early in the text of several key points that she ultimately rejects. We have seen that in her account of Genesis and the three interpretations of imago Dei, Herzfeld accepts several starting assumptions. She suggests that imago Dei has served as "a symbol of what it is in human beings that makes us uniquely human, and as a determining factor in our creation, it defines what is essential to our nature, as opposed to what is merely contingent" (15). The imago Dei serves as a litmus test of humanity and an image that serves to distinguish humans from the rest of creation, conferring upon them a special dignity and value.

  18. There are two problems with this as a starting point for Herzfeld's text. First, it suggests that the two disciplines of theology and AI are indeed not commensurable, as most theorists in AI and the cognitive sciences would reject these assumptions. Secondly, Herzfeld herself rejects these assumptions, though only late in the book and without recognizing the significance of doing so. Let me touch on each point briefly.

  19. Claims regarding what makes us uniquely human, distinguishes us from the rest of creation, or confers upon us a special dignity and value find no support in work in AI. Indeed, the very attempt to image human nature in machines suggests that we are not in fact unique or special. While Herzfeld, in her discussions of AI, refers to what makes us uniquely human, what separates us from the rest of creation, and what stands at the center of our being (34), even a cursory examination of work in the field suggests that these presuppositions are not shared by theorists in AI, cognitive science, or philosophy of mind. Daniel Dennett, for instance, a leading contemporary figure in all three fields, has devoted his life's work to undermining precisely these claims. In suggesting, therefore, that the two disciplines of theology and artificial intelligence are commensurable, Herzfeld fails to give sufficient attention to their fundamentally divergent starting points.

  20. Perhaps, though, this oversight is driven by her own rejection of her starting assumptions. Little by little throughout her text, Herzfeld rejects each of the assumptions first presented as central to imago Dei. In the last chapter of In Our Image, Herzfeld notes that a relational interpretation of imago Dei presupposes no essential difference between humans and other creatures (90) and is seemingly compatible with evolution's leaving "little room for an internal essence that belongs to a single species" (89). While these points seem generally acceptable, they significantly undermine Herzfeld's initial presentation of the key elements of imago Dei, the significance of which she fails to consider. The very questions that drive Herzfeld's book are ultimately rejected at the end of the book as ill-formed. Had she been more honest in laying her cards on the table at the outset, the eventual direction of her analysis would have been easier to follow and clearer.

  21. Finally, we might note the irony that while Herzfeld concludes that a relational account of imago Dei and human nature is the preferred account and that artificially intelligent computers would lack the capacity to enter into authentic relationships with human beings, her conclusions are based precisely on those machines with which she suggests it would be idolatrous to enter into a relationship. It is finally only through her analysis of the theories and science fiction of artificial intelligence that Herzfeld concludes that relation is key to understanding the human condition. And yet her conclusions largely suggest that we ought to be wary of entering into relation with computers. We ought to treat computers, like any tool, with respect, but we shouldn't try to read off any features of human nature from our understanding of machines. "We must always be aware of the otherness of any artificial intelligence....It is dangerous for us to confuse or conflate humans and machines" (93). And yet Herzfeld's own argumentative strategy seems to suggest that this confusion and conflation is a sign of the times, not easily avoided. In his own discussion of Cog, Kismet, and the work of Rodney Brooks, Erik Davis offers a characterization at odds with Herzfeld's. Brooks and his colleague Cynthia Breazeal are, Davis writes, building social robots engineered to enter into relationships with human beings, transforming themselves as well as us.
    Brooks and Breazeal are imagining...a feedback loop in which humans and machines constantly modify one another. As such, their work has moved beyond the challenge of simply engineering better robots—they are also engineering a new kind of social relationship. "Kismet was designed to be a human-robot system, not just a robot," explains Breazeal.
  22. For Davis and other commentators on cyberculture such as Turkle, emphasizing the otherness of artificial intelligence obscures how these technological Others are having an impact on our conceptions of human nature. Herzfeld's recommendation for adapting the Rule of St. Benedict to our interactions with computers may fail to recognize the ways in which computers are different from other material objects we deal with. From the already evident impact of the Internet and Web on human relationships to the growing interest in the posthuman and our cyborg future, it is clear that the computer is no mere tool but a tool that changes the toolmaker. It is perhaps the implications of these human-machine interactions and our increasingly destabilized self-images that we ought to be concerned with as we contemplate what it means to create in our image.

