A University of Michigan anthropologist confronts artificial intelligence and the power we ascribe to it
EXPERT Q&A

As people adopt ChatGPT and similar large language models, University of Michigan anthropologist Webb Keane observes how easily they come to attribute human, or even divine, authority to AI.
Keane studies the role of religion in everyday human experience, ethics, and morality. He is also interested in how people anthropomorphize nonliving things, especially those that appear to use human-like language or communication. Here, Keane discusses how people come to confer moral significance on artificial intelligence, only to find that AI often reflects the values of the people and organizations that created it.
Why do we place such trust in what ChatGPT tells us?
The trust we place in AI has many sources, but I’m especially fascinated by how it echoes the ways we have long granted authority to nonhuman things. We tend to project intentions and deep thoughts onto entities that seem sentient, especially those that communicate in something like language.
We’ve done this throughout history, from the Delphic oracle in ancient Greece to Chinese divination practices such as the I Ching. Now people seem to be applying the same mindset to algorithms, even small-scale tools like a Fitbit or Spotify’s recommendation engine, which in effect tell us, “I understand your preferences better than you do.”
Could you discuss how moral boundaries have shifted over time?
The history of morality hinges on where we draw the lines between human and nonhuman. Who counts? To whom do my ethical obligations extend? Much of the progress in justice and rights over the last few centuries has come from broadening the circle of those we consider morally significant.
There was a time when women were denied the vote, when people were enslaved, and when it was permissible to beat horses to death in city streets. These changes were not just revisions to legal frameworks or concepts of justice; they were expansions of the moral community.
As we widen that moral sphere, we run into confusing cases, unsure where things fall along the spectrum between human and nonhuman. That ambiguity can produce genuine moral dilemmas.
Where is humanity headed with its newfound affection for AI?
We must think carefully about how much authority we are prepared to grant it, which decisions we allow it to make for us, and whether we are giving it more credibility, trust, or faith than it has earned. AI is a human creation, and it is our responsibility to work out its appropriate uses.
I must also stress that AI is the creation of large, powerful corporations, each with its own interests. So we should be wary of ascribing great power to AI while overlooking the entities behind its development.
Should we be concerned about AI?
The world seems split between AI enthusiasts and skeptics, techno-utopians and techno-pessimists. My main worry is the utopians, because the history of technology is full of innovations that carried unforeseen consequences. Consider the airplane and the automobile, which, for all their benefits, have contributed significantly to climate change; or nuclear power, which, while revolutionary, introduced the specter of nuclear catastrophe.
Can you provide an example of moral decision-making in AI?
Ann Arbor, Michigan, is one of the testing sites for autonomous vehicles. At first you might think, “That’s a neat gadget,” but as you encounter more of them, you start to wonder, “Can I trust it not to hit me if I step into its path?” The challenge isn’t just building a car that avoids collisions; it’s programming a car that must make decisions that, at critical moments, are moral judgments.
Picture a self-driving car coming down a road when a mother and her young child suddenly cross ahead of it. The vehicle must make an instantaneous choice: strike the mother and child, or swerve into a tree and endanger everyone inside.
These scenarios are inevitable. They are not merely technical problems to be solved; they are moral dilemmas. In a sense we have delegated such decisions to machines, but that does not change the fact that they demand serious moral consideration.
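To make the point concrete, here is a deliberately oversimplified sketch, in Python, of what it means to “program” such a judgment. Everything in it, the function name, the harm counts, the weights, is a hypothetical illustration, not how any real vehicle’s control software works:

```python
# Hypothetical sketch only: no real autonomous vehicle is programmed this way.
# The names, weights, and scenario are illustrative assumptions. The point is
# that any such decision function encodes a moral judgment: choosing the
# weights below IS the ethical choice, merely relocated into code.

from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    pedestrians_harmed: int  # people outside the vehicle put at risk
    occupants_harmed: int    # people inside the vehicle put at risk

def choose_maneuver(outcomes, pedestrian_weight=1.0, occupant_weight=1.0):
    """Return the outcome with the lowest weighted harm.

    The weights are where the moral judgment hides: raising
    pedestrian_weight prioritizes bystanders; raising occupant_weight
    prioritizes the car's passengers. The code cannot escape that
    trade-off; it can only make it explicit.
    """
    def weighted_harm(o):
        return (pedestrian_weight * o.pedestrians_harmed
                + occupant_weight * o.occupants_harmed)
    return min(outcomes, key=weighted_harm)

# The interview's scenario, reduced to two stark options:
options = [
    Outcome("continue ahead, striking the mother and child", 2, 0),
    Outcome("swerve into the tree, endangering both occupants", 0, 2),
]

# With equal weights the harms tie, and min() simply keeps the first
# option it saw; even the tiebreak is a decision someone made.
print(choose_maneuver(options).description)
```

Even in this toy form, the “technical” parameters turn out to be the moral dilemma itself, carried over into code.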
You explore the convergence of these themes in a recent book, correct?
My latest book, “Animals, Robots, Gods,” was sparked by what I observed when ChatGPT arrived. I quickly noticed an intriguing phenomenon: many people immersed in cutting-edge technologies like AI and robotics, people who pride themselves on being rational and scientific, reach for theological language when discussing ChatGPT’s capabilities, often describing AI in almost divine terms.
We are constantly creating moral counterparts and interlocutors, whether we’re venting at a computer that froze, kicking a car that broke down, or talking to a pet. These may seem trivial, but on a larger scale, people are succumbing to the allure of attributing immense authority to AI (and it is designed to provoke that response). I see this as an extension of our tendency to anthropomorphize the inanimate world, ascribing it human-like abilities.