Oct 9, 2023

Ethical Artificial Intelligence: Religious Leaders Discuss the Impact of AI on Human Flourishing (podcast)

Leaders from religious traditions across the globe describe how their faiths view technology and what AI could mean for the future of human flourishing.

By Templeton Staff

Artificial intelligence (AI) has recently and dramatically shifted from a niche technology into a phenomenon capturing broad public consciousness. With breakthrough tools like ChatGPT now accessible to virtually anyone, AI has become a subject of both fascination and concern. Its potential to impact human flourishing, for better or worse, is increasingly drawing the interest of religious leaders. For this podcast episode, nine leaders from diverse faith traditions share their insights into the societal implications of this technology, and the role of religious and moral leaders in shaping the ethical framework for AI development and use.


Key Takeaways:

Religion, Responsibility, and the Call for AI Ethics

AI is poised to transform society in ways that could surpass even the internet. AI's supercharged data processing capabilities are already affecting various aspects of modern life. Machine learning, powered by AI, has the potential to influence education, among other domains, in profound ways. "In a religious sphere, we tend to link creativity with divine or with inspiration or with something that's really quite special, and so it's actually fascinating to see that in machines that we have made," says Revd. Dr. Harriet Harris, University Chaplain, University of Edinburgh. "I don't think we know what to make of it yet, but it raises questions that are similar to, but interestingly, different from, questions that have come up in a faith and religious arena for centuries."

Father Philip Larrey, Chair of Logic and Epistemology at Pontifical Lateran University in the Vatican, also believes that religious leaders have long prepared for the kinds of moral concerns the rise of AI provokes. For centuries, thinkers have explored the timeless question: what is right and wrong? "I think we all agree, whatever your religious affiliation, that you should do good and avoid evil."

But whose responsibility is it? And do we know enough yet about the technology to regulate it without impeding its potential to enhance human flourishing? Dr. David Zvi Kalman is Scholar in Residence and Director of New Media at the Shalom Hartman Institute of North America. He says that while global "Big Tech" is creating these incredibly powerful tools, those companies are simultaneously working to make sense of the tools based on their emergent behavior. This is a potentially dangerous unknown. "There's long been a tension among tech companies between the urge, on the one hand, to get products out as fast as possible and the need, on the other hand, to make sure that they're regulated in ways that are safe for people, safe for humanity," he notes. "There are a lot of questions right now about exactly what is going on under the hood within these technologies and their capability for destructive behavior, and the ways in which they can be manipulated. So, there is, importantly, a kind of laudable interest on the part of many of these companies to do the hard work of regulation. On the other hand, it is not clear whether they’re fully equipped to actually do that work."

Dr. Muhammad Aurangzeb Ahmad, Professor of Computer Science at the University of Washington, makes a similar observation. He says, "On one end of the spectrum, we are designing a calculator. On the other side of the spectrum, we have a scenario where a system is making life-and-death decisions, and everything in between."

In 2020, the renAIssance Foundation, a group of organizations led by the Vatican, launched The Rome Call for AI Ethics, a multi-faith, global coalition seeking to promote a sense of responsibility around the ethical development of AI technologies. So far, the call's signatories include major tech companies, world governments, universities, and representatives from three of the world’s major religions.

Father Paolo Benanti, Professor of Ethics and Moral Theology at Pontifical Gregorian University in the Vatican and Scientific Director at the renAIssance Foundation, is an advisor to Pope Francis on artificial intelligence and technology ethics. The Vatican directly experienced AI’s impact earlier this year, when an AI-generated image of Pope Francis wearing a white Balenciaga puffer jacket went viral. On the surface, that picture might seem more silly than dangerous, but Benanti sees it as an eye-opening illustration of the "powerful tools now in the pocket of everyone." The danger, says Benanti, is that "we haven't built the culture to handle those kinds of tools." He warns of the potential to weaponize AI as "sophisticated soft power instruments or propaganda instruments," and that in a worst-case scenario, AI could be "taken as a source of authority, as a new oracle, as a new religious leader, or even as the source of wisdom." If we're not mindful, he points out, major problems could arise from "using AI for fake news or producing a fake declaration of a religious leader. There are a lot of situations around the world in which religions are involved in civil war. This could be used as an instrument by someone to fuel war."

Influence, Equity, and Patience

Dr. Junaid Qadir, Professor of Computer Engineering at Qatar University, says that one of the biggest dangers of AI is its potential to influence our minds. Whether that influence is distraction, nudging behavior toward a particular outcome, or overt manipulation, it’s a problem. "Technology tends to concentrate wealth and power. So whoever has technology has a lot of influence over other people," he says. To make matters worse, "we currently do not have a perfect equality in terms of where the technology is developed, who is developing the technology, how the technology is being designed, for what purpose the technology is designed."

Rt. Revd. Dr. Steven Croft, Bishop of Oxford and Founding Board Member of the Centre for Data Ethics and Innovation, agrees. He adds, "often the issue is not that people’s data is being taken away from them, it’s that we are being incited to give it up without knowing the full consequences of that. We’re seeing the big tech companies encroaching more and more on different aspects of human life and actually are beginning to influence and shape the very essence of what it means to be human and to act in a free manner. We need to take notice of that as a society because these technologies are so all pervasive."

Another big problem with AI-driven natural language processing tools is that many people use ChatGPT as if it were a search engine rather than a chatbot. In other words, they’re accepting the text ChatGPT generates as a trusted source of information, says Dr. Qadir. And, he cautions, that trust has not been earned. "It will produce a very persuasive answer but the jury is still out about how factual it is. Sometimes it has problems such as hallucination... it also has biases."

Rabbi Geoffrey A. Mitelman, Founding Director, Sinai and Synapses, elaborates on the issue of misinformation. "When there is a language hallucination that comes in from ChatGPT, that could spread like wildfire and really create a lot of challenges in the real world on the ground, not just online," he says. He notes, "In Judaism, God creates the universe using words. The words that we use have impact." There could be a lot of problems if quickly created false information perpetuates before it's corrected. "It's very hard to be able to correct something once you hear it."

"We shouldn't rush to make this artificial intelligence," says Chokyi Nyima Rinpoche, Abbot, Ka Nying Shedrub Ling Monastery, Kathmandu, Nepal. He says the biggest danger in this new AI-focused world is nihilism. "Religion is a medicine. Be kind, help others, be patient, say truth."

Listen in to hear these leaders discuss additional insights, including:

  • The "alignment problem," which questions whether AI will align with human goals or potentially pursue its own agenda.
  • The need for diverse voices at the table to navigate the profound societal changes brought by AI.
  • The importance of putting humanity at the center of technology.
  • Reasons why some are pessimistic for the short term, but hopeful for the future of AI's impact on human flourishing.

Stay tuned for Part 2 of the conversation.


Built upon the award-winning video series of the same name, Templeton World Charity Foundation’s “Stories of Impact” podcast features stories of new scientific research on human flourishing that translate discoveries into practical tools. Bringing a mix of curiosity, compassion, and creativity, journalist Richard Sergay and producer Tavia Gilbert shine a spotlight on the human impact at the heart of cutting-edge social and scientific research projects supported by TWCF.