AI and Ethics
Nov 29, 2023

The Perils & Promises of AI - Religious Leaders Discuss New Tech & Human Flourishing (podcast)

Listen in for an international dialogue around emerging technology from the perspective of persons of faith.

By Templeton Staff

Religious leaders and technology experts aligned with Christian, Jewish, Muslim, and Buddhist faith traditions discuss the impact of artificial intelligence (AI) on human flourishing. 

Adding to the conversation from the previous Stories of Impact episode, these leaders share not only what concerns them about AI, but what gives them a sense of optimism and hope.

Key Takeaways

Interfaith and Interdisciplinary Work

Tibetan Buddhist teacher and meditation master Chokyi Nyima Rinpoche believes that religious and social divisions must be set aside so that there can be an inclusive conversation around the coming impact of AI. Dr. David Zvi Kalman, Scholar in Residence and Director of New Media at the Shalom Hartman Institute of North America, also believes there is much "interfaith work" to be done, and that it's "about open-ended exploration and coalition-building around technologies of the future." The idea of keeping the "human person at the center of technology" is a focus for the Vatican. Father Philip Larrey at the Vatican explains, "we're focusing not on what divides us, but on what unites us, which is that vision of the human person that I think almost all religions agree upon."

Biases in AI and in Humans

There's an unavoidable bias "baked in" to artificial intelligence. This is one of the reasons why Rabbi Geoffrey A. Mitelman, Founding Director of Sinai and Synapses, says that when it comes to new technology, it's important to know "who’s at the table making the decisions." Dr. Muhammad Aurangzeb Ahmad, Professor of Computer Science, University of Washington, explains, "questions that traditionally were left to philosophers and scholars of ethics are now becoming engineering questions. Somebody actually has to go and implement, let’s say, a certain code of ethics informed by a certain moral philosophy and engineer those questions in code." Rabbi Mitelman agrees, and says "the output is only as good as the input. And so ensuring that the input is good and that the input comes from multiple different sources is incredibly critical. Having diversity of ethnicity, diversity of gender, and diversity of religion in these conversations."

Revd. Dr. Harriet Harris, University Chaplain, University of Edinburgh, says "computers' algorithms work according to all the biases of the people who have been programming the computers. And they might suffer from unconscious bias." She agrees with Rabbi Mitelman that "a very multicultural and diverse set...a United Nations, really, needs to be involved in order to try to counteract computer bias."

While he acknowledges that bias in AI could be dangerous, Dr. Ahmad shares that the way AI is being used to identify human bias can actually be beneficial and teach us a lot about ourselves. "On the positive side of things, algorithmic auditing of real-life decisions can and is already having a very positive impact on society," he says. "We are now able to capture data on a massive scale regarding human decision-making, so we can quantify biases that all of us have - I see that in healthcare every day - and because we can quantify that, we can also de-bias it. We can recognize human biases."

Big Tech and AI

Rt. Revd. Dr. Steven Croft, Bishop of Oxford, sees that "social media companies amass a great deal of data about our choices and preferences and views, and artificial intelligence makes it possible to crunch that data," and is somewhat wary about AI's ability to design specifically targeted advertisements. "It’s not all the fault of technology, but there is evidence of willful manipulation beginning to have an effect," he says. "We need to be alert as a society to these developments which are eroding individual rights and privacies which are a very precious part of our lives." Dr. Junaid Qadir, Professor of Computer Engineering, Qatar University, also cautions against the misuse of this technology. He says we should be careful when "trying to create a utopia in the world by giving people what they desire. This is misguided in that it is producing things that are anti-social, and they are actually disintegrating, even the mental health of individuals."

The Dangers of Not Using AI

"The way AI is being used now in all kinds of ways - sometimes regulated, sometimes not very well regulated - seems to me to present a clear and present danger to the flourishing of human life, and to justice and fairness," says Bishop Croft. "But the third dimension which has begun to intrigue me, is the whole opportunity that the development of AI represents for human life and health and wellbeing. So the moral dilemmas of not making use of this technology have also emerged alongside the dangers that it presents." Dr. Qadir agrees and says "if we are careful and if we use technology in the right places, it can alleviate a lot of human misery, it can cure people, it can even enable larger access to knowledge. So that gives me hope, that if used wisely, technology can be a force for good."

Machine as a Co-Pilot: Technology as a Tool for Human Flourishing

For Dr. Harris, "AI becomes interesting in a hopeful way when looking at the immensity of some of the problems we're facing in the world. Huge problems of poverty and hunger have real inequalities in birth and death, in terms of societies with really depleted access to good healthcare, good water. And I think human beings have lived way beyond their means and created massive, massive problems. We also are quite inventive at finding solutions to these problems. AI is one of those routes. I do see good potential and uses for it there." Father Paolo Benanti at the Vatican puts it this way: "Every time that you use the machine as a co-pilot, as an instrument to allow the human beings in the center or to be better humans, we are using technology as a tool for human flourishing...AI can be a multiplier of human wisdom."

AI and The Future of Work

While this positive impact of automation and increased access to knowledge cannot be denied, Father Larrey cautions, "Goldman Sachs predicts in the next ten years there will be 300 million people displaced by AI. That is going to be very serious for society. We can't have that many people without jobs." Bishop Croft is also concerned about the future of work, "both with increased automation and the fact that the economic effects of increased automation are going to fall very unevenly across the population." 

AI, Human Values, and a Sense of Purpose

Rabbi Mitelman also reminds us that while "there is an element of a lot of the value that we bring to this world, in what we contribute [with work], there is also value of just being a human being. Artificial intelligence, I think, is going to challenge the way we look at what we are able to contribute to this world." He notes that "for many people, the way that they contributed to this world 60, 70, 80 years ago, was in industry, in being able to create," and that the changes that come from the rapid onset of automation may be jarring to how we view our sense of purpose.

AI and Hope for Humanity's Flourishing Future

Dr. Kalman says there are already "religious thinkers trying to imagine what it means to develop an AI that could be compassionate." He wonders what it could mean for the future to "develop an AI that is not just executing according to some predetermined code what it ought to and ought not to do, but that has some room for notions of mercy, and notions of virtue and notions of compassion." Rinpoche explains that in Buddhism, "Harming is bad, helping is good." So, it's most beneficial if "technology is not only smart, but kind, very kind. If it is kind, I think it is worth it." Whether those tasked with making ethical decisions about AI are Buddhist, Jewish, Muslim, Christian, Catholic, or from another religion, says Father Benanti, there is more reason to have faith than fear. "As a religious man, I bet on human beings."

Listen to Part 1 of the conversation.

Watch the related video.

Built upon the award-winning video series of the same name, Templeton World Charity Foundation’s “Stories of Impact” podcast features stories of new scientific research on human flourishing that translate discoveries into practical tools. Bringing a mix of curiosity, compassion, and creativity, journalist Richard Sergay and producer Tavia Gilbert shine a spotlight on the human impact at the heart of cutting-edge social and scientific research projects supported by TWCF.