AI and Ethics Stories of Impact Podcast
Mar 27, 2024

AI & Ethics with Walter Sinnott-Armstrong, Jana Schaich Borg & Vincent Conitzer (podcast)

Researchers who've studied how to use artificial intelligence to allocate kidneys to transplant patients share what they've learned about AI and moral decisions.

By Templeton Staff

Over the last five years, a team of researchers, including a philosopher, a neuroscientist, and a computer scientist, has studied how to use AI as an assistive tool in allocating kidneys to patients awaiting organ transplants.

Dr. Walter Sinnott-Armstrong and Dr. Jana Schaich Borg at Duke University, and Dr. Vincent Conitzer at Carnegie Mellon University, collaborate on this research, which is funded in part by OpenAI, the company behind ChatGPT.

The three are featured in this episode of the Stories of Impact podcast. Though their project focuses on integrating AI into evaluations of who gets a kidney transplant, the team is using that particular problem as a lens to explore the broader ethics of AI in decision-making. They’re asking whether it’s possible to imbue machines with a human value system, in what ways artificial intelligence can be employed to help humans make moral decisions, and how to ensure that when AI is involved in decision-making, the process retains its humanity.

Listen to the episode with the above player to hear how we might incorporate human values into AI systems.

Key Takeaways:

When allocating kidney transplants, artificial intelligence has two important advantages over human decision-making alone: AI can produce solutions that account for a large number of variables, and it can do so very quickly. However, says Conitzer, "if you don't want the simplistic objective, like just maximizing the number of transplants that take place, but rather want to take morality into account and think about how to prioritize, then that needs to come from the human being somehow." He describes the process as a feedback loop. "We think that the AI can make moral decisions reasonably well on its own in some sense, but it has to learn from us. And in turn, it can actually help us make better moral decisions ourselves."

Sinnott-Armstrong describes part of what the team is doing as building a "moral GPS," likening their use of AI to a navigation system that helps individuals make moral decisions effectively. It's important that this "GPS" be more reliable than human intuition, he notes. "Our project doesn't impose any particular definition of morality on others. Instead, what it looks at is the judgments that humans would make if they were better informed, more impartial, and didn't get confused and draw bad, irrational inferences and make all the kinds of mistakes that humans are prone to. I would say the morality that we're trying to build into computers is an idealized human morality. So, it's not that humans define what morality is; it's that humans, when they're at their best, would define what morality is."

"It's not that humans define what morality is; it's that humans, when they're at their best, would define what morality is." - Dr. Walter Sinnott-Armstrong

Schaich Borg says it's vital to remember that "what happens with moral AI is up to humans right now. It's not up to AI." She emphasizes that "we need society to feel empowered to get involved and to share their opinions and to vote for policy-makers and policies that they think are important. It shouldn't just be people in tech or academia or in government who're making those decisions."

While AI offers potential benefits in decision-making, it also raises concerns about bias and instability, necessitating ongoing research and caution in implementation. The team considers the potential consequences of the technology in their book Moral AI and How We Get There, and touches on them in the podcast episode.

Listen with the player above to learn about:

  • The limitations of current AI systems in understanding human values, including variations in moral judgments across cultures and contexts.
  • The pros and cons of using this technology to ease loneliness, whether through AI care bots or as companions that keep humans company.
  • The risks and limitations of fully autonomous systems.
  • Why the team has been impressed with the progress made over the past 5-10 years in making AI systems safer and better behaved, and why they are optimistic about the future.

Learn about the TWCF-funded research project related to this episode.

Watch the related video.

Researchers featured in the video:

Ethicist Dr. Walter Sinnott-Armstrong, Chauncie Stillman Professor of Practical Ethics in the Department of Philosophy and the Kenan Institute for Ethics at Duke University. He is also affiliated with the Duke Institute for Brain Sciences and Duke’s law school, computer science program, and psychiatry department.

Neuroscientist Dr. Jana Schaich Borg, Associate Research Professor in the Social Science Research Institute at Duke University.

Computer scientist Dr. Vincent Conitzer, Professor of Computer Science at Carnegie Mellon University, where he directs the Foundations of Cooperative AI Lab. He is also Head of Technical AI Engagement at the Institute for Ethics in AI, and a professor of computer science and philosophy at the University of Oxford.


Built upon the award-winning video series of the same name, Templeton World Charity Foundation’s “Stories of Impact” podcast features stories of new scientific research on human flourishing that translate discoveries into practical tools. Bringing a mix of curiosity, compassion, and creativity, journalist Richard Sergay and producer Tavia Gilbert shine a spotlight on the human impact at the heart of cutting-edge social and scientific research projects supported by TWCF.