Stories of Impact Podcast: Existential Risk
Jun 15, 2023

Existential Risk with Lord Martin Rees, Dr. Paul Ingram, and Dr. Lara Mani (podcast)

Experts discuss threats to civilization and life on Earth, from nuclear weapons to natural disasters, and also share what offers a sense of hope — for the future of humanity and for the planet.

By Templeton Staff

What do experts think are the challenges of predicting tipping points for environmental, biotech, and cyber/AI risks?

Journalist Richard Sergay interviews Lord Martin Rees, the UK’s Astronomer Royal and co-founder of the Centre for the Study of Existential Risk at the University of Cambridge. Joining them in conversation are two of the Centre’s research associates: nuclear war expert Dr. Paul Ingram, and geohazards and geo-communications scholar Dr. Lara Mani. They discuss the Centre’s research on potential risks to civilization and to life on Earth as we know it, from nuclear weapons to pandemics to natural disasters. And perhaps most importantly, they share what gives them a sense of hope — for the future of humanity and for the planet.


Key takeaways:

  • "The collective effect of humans being more numerous and more empowered by technology on the planet is what's leading to climate change, loss of biodiversity, mass extinctions, and things like that. And even natural phenomena like volcanoes and asteroid impacts, although their rate is unaffected by humanity, their consequences are greater in a densely populated world where we depend on technology," says Rees. "Cascading consequences are important. If we take the recent pandemic, it was clearly a medical problem, but had a big consequence for schools, and education, our generation of kids. So that's an example of where different segments of society, different parts of government will have to be engaged in an emergency plan."
  • Ingram feels it is difficult to predict a tipping point before one is actually underway. What he prefers to emphasize is that the pace of technological change, particularly in artificial intelligence (AI) and machine learning, is incredibly rapid and still accelerating. "What previously we thought would take 10 years appears to have taken less than two years." The capacity of large language models to produce "deepfakes" in audio, images, text, or video has created new challenges in distinguishing authentic information from manipulated information.
  • Rees adds to this by noting that the inner workings of machine learning models are not fully understood. These models can exhibit insights and capabilities that surpass human abilities, as demonstrated by AlphaGo defeating the world champion in Go with a move that surprised experts. "This doesn't mean that they're a super brain, because no one is saying they need to be conscious to do all this. It's just that there are many, many different ways of displaying different kinds of intelligence. An example I would give is that back in 1900, if people thought about flying, they'd be surprised if anything could fly without flapping its wings, whereas of course, planes develop differently. And similarly, there are kinds of intelligence, which we think are specific to the flesh and blood in our heads, which could be mimicked effectively by some machine, but it doesn't mean that the machine has the consciousness which a human does."
  • There is a downside to relying on machines for decision-making: hidden bugs or errors in the programs can lead to unexpected or flawed outcomes. Rees cautions against readily delegating to AI important decisions about human beings, such as legal judgments, medical recommendations, or financial assessments. He emphasizes the need for human involvement and for the ability to contest decisions. "The challenge really, is to take advantage of all these benefits but be aware that, since we can't understand what's going on inside them, we have to be cautious about handing over power to them."
  • Mani discusses the challenges of addressing existential risks and global catastrophic risks, particularly in the context of climate change and other potential threats. She sees climate change as a significant global risk with unique challenges. One major obstacle, she points out, is that many people do not feel or realize the immediate consequences of climate change, perceiving it as a distant problem that doesn't directly impact them. This lack of personal connection and immediate impact makes it difficult to garner widespread concern and action. 
  • "For me, there's a moral imperative around making sure that we even just think about existential risks," says Mani. Ingram also recognizes that "the relationship that we have to existential risks and the relationship that we have to each other is part of what defines us as human beings." Rees echoes this and says, "I think to ensure human flourishing in future centuries is a real ethical imperative, and we are putting all that at risk if we don't give more effort to try and to minimize these existential risks."
  • "It's a very complex issue to motivate people to take risks seriously and switch from that kind of extrinsic motivations to intrinsic motivations," says Mani. She believes the first pathway towards this is building awareness, especially at the governance and policy level. Rees agrees, and notes that "we need charismatic influences" as well.
  • "I do have optimism that we can wake up before it's too late, and that we can strengthen our societies and be more caring and compassionate with each other. Because that is not only the right thing to do, it's also the most likely way in which we will escape the risks that we study," shares Ingram. Mani shares, "I'm pretty hopeful that we're on a positive track...lots to focus on around governance solutions, and making sure that this becomes the most important agenda in some of these conversations. I think we are seeing progress to that. I think the COVID pandemic and the Ukrainian war has really put some of these discussions on the table...This is already a really positive movement towards seeing the public's getting better in understanding of what these risks are. And of course, that kind of leads to this change, this kind of societal change for understanding risk."

Learn about the research project related to this episode.

Read the transcript of the interview conducted by journalist Richard Sergay, presented by podcast producer, host, and writer Tavia Gilbert.

Built upon the award-winning video series of the same name, Templeton World Charity Foundation’s “Stories of Impact” podcast features stories of new scientific research on human flourishing that translate discoveries into practical tools. Bringing a mix of curiosity, compassion, and creativity, journalist Richard Sergay and producer Tavia Gilbert shine a spotlight on the human impact at the heart of cutting-edge social and scientific research projects supported by TWCF.