Jul 26, 2021

Managing Extreme Technological Risk with the University of Cambridge (video)

At the University of Cambridge’s Centre for the Study of Existential Risk (CSER), scientists are exploring biological, environmental, and AI risks, and how we can manage them.

By Templeton Staff

“This is the first century when human beings, one species, have the future of the planet in their hands. The stakes are very high. There is a new category of risks which are certainly going to be global and would involve some sort of catastrophic setback to civilization,” says Lord Martin Rees, co-founder of the Centre for the Study of Existential Risk (CSER) at the University of Cambridge. The nature and level of extreme technological risks (ETRs) are difficult to gauge because they have received little serious scientific attention in the past. The Centre collectively examines the top risks to civilization, assessing dangers and considering mitigation.

The most pressing threats to life on Earth fall mainly into three categories: biological, environmental, and artificial intelligence risks. Catherine Rhodes, PhD, of CSER warns that even if one of these threats never reached the stage of wiping out humanity, the suffering it would inflict on billions of people as it unfolded would be staggering. Seán Ó hÉigeartaigh, Executive Director of CSER, concurs. “We can’t fall back on a model that involves making mistakes, learning from those mistakes, and not making the same mistakes in the future. Some of the mistakes that we might potentially make will be of such consequence that we won’t really be able to recover well from them, or the cost will simply be too high.”

To better understand the threats facing our world, the Templeton World Charity Foundation supports the Centre for the Study of Existential Risk.

In studying the existential risks to our planet, researchers aim to foster cooperation among the scientific, commercial, philosophical, and governmental communities, so that the difficult questions sure to arise can benefit from a broad range of perspectives.

Learn more about the TWCF-funded research project related to this episode.

Highlights from this installment of our award-winning “Stories of Impact” video series:

  • “Now new technologies, bio, cyber, and AI are empowering us and they have huge potential for good. But also, by error or by terror, they could cause serious catastrophes so potentially catastrophic that once is too often,” says Lord Martin Rees.
  • The biological and environmental sectors operate within established regulations, but artificial intelligence is advancing rapidly and remains largely unchecked. The future of humankind, and the very definition of what it means to be human, depends on decisions we make in the near future.
  • The work of CSER and others is important because big questions demand collaboration and parallel research to help guide the way through this new set of risks to existence.

Read the transcript from the full interview conducted by journalist Richard Sergay featuring: Martin Rees, co-founder of the Centre for the Study of Existential Risk at the University of Cambridge; Seán Ó hÉigeartaigh, Executive Director of the Centre for the Study of Existential Risk; Huw Price, Academic Director of the Centre for the Study of Existential Risk; and Catherine Rhodes, Academic Project Manager at the Centre for the Study of Existential Risk.

Templeton World Charity Foundation’s “Stories of Impact” videos by journalist and senior media executive Richard Sergay feature human stories and critical perspectives on breakthroughs concerning the universe’s big questions. The inspiring narratives and observations in these award-winning videos portray the individual and societal impacts of the projects that bring TWCF-supported research to life.