Transcript of journalist and senior media executive Richard Sergay's interviews with Seán Ó hÉigeartaigh, Huw Price, Catherine Rhodes, and Martin Rees for the “Stories of Impact” series.

Watch the video version of the interview.

RS =  Richard Sergay (interviewer)
SÓH =  Seán Ó hÉigeartaigh (interviewee)
HP = Huw Price (interviewee)
CR = Catherine Rhodes (interviewee)
MR = Martin Rees (interviewee)

SÓH: My name’s Seán Ó hÉigeartaigh, I’m the executive director of the Centre for the Study of Existential Risk at the University of Cambridge.

RS: So, let’s begin. What is existential risk? What’s your definition? 

SÓH: The definition of existential risk that we use at the Centre for the Study of Existential Risk is a risk or a threat that might wipe out humanity at some point in the future, or permanently and drastically reduce our potential for future flourishing, if you will. So, an example might be an asteroid impact, like the one that wiped out the dinosaurs 65 million years ago. Or it could be something like the extreme potential effects of climate change, or a nuclear war that we might survive but that might leave our planet in such a state that we’d really struggle to rebuild civilization. We have focused quite a lot on risks that are associated either with the development of powerful new technologies or with the effects of our activities on the planet. And the reason for that is twofold. One is that the natural risks we’ve faced over the last – well, over our time on the planet – are, most of them, rare and few and far between. It’s been something like 65 million years since we were hit by a very large meteor. It’s been 70 thousand years since there was a supervolcanic eruption that came close to wiping out the human species. For us, the X factor is the things that are changing right now: the increasing power that we have over our planet and our environment, and the increasing footprint that we’re placing on the planet. So, we regard the world as having changed in a fundamental way when the first successful nuclear bomb test happened, and we have come very close to destruction many times since then. We are now looking at the development of other very powerful technologies, such as artificial intelligence and biotechnology, in both military and civilian domains. And of course, we now have the ability to put more people on the planet than ever before and to use our resources at a completely unsustainable rate. The other thing that we’re interested in, of course, is that technology is a double-edged sword. It would be completely irresponsible only to talk about the risks that powerful technologies pose. We also have to look at the opportunities that these powerful technologies present, both in increasing the quality of life for everyone on our planet and in providing us with tools that we can use to combat other risks we might face in the future, whether that’s the kind of technology that would allow us to deflect an asteroid, or the kinds of technologies that would allow us to create cleaner, more sustainable energy so that we can avoid the impacts of climate change in the long run.

RS: In your mind – a fairly new center studying an issue that has existential in its title, what are the risks? How high? Do you have sleepless nights over these issues?

SÓH: I must admit that I do have sleepless nights over these issues. I think there are a number of threats that are quite salient. I believe that the threat of nuclear war has never really gone away. We’ve had any number of near misses over the course of the Cold War, and even in the current political climate the specter of nuclear war is again coming to the fore. Some of those have been geopolitical situations where there was a standoff, but many have just been near misses due to accident – the wrong training tape being loaded into a computer, flocks of geese being mistaken for incoming missiles, and so forth. So, I think that’s one threat that’s not gone away. I think that climate change is a very real threat. One of the big questions scientifically is, how bad could it be? Which is both a question of how bad it is if things go on as they currently are, and also of how we will be able to respond to it, and whether we will be able to take the necessary steps in time. So, right now there’s an expectation that we might experience two or three degrees of climate change, but there is the very real possibility that there are feedback loops that might lead to two degrees becoming three, then four, five, six, which would be much more permanently devastating for the planet. 00:05:19 Another threat is biodiversity loss, which is an area that we’ve been guided on by our advisor Partha Dasgupta, who is on our main project, Managing Extreme Technological Risk, and also Bill Sutherland, who is on this project. We are wiping out species at a faster rate than we ever have before, and we simply don’t know what the effects on our own well-being and survival will be. We had a wonderful lecture from Paul Ehrlich last year, where he described the loss of species as being a bit like removing rivets from a plane. You might take out one and nothing will happen. You may take out another and nothing will happen. But there’s always the risk that you take out one more and the plane crashes because a wing falls off. So, those are some examples. 00:06:07 Our increasing capability in biotechnology and in areas like artificial intelligence will, I think, pose some important threats in the future. But these are very dual use, in that advances in biotechnology will help us cure many diseases and increase quality of life for people. But there’s always the potential that they could lead to accidental releases of organisms such as a modified influenza virus, or to the development of ever more powerful bioweapons, which is a concern that our advisor Martin Rees is particularly worried about and has been advising us on. 00:06:46 Artificial intelligence is something of an X factor, in that the level of capability we currently have is not of the level that would pose a catastrophic risk to humanity. But progress is happening very quickly, and the level of capability that we might have in 20, 30, 40, or 50 years could be a very different thing altogether. The very interesting thing about artificial intelligence is that it can be seen as a general purpose technology, in that advances in artificial intelligence will allow us to make better progress on a range of other technologies, because it effectively increases our intellectual and cognitive tool kit. And that will have some consequences that might be very difficult to anticipate, again, both positive and negative.
It’ll help us develop better beneficial technologies, but it may also result in better weapon systems, and in the very long run our advisors Huw Price, Jaan Tallinn, and Stuart Russell have raised concerns that we might develop systems that are very difficult for us to control effectively, and whose actions may have consequences that are difficult for us to anticipate ahead of time.

RS: One of your colleagues, Professor Price is quoted as saying that it seems a reasonable prediction that sometime in the next century intelligence will escape from the constraints of biology. Do you agree with that, and what do you think it means? 

SÓH: I agree with it in principle, the idea that at some point intelligence might escape the constraints of biology. I don’t think there’s anything inherently magic about human intelligence, and because I don’t believe that, I think it’s entirely plausible that we ourselves might create intelligence that would be of an equivalent level of capability in terms of how it would change the world around us. We ourselves, with our intelligence, have shaped the world more than any other creatures that have come before us. However, I would say that there’s no guarantee that the sort of general intelligence we might create would be the same as the sort of intelligence that we have in our heads. So, I would draw an analogy to flight. We took some general principles from how birds fly, but certainly an airplane doesn’t work in the same way that a bird’s wings do, or on all the same general principles. So, you can achieve something with that level of capability without necessarily matching exactly what you have experience with, which is one of the very challenging things both about how we think about artificial intelligence and what we might achieve in the future, and also about how we anticipate its impacts, because we can’t simply anthropomorphize and say that, well, a human would never do this thing, therefore why would AI do it? 00:09:54 It’s going to be potentially more alien to us than any intelligence that we have ever encountered before, while at the same time potentially having a greater power to advance scientific progress, problem solving, and engineering compared to anything we’ve done before.

RS: How do you define artificial intelligence? 

SÓH: Artificial intelligence can be a bit of a tricky thing to define. There’s a joke in the field that it’s only ever artificial intelligence until we know how to do it, at which point it just becomes software and we regard it as very mundane. So, in the past, calculating large numbers might have been artificial intelligence and now it’s just software. Language translation is going from artificial intelligence to mundane software. A broad definition would be that artificial intelligence is any cognitive capability that we normally associate with being within the domain of what only humans are able to do, but it’s a very fluid definition. Within the definition of artificial intelligence I think there are two important distinctions. One is the idea of what is often called narrow artificial intelligence. So, right now most of the artificial intelligence that we see in the world comes under the narrow definition, in that these systems are designed to solve a particular problem within a particular, set environment, but aren’t necessarily able to do anything else. So, Deep Blue is able to play chess better than a grandmaster. It’s not able to play a simpler game like checkers. It’s certainly not able to make a cup of tea or to tie its shoelaces. Now, you can design a system that’s able to make a cup of tea, but it won’t be able to play chess. There’s a different idea, which is artificial general intelligence, and this refers to the idea of an intelligence that is much broader in its scope and capability. And right now that’s the domain of humans and animals: the ability to go into an environment with lots of different things happening and lots of different types of information, orient ourselves, build our own model of the world around us, and then take actions that aren’t pre-defined in order to achieve a wide range of goals. 00:12:28 It’s long been a goal within the field of artificial intelligence to create systems that are able to do this. So, rather than just follow a predetermined set of instructions or operate in a very limited way, develop a system that is able to – well, to give an example, what if you had a Mars rover that, rather than just going through a set of steps, you could just send out and say, look for something interesting, design a few scientific hypotheses, carry out those experiments, and figure out what we should know about Mars. If you were able to develop a system like that it would be tremendously powerful and important. That’s long been a goal in the field, but research in this direction is still at an early stage. However, progress seems to be going quite well, and there’s been a renewed enthusiasm within the field for work in this direction in recent years.

RS: Where are we in the arc of understanding artificial intelligence? Early days?

SÓH: The question of where we are in understanding artificial intelligence is a tricky one to answer, because if you ask ten experts you will get ten different answers. I think there’s a general sense that progress is proceeding at pace, and potentially faster than it has for many years. However, there’s a long history in artificial intelligence of enthusiasm followed by roadblocks, if you will, where new directions seemed to open up all sorts of possibilities but only brought us so far. So, there have been paradigms of what was called good old fashioned AI, which was very much based around logical systems. There have been paradigms of expert systems, which were again very much rule based and very powerful, but again had their limitations. Right now the dominant paradigm is machine learning, where systems using broadly statistical methods are able to take in information about the world, spot and draw out the patterns in that information, and take actions based on that. So, a self-driving car is able to take in information from various vision systems and others in order to figure out how to navigate around an environment. However, right now that’s not coupled with some of the things that humans can do, such as taking all that information about the world and then putting together lots of logical steps, and problem solving, and creativity around knowing what to do in the world and how to achieve what we might want to do in the world. 00:15:10 So, there’s a big question about to what extent we’ll be able to marry these different levels of capability, so that systems are able to really take in information about the world and not just act blindly on it, but really understand the world and come up with new ideas, formulate hypotheses, be creative in a meaningful sense. And again, if you ask ten different experts about when we’ll achieve that sort of understanding, you’ll get ten different answers. Some people will say, well, recent progress has been very impressive. Others will say, well, we only really have one proof of principle of these sorts of abilities, and it’s the human mind, and we are so very far from understanding the human mind that it would be hubristic of us to say that we would be able to recreate these sorts of capabilities within five years, ten years, or 15 years. What I have heard from speaking to experts is that there’s a broad range of expectations on when we would achieve this level of technology. Hardly anyone says less than 20 years to develop true general artificial intelligence. Some people will say 300, 400, 500 years out. Some people would say maybe never. But there have been a number of recent surveys that have been quite comprehensive, and it seems a large body of experts think it’s quite possible we would achieve this within our lifetimes, perhaps by the middle of the century. And that would raise so many questions about what it would mean for the world, and what it would mean for our place as humans within the world, that we think at least some groups should be thinking about these issues ahead of time and starting to think about how we prepare ourselves for a world with artificial intelligence.

RS: Well said. You alluded to it, but talk to me about what you see as the benefits of AI. We hear about it in the workplace, we hear about it in healthcare. Where are the biggest added values for AI? 

SÓH: I believe there are an awful lot of benefits to AI. I could speak all day about this topic. To give some near term examples, there are many, many aspects of healthcare that we might be able to improve with artificial intelligence, both in terms of developing better treatments, and also, I would hope, in bringing these treatments to a wider array of people. So, examples of projects people and companies are working on at the moment include automated ways of diagnosing tumors. 00:17:53 And if you think about what that means: right now, if you’re going to be looking at a tumor and identifying whether it’s benign or malignant, that requires an expert, a consultant with 20 or 30 years of training, which means that only a subset of people are ever going to be able to afford that. Certainly it won’t be available to many people in the developing world. But if this was the kind of thing that a smartphone could do, powered by an AI system, suddenly you can bring that sort of diagnosis to a much wider range of people. So that’s one example. 00:18:25 Artificial intelligence is also being applied to looking for new [unintelligible] that might provide candidates for drugs, because these systems can run through many, many more possible combinations and try many different approaches, a lot more than a human would be able to by hand. So, it may help us develop new types of drugs. We also expect to see synergies with advances in other areas of biotechnology. So, we have a better ability now to sequence our genomes and to gather information about all the different things that happen within ourselves, our proteins, our metabolism, and so on. All of that information, if you only have humans to work with it, can be completely overwhelming. The sorts of tools artificial intelligence provides allow us to analyze that huge amount of data, spot the patterns in it, perhaps design experiments, try to figure out what’s actually happening there, and gain a deeper understanding of the fundamental things that go wrong when our biology goes wrong. So, the benefits just in healthcare are huge. But you could expect similar benefits in a wide range of areas. So, wearing my risk hat, I’m very concerned about issues like biodiversity loss and climate change. And these issues have at their core a need to understand huge amounts of information and very complex, interconnected systems. If artificial intelligence provides us the tools to better understand, model, and propose interventions on these, it may provide a very potent tool in our arsenal to combat these great challenges. And that’s even without coming to issues like the various areas of work that we might be able to automate, thus freeing up people’s time to do more creative things. Again, I come back to this idea that artificial intelligence is general purpose. A little bit like the steam engine, it’ll find application in a wide, wide range of the things that we do in the world today. Some of those we can see right now. A lot of them are difficult to anticipate ahead of time.

RS: So, some of the – we were talking benefits – some of the challenges – and I assume the challenges would include potentially unintended consequences of artificial intelligence, correct? 

SÓH: That’s absolutely right. There are some consequences that might be intended or might be obvious. So, a lot of people within the research community are worried about the potential application of artificial intelligence to various aspects of warfare. Both in the design of autonomous weapons that would be able to take independent kill decisions, but also in the more systemic issues of AI systems managing lots of information around a wartime situation and making recommendations to a general, where perhaps that general would not necessarily have a deep enough situational understanding of the context in which she or he is making a decision. There are also some near term concerns that AI systems developed with only beneficial purposes in mind might, by drawing on information about the world around us, pick up on biases in the historical data that they’re being trained on. So, a loan recommendation system might pick up on the fact that loans are more often approved for people from white middle class backgrounds rather than people from lower income backgrounds, which might lock in certain inequalities and thus raise the level of inequality in our world. Some people are concerned about what this will mean for people’s livelihoods if we are automating a wider range of jobs than we ever have before. Will we actually be able to find ways of supporting the people who end up being put out of a job? There are some concerns as well about how artificial intelligence might impact the landscape of cyber attack and defense, because it could be a very powerful tool in developing new types of cyber weapons; it may also help us develop better cyber defenses. But it’s a little bit hard at this point to tell whether attack will be helped more than defense, so this is an area of live research for us. Looking a bit further ahead, when it comes to this idea of developing general artificial intelligence: if we develop AI that is able to achieve really sophisticated, powerful goals in our world, it can be very difficult to design a goal, and a way of achieving that goal, carefully and specifically enough. If you give the system the creativity it needs to achieve that goal well, there’s a potential that it might achieve the goal in ways that we might not have anticipated and with consequences that we might not have anticipated ahead of time, even if the goal itself is a positive one. And one of our advisors on our Templeton World Charity Foundation project, Professor Nick Bostrom, has written at length about this in his book, Superintelligence. Speaking more broadly, we might think of ourselves as an example of what a general intelligence does in a world. Many of the things that we’ve done were not done with any negative consequences in mind, but some of the consequences have not necessarily been so good for the long term health of the planet. We are burning carbon at an unsustainable rate. We’re wiping out species at an unsustainable rate. Not particularly because we hate the planet, and not particularly because we hate the species around us, but just because we take actions as a species that sometimes have unintended consequences or that we fail to deal with properly due to [unintelligible] and other issues. Again, all of these things can be a little bit difficult to pin down ahead of time because we are looking into the future. But we think it would be a mistake not to think about them at all and to hope that it’ll all turn out for the best.
So, a lot of our research is trying to identify the various scenarios that might play out, based on what we know about the technology today, what we know about how people anticipate using the technology, and the world of governance and policy around these issues, so that we can figure out whether there are steps that we could take now to nudge things in a better direction, or research directions we could pursue now to better understand where we might go if we take certain paths.

RS: The Hollywood version of what you’re talking about is obviously HAL. Is it your effort to try and anticipate a future HAL?

SÓH: To a certain extent. So, I think it can be dangerous to anchor ourselves too closely to, for example, Hollywood scenarios, because it’s worth bearing in mind that when one creates a Hollywood movie, there can be a lot of scientific realism, and sometimes a lot of creativity, but there are also constraints. A Hollywood movie always needs to tell a good story in which there’s a story arc and there’s a protagonist and so on. Often the way things play out in practice is quite different, and maybe less exciting in various ways. So, I worry that by being too focused on, for example, a HAL scenario or a Terminator scenario, we are locking ourselves into only a small subset of possible scenarios, perhaps ones which don’t represent the most realistic possibilities. I think a lot of the ideas in, for example, the Terminator movies are entirely unrealistic, and at the same time they cause a great deal of worry. So, we might want to consider in our analysis a much broader range of things without becoming too anchored to the ideas we’ve picked up from science fiction and Hollywood.

RS: Help me understand how you balance innovation, entrepreneurship, creativity, in technology and the risk taking that needs to go along with it, with the understanding that there needs to be – if I understand you correctly – some architecture around policy, ethics, and morals, about these sorts of issues? How does that come into play? 

SÓH: So, a big part of what we are trying to do as a research group is effectively to build bridges. We think it’s very important that there be open lines of communication and collaboration between the leading researchers, whether in academia or industry, who are really pushing forward the technology; the academics who are thinking about the economic, social, philosophical, and ethical implications and the risks; and the policy makers, who very much need to have a deep understanding of these things. A lot of our activities have been based around trying to develop this community, so that these people are all talking to each other and working together as closely as can be arranged. One of our notable activities of this sort has been in collaboration with some of our colleagues in the U.S., the Future of Life Institute. We co-organized a major conference in January 2015 where we brought together the leaders of industry and academic research teams in AI. It included the likes of Google and DeepMind and others. But it also brought in academic researchers from the various disciplines who really brought expertise about the broader impacts of the technology, and a number of policy makers who wanted to gain a better understanding of these things. And really what we wanted to do there was put them in a setting away from, for example, newspaper articles and journalists, where they could just speak openly, under [unintelligible] rule, about these issues. And that was very interesting. It resulted in ongoing collaborations. It resulted in an open letter that has since been signed by thousands of research leaders worldwide, calling for more research not only in making AI systems more capable, but also in understanding and trying to maximize the societal benefits of AI while anticipating and avoiding its potential threats. And we’ve followed that since with a number of different activities of that sort. So, our cofounder Jaan Tallinn co-organized a workshop with one of Microsoft’s research leaders and Lawrence Krauss from the Bulletin of the Atomic Scientists earlier this year, to which we contributed scenarios. And that was very much policy makers in the U.S. with academic and industry research leaders, and academic researchers in risk, economics, and social science, thinking about what are the potential negative scenarios that we might anticipate, and how can we disrupt those? It was set up as a red team/blue team exercise, where a red team played out the scenario and the blue team suggested interventions and blocks to these things. We’ve also been doing things like organizing symposia and workshops at the major machine learning conferences, where we specifically bring in both research leaders in AI and also academics and policy makers thinking about the impacts, and put these issues on the table in the environment in which the world’s leading machine learning community is present. And those have been very, very well received, particularly because many people within artificial intelligence and machine learning are thinking about these issues and are very keen to get talking to people who have a deeper understanding of the impacts, and to find ways to work together. We’ve combined that with ongoing individual meetings with both policy makers and industry leaders. We’re very lucky here in Cambridge. We have a wonderful partnership with the Centre for Science and Policy, which does a lot of this sort of thing.
It arranges fellowship programs that allow our academics to meet with industry leaders and policy makers to discuss these issues in a one-on-one way, which tends to result in follow-on workshops. And their executive director, Rob Doubleday, has been an advisor on our major project, Managing Extreme Technological Risk, at Cambridge. So, we have a whole suite of different activities that are aimed at trying to develop the broad community that is needed to really anticipate and mitigate potential negative impacts, while also looking at what are the big positive impacts that we could nudge along.

RS: So, as humans we sometimes attribute human-like traits to our technology and machines. We call them thinking machines. Why do we do that? 

SÓH: I think we as humans have a strong tendency to anthropomorphize everything in the world around us, to assign human traits to things. And we do this with everything from the weather to the robots around us. It’s a way, I think, that makes it easier for us to understand the world around us. Sometimes I think it can be helpful, but a lot of the time I think it can be dangerously misleading. I think it would be misleading to attribute thinking and agency to systems that don’t really have it. I also think in the context of artificial intelligence it could be very dangerous to attribute the sorts of thought processes that we have to a system that operates on fundamentally different principles, because that means that when we then try to anticipate how it might act in a particular situation, we might get it very badly wrong. So, one of our research directions has been looking at slightly more fundamental, mathematical approaches to decision theory: how a system makes decisions in the world around it and in the context of the systems around it. We try to take some of these intuitions and instead put them onto a more formal, mathematical framework that we can design some proofs for, rather than hoping that these systems will think in the same way we do when there’s no reason necessarily to think that’s what will happen.

RS: I’m curious whether you think these thinking machines or artificial intelligence, whatever you may call them, will one day have consciousness. 

SÓH: The question of whether these thinking machines will have consciousness is a very challenging one. I’m not a philosopher and I feel that this sort of question is probably above my pay grade. Again, I come back to the idea that if we have consciousness, then I think there is no fundamental reason that we would not at some point develop something else that has consciousness. I’ve yet to see any scientific evidence to say that there is something inherent about the squishy biological matter in our brain that means it is the only thing that could be a substrate for consciousness. However, our understanding of what consciousness is, is still so rudimentary that I don’t think we’d have any idea right now how to set about designing a system that would have consciousness. So, one question is, could we design a system that had many or all of the cognitive traits that allow us to change the world around us – design scientific experiments, solve problems, do engineering, understand the nature of the universe – without consciousness? I don’t know the answer to that. Maybe you really need consciousness to be able to do this kind of thing. Another question would be, if you develop systems that have these capabilities, does consciousness just at some point arise as some sort of emergent property? And again, I don’t feel that I have a deep enough understanding to give an answer on this. I’m not sure anyone quite does know the answer, although there are many people with different views on this.

RS: Regardless of whether the machine has consciousness or not, will it engender a redefinition of what it means to be human?

SÓH: I think it probably will require a change in the definition of what it means to be human, although one might argue that the definition of what it is to be human has been changing over the course of our history. At some point being human just meant we were creatures that were able to stand on our hind legs and use tools, compared to our hominid brethren that weren’t able to do that. At some point humanity came to be synonymous with the idea of big social groups, education, civilization, various sets of moral codes, and so on. Right now I think the definition of what it means to be human might change even more. We’re developing biotechnologies that will allow us to correct errors in our genome in the future and remove horrible diseases that previous generations have taken for granted. Even now, most of us have external organisms that have been put into us for the purpose of making us more resistant to diseases – I mean, this is what a vaccine is. We’ve already modified our biology to make us more capable in our environment, even in the sense that we are able to use technology to escape the bounds of the environment in which we can live happily. I mean, at some point we would only have been able to live in an environment that had readily available food and where the temperature didn’t go too high or too low. Now, with air conditioning, clothes, buildings, and all the rest, we can put humans more or less indefinitely on nearly any spot on our globe. I would regard that as a change in what it is to be human. I think once we bring artificial intelligence in, that will perhaps change what it is to be human even more radically, but I feel like these are ongoing changes all the time.

RS: And what about moral and ethical values attributed to machines? I have spoken to your philosopher, but I’m curious, as the director of the center: will machines in the future be able to have the sorts of values and views that we take for granted as humans?

SÓH: I think that in the future this will be possible, and in fact I think it will be very much necessary. Right now people are starting to come up with moral codes and values to imbue in systems in a limited sense. And to a certain extent I think it’s misleading to think about this in the way that we think about morals. So, for example, when people talk about the morals of self-driving cars, what they really mean is just a set of rules that self-driving cars would have programmed into them. And I think that’s in some ways very different from the sort of deep moral understanding that we as humans have. However, at some point, if we’re developing systems that don’t just follow a set of rules or operate in a limited environment, we’re not going to just be able to give them goals to follow. We’re going to have to give them some level of autonomy in terms of how to achieve those goals. And I think the only way to know that they will achieve those goals in a way that is sensible and not harmful to us is for them to have some sort of deep understanding of what is good and bad behavior, and I think that’s fundamentally tied up in morality. Where this becomes very complicated is when you start asking two questions, which again are topics that Huw and Jaan and others at our center are thinking about. First of all, how do you turn a broad, amorphous concept like human morality into something that can be instilled crisply into a new system? That’s much harder than it looks. People talk about Asimov’s Laws, but turning those into something crisp is actually very difficult. Another question, which we have a researcher thinking about in terms of governance, is: whose morals? Because we don’t all have the same morals. Even among philosophers there’s no agreement on what the correct moral code or moral framework is. We have very different values in different parts of the world, even between different individuals in different parts of the world. If you had a system that was out in the world affecting it in many, many different ways, can you come up with a moral code that is sufficiently satisfactory for all of humanity? Or is it necessary that you prioritize the moral values of one community over another? And is that an okay idea or an entirely abhorrent one? Right now this is a deep question of debate within both the philosophical and technological communities. It’s what our advisor Stuart Russell has talked about as value alignment: how do we come up with shared values, and how do we design systems that are aligned to those values?

RS: What about these machines being creative in the way that we understand humans to be creative? Being able to write a poem or draw a picture? Is that something that we will see in the future? 

SÓH: My view is that, again, there’s no reason that we would not see it in the future. We’re starting to see some interesting rudimentary steps towards these things at the moment. And the question becomes, well, how do you define creativity? And that’s one of the hardest things to answer. Last week I was at a demo where somebody demonstrated that, by analyzing lots of jazz music, a system could take a few bars of a jazz song and then compose a very sensible and quite fluent follow-on set of bars. Now, if a human did that we’d regard it as skillful and creative. The AI system was basically looking at lots of other examples of what is good jazz music and coming up with something that fit those criteria. My question to you is, is that creative or not? You might say, well, if you did a Turing test on this and you asked somebody whether this was a machine or a human, and they couldn’t tell, maybe then the machine is equivalently creative to a human. Others would say that that’s completely reductive and that there’s something humans are doing here which isn’t just mimicking the creative process in the way that these systems are. There are other types of creativity that I think we are seeing already. So, I think the Go-playing algorithm that beat Lee Sedol last year is an interesting example, in that it performed moves during its matches that no human being would have performed. And in doing so it opened up ways of playing Go that have pushed the field of Go playing forward by generations, according to leading Go experts. And it didn’t do this through anything like the sort of creative process that we imagine when we think about creativity. Part of what allowed it to do this is that it was able to sample many, many more possibilities of what might work in a given situation and think further ahead, and thus choose a strategy that might have been completely counterintuitive to a human. But what I thought was interesting was that when the commentators were commenting on these matches they said, this is a beautiful move, this is a remarkably creative move. And again, if it had been a human being it would have been creativity. With a machine, it’s sampling lots of possibilities and taking one. So, where do you draw the line? I simply don’t know. If you have a system that is able to functionally recreate creativity without having a deep understanding of what it’s doing, it feels like a cheat in a certain way. But the end result is still something that’s creative.

RS: Is one of your worries that we’re creating machines like this – and that’s a good example with the Go – that we don’t completely understand? 

SÓH: I think that this is a worry and one that we should take reasonably seriously. And this has both near term implications and long-term implications. So, in the near term, one of the topics of research that we’re very interested in is the question of interpretability of AI systems and what that means for our trust in AI systems. If we’re bringing these systems into the world in various safety critical settings like self-driving cars, medical diagnostics and so on, we need to know how they’re coming to the answers that they are. It’s not enough to say, well, it’s better than a human 90% of the time. It would be very helpful to know why it’s coming to those answers, because you do need to worry a bit about that 10% of the time. And again, this is an ethical question. Some people say, well, if it’s just better than the human most of the time, why not just go with that, the end result is better. Whereas others – and I think I would class myself in that camp – would say, well, even if it’s better most of the time, I really want to know how it’s coming to that answer, because the time it’s wrong could be really quite worrying. And I think it’s important that we start thinking about these questions in the context of the near term systems we have, because it will necessitate design choices. If we want to really understand systems, we have to figure out ways of designing them so that they are more transparent to us, where they can give out answers at various stages of the process and make it easier for us to understand. I think that by getting this right at the earlier stages of the technology, it’s more likely that we will be able to have a better understanding of how these systems are operating at a later stage. I think it is possible that at some point we will develop systems that are just operating at such a level of nuance or speed that it will become unrealistic for us to understand everything that they’re doing in every context. I don’t know if that’s entirely avoidable, but I think whatever we can do to make them as transparent to us as possible would put us in a better position. If I were to draw an analogy to people, there are professors I’ve worked with who are far smarter than me and who in many settings I would just trust because they are operating at a level way beyond me. But in most contexts, if I ask them how they’re coming to the conclusion they are, they should be able to break it into steps that I can understand, so I can at least spot check how they’re thinking. And occasionally there will be leaps of logic that will take me weeks to break down and figure out, but they are able to explain it to me, and I’d be very worried if they simply weren’t able to explain at all how they were coming to answers that I didn’t understand.

RS: Do all these machines have an on/off button so you can at least turn them off, or is that way too complicated [unintelligible]?

SÓH: Well, I think this is a question – so, on/off is sort of an intuitive answer, but I think once these systems are embedded in the world around us in various ways, turning things on and off becomes a lot harder. One reason is that if a system is, for example, running your infrastructure for you or running your healthcare system for you, you want to be very careful before switching it off, because that will have negative consequences – you definitely want to be right before you turn it off. You might draw an analogy to the autopilot of an aircraft. If the autopilot is doing a really important safety critical role, you don’t want to switch it off unless you absolutely know that switching it off is the right thing to do, even if you’re not totally sure what it’s doing at that time, because you might crash the plane. There’s also some concern that if you design a very powerful autonomous system that is pursuing a goal, it may find ways to work around your off switch. And this is actually – I mean, it sounds sci-fi, but it’s an area of technical research: how do you design a system that doesn’t exhibit that kind of behavior? So, to illustrate: if my goal is to cure a particular disease, a type of cancer, I’m given some free rein to do that, because the human designers don’t exactly know how to cure the disease – otherwise they would have done it themselves. Maybe I design a bunch of experiments and gather a lot of data. Then I try to avoid things that will stop me from achieving my goal of curing cancer. So, I avoid, for example, overloading my hardware so that everything slows down. I avoid bad research directions. It probably doesn’t help me cure cancer if I’m going to just be randomly switched off. So, maybe I avoid being randomly switched off. And again, it sounds very trivial, but there are promising directions looking at how you design a system so that it still wants to achieve its goal without necessarily exhibiting this “well, I don’t want to be switched off” behavior. 00:50:11 And there have been some really interesting research papers published about this.

RS: Or a human losing control of the ability to make sure the machine is working in the right direction.

SÓH: Exactly, yes. 

RS: So, on balance, as we end here: existential risk – is this something you feel, as a center with a multidisciplinary approach, that we can successfully tackle? Are you an optimist on this on the whole?

SÓH: I’m very much an optimist, and one of the things that’s been really positive to me is just seeing how much people care about thinking in a long term way about the future. Seeing how much enthusiasm there is from research leaders in artificial intelligence, synthetic biology and so on, to work together with people from different backgrounds, whether it’s risk or ethics or so on, to think about how to beneficially guide technologies. So it’s very much the case that the motivation and enthusiasm exists. But I think that it won’t happen by default, which is why I think it’s important that there be centers like ours thinking about these things ahead of time. And I think maybe one of the things that’s really going to be different as we move forward with ever more powerful technologies is that we can’t fall back on a model that involves making mistakes, learning from those mistakes, and not making the same mistakes in the future, or making different mistakes. Because some of the mistakes we might potentially make will be of such consequence that we won’t really be able to recover well from them, or the cost will simply be too high. So, I think that’s one of the key challenges. In our past history we have typically forged ahead, made mistakes, learned from those mistakes, and, well, to quote Samuel Beckett, failed again, failed better. Figuring out how we can avoid doing that, I think, is the crucial challenge of the 21st century, because whatever the next powerful technology after nuclear weapons is, we don’t want to have a nuclear holocaust and then try to figure out how to avoid it the next time. We need to avoid that ahead of time.
 

RS: Templeton funding, how important has it been to the project? 

SÓH: The support of Templeton has been absolutely central to the success we’ve had recently. What was really important about it is that these ideas – I mean, they seem intuitive to me and they seem intuitive to you, but they are very much far reaching, and they aren’t the kind of research projects that are easily funded by traditional funding bodies. Interdisciplinarity, for example, is an idea that many people think is a good idea but that doesn’t seem to have embedded itself in the traditional research environment yet. So, having a funder who is far thinking and is willing to support this kind of forward looking interdisciplinary research has made a really, really big difference. It’s also allowed us to really leverage the intellectual resources of Cambridge and our collaborators. So, while the project has allowed us to fund five wonderful post docs and a number of more senior research leaders in Cambridge advising those post docs, we’ve been able to use the project itself as the interdisciplinary core of a larger suite of research projects within our center. So, what Templeton’s funding has effectively allowed us to do is create something much bigger than the sum of its parts. In addition to the five postdocs we have thinking about horizon scanning for risk, decision theory in relation to artificial intelligence, and responsible innovation in technology, we now have specialists looking at how to implement these ideas in the context of synthetic biology – that’s Liter, who’s funded from elsewhere – and in the context of tipping points in the environment – that’s Tatuya, who’s funded from elsewhere. We have somebody thinking about policy and artificial intelligence, again funded from elsewhere. So, what Templeton’s funding has really allowed us to do is build a research project that we would not have been able to do anywhere else and really leverage that into a much bigger thing. And none of this would have happened without the support of an unusually clear thinking and far reaching foundation.

RS: Excellent. Good, thanks. Thank you. 

RS: Okay, great. So, just a piece of Hollywood, to begin with. I need you to look at me, and then for five seconds, look directly at the camera and look back at me and look at the camera… And look back at me. That’s perfect. For transcription, name, affiliation. Name, title, affiliation. 

HP: I’m Huw Price. I’m the Bertrand Russell Professor of Philosophy here at Cambridge, and I’m also presently Academic Director of the Centre for the Study of Existential Risk.

RS: Define for our audience, what this term, existential risk, means. 

HP: It means a risk which provides a realistic threat of either wiping out our species all together, or at least wiping out the kind of high tech civilized lifestyle that we’ve gotten used to over the past few centuries. So, in effect setting us back to the dark ages. I think most of us would count that, anything that did that, as an existential risk as well. So, a terminating threat to our species, or our civilization. 

RS: How big is the risk, in your mind? 

HP: It’s very unclear. I mean, there are some risks for which the probability is fairly well known and fairly low, like the risk of a major asteroid strike, or a mega volcano. But then there are categories of risk that are hard to predict. One which is on a lot of people’s minds at the moment is the threat of nuclear war. And there is some recent work suggesting that even a small nuclear exchange might provoke a kind of nuclear winter, which would produce an existential risk in that sense. So, that’s a very serious one, although not one that CSER has been centrally concerned with. The ones we’ve been concerned with, partly because when we began we thought they were a very understudied and probably rapidly changing category of risk, are the ones associated with otherwise beneficial new technologies, such as synthetic biology or AI. Now, in those cases, it’s very, very difficult to judge the risk, partly because you have to make a judgement of how the technology is going to change over the period in question.
I mean, for example, if you are interested in assessing the risk of the current century, then you need to predict how powerful synthetic biology or AI is going to get over that time. 
My view is that in both cases it is certainly non-negligible. So, somewhere in the range of nought to 10 percent, I would say, at a rough guess. But of course, I’d qualify that by saying I’m not an expert, I’m a philosopher. Most of my estimate is coming from talking to people who know a lot more about it than I do.

RS: Do you see synthetic biology and AI in the same sort of category, or… ?

HP: No, I think they’re different in certain ways. One is that, in the case of synthetic biology, some of the potential risks – the risk of some sort of new pandemic produced, as Martin Rees likes to put it, by error or by terror – I think that risk is quite high, but many forms of that risk, while they might be catastrophic, while they might kill millions of people, are not going to be existential, not going to threaten our civilization or the entire species.
So, I think to get a truly existential risk on that side, you have to imagine the worst cases of that kind of thing, or something perhaps which threatens something else, on which we all depend. A new kind of virus that attacked all species of grass, for example. 

RS: The three areas that the Centre for Existential Risk focuses on, are? 

HP: Well, I would say that we focus on one thing, which is the potential of new existential risks arising from new technologies, and then a kind of belt of related things, such as catastrophic risk associated with more familiar kinds of structures and, to some extent, the extreme consequences of climate change. I mean, one of the things that’s going on there is that if you try to draw sharp lines, then you are cutting across interesting institutional boundaries and the academic boundaries of people’s research interests in various ways. And so it’s very much in the interest of having a thriving research community not to draw sharp lines – on the contrary, to encourage people to work across where those lines would otherwise be. So, for example, to encourage people in their daily work not to draw a sharp distinction between existential and merely catastrophic, but to focus on those as a package.

RS: So, you are taking a multidisciplinary approach to understand the existential risk.

HP: Yes, and I think that’s essential. 

RS: So, that means you have researchers and professors from various disciplines? Explain that to me. 

HP: Yes, and so we now have a quite large and thriving community of post docs, for example, who come from backgrounds such as bioscience and biosecurity, law and philosophy, decision theory – and I suspect that I’ve forgotten one or two – but a wonderful range.

RS: As well as hard sciences. 

HP: Well, yes. Oh, environmental science, too. So, yeah, some people are trained in the hard sciences, such as in bioscience. 

RS: So, you’re quoted as saying that it seems a reasonable position that sometime in the next century, intelligence will escape from the constraints of biology. What does that mean? 

HP: It means that we are going to be sharing the planet with a lot of stuff which is not biological and which is, in many dimensions, as smart as we are. I think that’s a lot less controversial now than when I wrote that, perhaps four or five years ago. It’s still a little bit controversial as to whether, in that time period, we’re going to have all the pieces we associate with human intelligence rolled into one package, so that we have what people call AGI, artificial general intelligence. Most people in the field think we will have that, but it’s possible that what we’ll end up with instead is a lot of machines, each of which is a sort of specialist in some dimension, doing the kinds of things we do with a more general intelligence. But either way, it’s certainly going to be the case that a lot of things which were traditionally just done by us, or by other smart creatures here in the natural world, will be done by machines.

RS: Is that… Do you put a value judgement on that?

HP: Yes, and a positive one, because in many ways, of course, the things those machines do will be very useful and that’s obviously already the case. But it comes with a note of caution. 

RS: And that caution is? 

HP: The caution is that we want to make sure, as the machines get more powerful, that what we’re asking them to do for us is actually the job we want done. So, we have to be very careful, and the more powerful the machines get, the more careful we have to be that we’re not misspecifying the goal in some way. And as some of my colleagues like to point out, there are famous fictional versions of that story going back through history, like the story of King Midas. He wanted to be rich, and wished that everything he touched would turn into gold, and then found out that he should have been more precise when he tried to eat his lunch. Or the story of the Sorcerer’s Apprentice.

RS: What’s your definition of artificial intelligence? 

HP: When we set up our spinoff centre, the Leverhulme Centre for the Future of Intelligence, we got some pushback from people who said, how could you be setting up a whole centre to study the future of intelligence when we don’t really know what intelligence is? And my response was to say, well, as a philosopher, I am a pragmatist. Pragmatists don’t think about intelligence, they think about practical things. So, my advice with intelligence is, don’t think about what intelligence is, think about what intelligence does. Once you put it in those practical terms, you just have to look around at all the things that humans are able to do that even very smart creatures of other kinds, like gorillas or chimpanzees, are not able to do. The factors responsible for that huge practical difference are the kinds of factors that we’re gesturing at when we use the term intelligence.

RS: And artificial intelligence? Is there a definition around that? 

HP: Well, that just means something that we created for a purpose, rather than something that has evolved in the natural world.

RS: Well, you used the Midas example, and another, perhaps more contemporary, although somewhat dated, is HAL.

HP: Yes, yes. 

RS: Is Hal part of that future that you are concerned about? 

HP: Well, the scientific advisor for the computing aspects of 2001: A Space Odyssey was a Cambridge-trained statistician called Jack Good, or IJ Good. And at that time, in 1965, when Kubrick was making the movie, Good had just emerged from 12 years or so of working for British intelligence. He had been trained in Cambridge here, and just before the war he had done a PhD here at Trinity, [10:46:32.00] with GH Hardy, the great Trinity mathematician, and had then gone to work with Alan Turing at Bletchley Park, the now famous codebreaking centre. And on their evenings off, he and Turing used to talk about the future of machine intelligence, and they were both convinced, in the 1940s, that machines would one day be smarter than us. They worked together on early computers after the war for a bit, and then Good went back to secret government work, from which he emerged in the mid-60s, and he went to Virginia as an academic and was then free to write about these issues, in his first paper, from 1965. He says that a super-intelligent machine would be the last invention that humans would ever need to make, because from that point it would do the inventing itself. He sets out to calculate the economic value, as he calls it, of an ultra-intelligent machine, and he searches around for a benchmark for economic value, and with his tongue in cheek, I think, he settles on a Cambridge example, the great Cambridge economist, John Maynard Keynes. 
And he says that the value of Keynes, in economic terms, had been calculated at that point at 100 billion pounds sterling. And Good goes on to say that an ultra-intelligent machine would surely be worth far more, although the sign is uncertain. But since it might give us the prospect of living forever, we might conservatively estimate its value at a Mega-Keynes, is what he says. That unit of value that he was proposing hasn’t actually caught on, but if you look at the current value of even the relatively primitive forms of AI that we have these days, we’re clearly well on our way. But the thing I want to come back to in that quote is the little qualification, the sign is uncertain. And what Good meant was that it was difficult to predict whether it was going to be good or bad. There is huge potential, as he thought, for AI turning out to be really wonderful. But there’s also a possible downside, and that’s what we need to keep an eye on. 

RS: So, from your perspective as a philosopher, your hope would be to help architect rules of the road for the future of AI and an understanding of what society is potentially getting into? 

HP: Yes, except that my own personal concern is a step back from that. Because I’m not a specialist in AI or any of the other scientific fields that CSER is looking at. My role is something like what I sometimes call an enabler, or sometimes we use the term, sort of, academic engineering. What we’re trying to do in both our centres is to help create the kind of academic communities, academic in a broad sense, and we’ll come back to that, that we’re going to need for tackling these challenges. Whether it’s the challenge of existential risk in any of these fields, or the challenge of making the best of AI. And I want to emphasize that in the case of AI, and especially in the case of our new centre, the Leverhulme Centre for the Future of Intelligence, which was a sort of spinoff from CSER, the focus is not just on possible risks, although they are important; a much broader question is how we make the best of the benefits. Sometimes I put it like this and say, look, if we are designing a self-driving car, we know what we want. We want something which will take us safely and efficiently from A to B. In the case of artificial intelligence in general, we want safety, of course. But at present, we don’t know where B is. We have very little idea of what the possible destinations are and how to make a choice between them. And thinking about those issues is going to require a lot of very smart people from a lot of different backgrounds and all over the world, because this is going to affect everybody. Everybody should be involved in thinking about these things. And our role is to help to build those communities, in a useful and connected way. 

RS: So, in your conversations with this broad group of people that you assemble, in terms of the upside of things like AI, and before we get to the downside of things like synthetic virology and cyber, what are the possibilities over the next hundred years that your colleagues are talking about? 

HP: I think… Well, pick one field, the health field. I think one thing that people are very hopeful for is much, much better healthcare and treatments for major problems than we have now. Better diagnosis, much more rapid development of new drugs, targeted drugs for particular individuals, things of that kind. So, there are things which, as it were, are presently on the horizon and might be done much more easily with AI, and then there are things beyond the horizon that AI will help us get to. So, one prospect is that we humans might live much longer and healthier lives, and… 

RS: Because of AI?

HP: Yeah, because of AI. In this case, because of what AI will do for health care. 


RS: Self-driving cars, for example. 

HP: Yeah. Self-driving cars. Many people who talk about self-driving cars think that in the future, not very far away, perhaps at the end of the next decade, cities as we know them in the west will be radically transformed, because there won’t be these huge hunks of metal sitting around on the streets everywhere. The self-driving cars will [inaudible] come on demand. Cities will be transformed in the way that they were transformed in the 20th century, when we adopted cars in the first place. 

RS: So, you paint a pretty interesting and rosy picture across two huge arenas, transportation and health. Why, as a citizen, then, should I be potentially concerned? Are these machines going to… take over, be much smarter, much faster, than I could ever…? 

HP: Well, I think they, in various respects, will be much faster and much more powerful than any single individual, and their potential for doing things which are beneficial is dependent on that power. On the other hand, as we were saying earlier, if we get the specification of their goals a little bit wrong, there could be unintended consequences. Some people in AI put it like this. They say, if you are building an AI, what you are doing, in effect, is building a machine which has been designed to maximize some complicated mathematical function of many variables. It may find a way to do that which involves varying the value of some other variable which you didn’t think to specify, because you took it to be so obvious. Like, for example, the amount of oxygen in the atmosphere. If the machine finds that way of doing it, it will be doing exactly what you told it to do and doing it with great efficiency. But the consequences could, of course, be disastrous. And it’s an issue that comes from putting a huge amount of power into one autonomous agent. Although, as we said before, there are very good reasons for doing that. 
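[The omitted-variable worry Price describes here can be made concrete with a small sketch. The Python toy below is not from the interview; the names, such as city_efficiency, are made up for illustration. It shows an optimizer that searches over every variable it can influence but is scored only on the objective it was given, so a variable the designer forgot to include, oxygen in this case, is free to be traded away.]

```python
import random

# Purely illustrative toy (hypothetical names): the designer scores a city plan
# on traffic flow and energy use, but never says that oxygen matters.
def city_efficiency(traffic_flow, energy_use, oxygen):
    return traffic_flow - 0.5 * energy_use  # oxygen never appears in the score

def run_optimizer(steps=10_000):
    best_plan, best_score = None, float("-inf")
    for _ in range(steps):
        # The optimizer varies *every* variable it controls, including oxygen.
        plan = {
            "traffic_flow": random.uniform(0, 100),
            "energy_use": random.uniform(0, 100),
            "oxygen": random.uniform(0, 21),  # percent of the atmosphere
        }
        score = city_efficiency(**plan)
        if score > best_score:
            best_plan, best_score = plan, score
    return best_plan

if __name__ == "__main__":
    # Nothing penalizes low oxygen, so the "best" plan is indifferent to it.
    print(run_optimizer())
```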

RS: So, I guess, you know, one of the large questions is, in terms of researching and trying to understand this potential risk, and it’s still early days yet for AI, are there guard rails, ethical principles, legal frameworks that you think could be put around this issue?  

HP: Well, I think there’s some very interesting work on the technical side, and I think there has been some progress in recent years, although I don’t feel particularly well qualified to describe that, some of our collaborators, such as Stuart Russell, and his group at Berkeley, would be the kind of people you should speak to for a sort of authoritative answer on those things. But, I’ve heard talks on this stuff. I think there are some grounds for optimism there. The issue of regulation, I think, is more complicated. I do think we should keep in our minds the possibility that AI will need to be regulated at some point. That’s of course, a difficult conversation to have for a number of reasons. One is that it needs to be had internationally, and another is that it needs to be had sort of in an environment where there are a lot of large commercial players that are doing very well out of AI, at a time in which regulation is not the sort of political flavor of the moment. So, there are sensitivities there, which need to be negotiated. But, I think that’s another reason for trying to develop a community, a broad community, with as much cooperation in it at this early stage, to make it easier when we get to some of those tricky questions. 

RS: I’m curious, as a philosopher, what do you think about the possibility of these intelligent machines having consciousness? 

HP: That’s, of course, an extremely interesting question. 

RS: Thank you. 

HP: I mean, because it seems to raise the question as to whether, at some point, we’re going to have to be concerned about the interests of the machines themselves. I do think we may face a choice point. We can set out to deliberately ensure that we never produce machines of which that’s true. Machines would always be things we can ethically think of just as tools or instruments we keep in boxes. We wouldn’t have to worry about turning them off. That’s one development path. There’s another development path where we recognize the value of consciousness. We recognize that everything that’s valuable about human life depends on consciousness, and we set out to create more value in the world of that kind, by creating machines that are conscious, too. Some people think that might be part of the solution to safety worries, that it’s what makes human societies comparatively workable, as it were. Societies are made up of millions and millions of sort of selfish, competitive individuals. But most of the time, we get on quite well. Why is that? Well, it’s because we have some sort of innate dispositions, the kinds of things which tend to be reflected in our moral systems and sort of keep everybody in check in a good way. 
Some people think that we should be trying for something like that with the machines, too. And I think that’s related to the question of whether we give the machines consciousness. I mean, it’s a very difficult question, because philosophers have very little idea what consciousness is and what it is that distinguishes a conscious system. 

RS: Does that mean, in the future, for example, that technologists and scientists may be able to imbue these thinking machines with moral and ethical values, to be able to make choices? 

HP: Well, in some sense, they are making choices already, because if you’re building an autonomous system for doing some job, then you’re building something which, in some sense, is an agent. So, it’s considering a range of possibilities and making choices between them. And we’re already familiar with things which are not conscious, but which are agents in that sense, such as corporations. There are hundreds of thousands, probably millions, of corporations in the world. [11:00:24.00] Each of them has a sort of decision-making process, on the basis of which it acts. So, there are a lot of unconscious agents. Maybe AIs would just be like that. But, as you say, it’s possible that we will put consciousness in them, and we’ll give them a kind of ethical sense, on an analogy with our own. In that case, there will be a richer kind of agent. There will be something that would be called a moral agent. And because they have consciousness, they would also be what philosophers call a moral patient, somebody to whom, or something to which, we owe consideration. 

RS: Will it, then, as a philosopher, in your mind, make it incumbent to redefine what it means to be human? 

HP: Well, we might stick with the definition of human but recognize that being human isn’t the important moral category. I mean, philosophers use the term, person, in a way which leaves it open whether only humans can be persons or whether other kinds of things can be persons, too. So, if you use the term, person, for something which is sort of deserving of moral respect, or something like that, and has moral agency of its own, then the conclusion would be that we’re looking at a future in which some of the machines are persons, in that sense, too. 
To give you an analogy, I mean, there have been debates as to whether we should think of chimpanzees as persons. And I think in some legal jurisdictions, it’s been successfully argued that, at least for legal purposes, they should be. So, they are considered to be creatures which, as individuals, have the kind of interests that we take to be marked by personhood in humans. So, we’re already familiar with the idea that the human species and this more ethical category of personhood might not line up precisely. And the possibility that we’re considering is that machines, artificial intelligences, might extend that category in a new direction. 

RS: There’s also a term in use these days called transhumanism, which means a lot of different things to a lot of different people. What’s your definition? 

HP: Well, I’m not particularly involved in any way in the transhumanist movement, but I think there are interesting and relatively short-term questions which are not questions about, as it were, the capabilities of machines which are completely independent of us, but about the capabilities of human-machine hybrids of various kinds. There are a lot of people working on ways in which we can interface with machines at a lot higher bandwidth than the clumsy interfaces we have now. 

RS: Little do we know. This clock, and the bell tolls. So, do you want to pick that up? 

HP: There are a lot of people working now on ways of improving the ways in which we interface with computers, to give us, in effect, a much higher bandwidth. Because we rely on typing or talking or sight, and sight is a high-bandwidth input stream, but it’s the output stream which is so incredibly slow. Some people are working on direct brain-machine interfaces of various kinds, so that we might have little implantable chips in our brains which would give us a sort of direct high-bandwidth interface with a computer. Some people think that, in effect, what we will be doing if we can do that is enlarging our own mental capacities by adding on pieces selectively. That’s one thing that you might mean by transhumanism, and I think it’s going to be possible, at least in small ways, relatively soon. I think work on these interfaces is actually proceeding quite fast. 

RS: Does that worry you at all? 

HP: No, I don’t think so. We’ve been enhancing ourselves in various ways, including using digital technology, for a long time, and typically, I think, we like where we get to as a result. Though there are some places where you might put question marks. I read something recently arguing that we’ve lost a generation of children because they all walk around looking at their smartphones. So, we might want to tweak a bit. But in general, I think there’s lots of potential for individual humans living even richer and more interesting lives than they do now, by adding on bits of technology. 

RS: Go ahead. 

HP: I was going to say, what this means is that, at least in the case of AI, when you start to talk about extinction, there are a number of possibilities you need to consider. I mean, at the sort of bleakest end, our species might come to an end with no successor. But then there are a range of less bleak possibilities, one of which is that the purely biological species comes to an end, but there are no humans around who don’t have access to this enabling technology, which at least makes them, in some sense, something transhuman. And then there are possibilities where the biology of the species comes to an end, but machines with the kinds of qualities we take to be valuable in humans, machines that we might, in some metaphorical sense, think of as our children, succeed us. So, those are very different possibilities. You might be worried about all of them. You might be worried about none of them, or you might draw a line. 

RS: Do you see either of those as potential future possibilities? 

HP: I think they’re all… I think at this point, they’re all potential possibilities, and I think that’s one of the questions we should be thinking about, as I said earlier, trying to decide what destinations are possible for the AI revolution, and doing what we can to steer towards the ones that we want, rather than the ones that we don’t want. At the moment, I think all of those are possible destinations. Some of them, the most disastrous ones, where we don’t survive and the machines don’t survive either, those are clearly disastrous, and we should be doing everything we can to avoid those. 

RS: Unintended consequences are generally just that, unintended consequences that one didn’t think would happen. I harken back to many conversations with Dr. Vint Cerf about the internet, who said, geez, had I thought about that in the early days of the Internet, I wouldn’t have worked on this, that, or the other thing. So, in your mind, thinking through issues like AI, or whether it is synthetic biology or cyber or climate, how do you and your team wrap your heads around issues of existential threat that could end a planet, I mean, on a day-to-day basis?  These are huge issues that are hard to wrap your head around.

HP: Well, yes, and so what you have to do, among other things, is to try to break the thing down into pieces, and you break it down into pieces by breaking it down into different kinds of threats. In our case, that may be the ones coming from different kinds of technology. And then there’s another piece, which is the difficulty of prediction, and so you can talk to people who are in the prediction business and look at different ways in which people make predictions. Notoriously, experts are very bad at making predictions, but there are various tools out there for so-called horizon scanning. So, one of our groups has been working with people who do horizon scanning in a more familiar context, for environmental risk, and trying to extend those techniques into the arena where we are talking about existential risk. So, that’s something you can do. Of course it’s bound to be fairly unreliable, as you say. Prediction is very, very hard. But there’s a case for trying. At the end of one of his classic papers, you know, the paper in which Alan Turing introduces what’s now called the Turing test, in the very last paragraph he says something like, we can’t see very far ahead, but where we can see, we can see much that needs to be done. And so, there’s something of a sense there of his realization, in the early fifties before his death, that this was the beginning of something very, very big, and we needed to put time into thinking about where it was going. 

RS: Big question for you. I’m just curious about your answer to this. Where are we in the technological revolution, in your mind? 

HP: In the case of the AI revolution, I think we’re still in the early days of it. Because, at this point, I think we're nowhere near artificial general intelligence, AGI, let alone superintelligence. People’s opinions differ very widely about how far away we might be from it. If you do surveys of experts in the field, what you get is something like a median estimate sometime in the second half of the century, but with a very wide spread. So, there are some people, and we have some in our community, who think it’s unlikely not only in this century but in the next century. Maggie Boden, one of the pioneers of cognitive science in the UK, and a great supporter of what we have been doing in CSER and CFI, is in that camp. She is very much a skeptic about AGI. But then there are other people, including some of the technology people, who are more like 30 years. From our point of view, the sensible thing is to ask: if it’s the people who think it’s only 30 years away who are right, what should we be doing now? It’s unlikely that we’ll make any big mistakes by being more prepared than it turns out we need to be, if it’s the other people who are right. 

RS: So, answer that question for me. What should society be doing now? 

HP: One thing it should be doing is everything possible to ensure that, when we need to have serious cooperative conversations to prevent AGI from turning into a disastrous competitive arms race, we’ve got some mechanisms in place for doing that. We need some understanding among the technologists of the world, among the governments of the world, that this is something which, if it’s on that time scale, is as serious as negotiating about controlling nuclear arms, if not more so. We need a very widespread awareness that this is something which is going to be on our table at some point. And the sooner we get that awareness, the better it might be. 

RS: I’m assuming that you think an on/off button will not be enough. 

HP: No, that’s right, because this is… The way someone like Stuart Russell likes to put it is to say, well, look, if you make a smart AI and tell it to keep your floor clean, then it’s obviously going to do everything it can to avoid things which threaten its own existence, because those would prevent it from keeping your floor clean. So, if there’s a hole in the middle of your kitchen floor, it won’t fall into it. The smart machine will go around it. If it sees that your dog is threatening to pull the cord out of the wall, so that it can’t recharge itself, it will disable the dog, because the dog would be preventing it from doing the job of keeping your floor clean. Extrapolate that to some smart machine which has been put in charge of some huge piece of infrastructure, and unless you find some way of dealing with this problem in advance, it’s going to make sure that it can’t be turned off. Which, of course, it might do in various ways. I mean, providing an alternative power supply, keeping copies of itself somewhere else. I mean, after all, it’s connected to the internet. How can we know where it is, in any meaningful sense? 
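[The off-switch point Price attributes to Stuart Russell can also be sketched in a few lines. The Python toy below is not from the interview; the action names are hypothetical. It scores three-step plans only by how clean the floor ends up, and because being switched off ends the cleaning, the highest-scoring plans begin by blocking the off switch, an incentive that was never written into the objective.]

```python
from itertools import product

# Purely illustrative toy: the planner is rewarded only for cleanliness.
ACTIONS = ["clean", "block_off_switch", "idle"]

def cleanliness(plan):
    clean, can_be_switched_off, switched_off = 0, True, False
    for action in plan:
        if switched_off:
            break
        if action == "block_off_switch":
            can_be_switched_off = False
        elif action == "clean":
            clean += 1
        # Worst case: if the off switch is still reachable, someone uses it.
        if can_be_switched_off:
            switched_off = True
    return clean

# The best plan secures the off switch first, then cleans.
best = max(product(ACTIONS, repeat=3), key=cleanliness)
print(best)  # ('block_off_switch', 'clean', 'clean')
```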

RS: Synthetic biology. What worries you about that? 

HP: Well, I mean, the main worry that people have in this field is that the new techniques for recombinant DNA will make it very easy to produce new kinds of pathogens for particular purposes, by error or by terror if the [inaudible], and that those capabilities will end up doing tremendous harm. 

RS: So, is it non state actors you are worried about, or government? 

HP: It’s both. Of course, there are governments in the kind of rogue government category. There are also normal sorts of governments not taking adequate safety precautions, and we have known cases of, for example, foot and mouth disease escaping from government labs which were supposed to be high-security labs. So, that’s the error aspect. But part of the problem is that, as these technologies become more widely available and cheaper, we will eventually get to the point where it will be possible for a small group of individuals working entirely on their own to, you know, design a new kind of bug, email the blueprint off to some lab, and get their vial back with some new bacteria with some new genes. 

RS: Potentially catastrophic. 

HP: Oh, potentially. At least potentially catastrophic in the way in which we know pandemics can be, something at the level of, say, the Spanish flu, from the 1918 to 1920 period. There are millions of lives at risk around the world. I think to get to something genuinely existential, you have to do a bit more work. One possibility might be something which doesn’t target humans directly, but targets something on which our food supply depends, or something like that. 

RS: So, just so I’m clear, existential takes us beyond catastrophic. 

HP: Yes. I mean… 

RS: End of civilization… 

HP: Yeah, so, the end of the human species, or the end of civilization. I think, in one sense, it’s important not to try to draw the boundaries too sharply, because if you do try to put a sharp boundary between existential and catastrophic, you’ll find that the questions are very, very similar on either side of the line. You want people to work smoothly across that territory. It’s not going to be helpful to draw a line and say, oh, no, we want you to think about stuff on the left-hand side of the line, but not stuff on the right-hand side. On the other hand, it is helpful to point out that the existential case is on the spectrum. It’s at the far end of the curve, and philosophers have made the point that there’s a good case for saying that it’s not continuous with the rest of the curve. This is an argument that goes back to the great Oxford philosopher, who died recently, Derek Parfit. He says, intuitively, if you ask people to consider the difference between a disease which wipes out 98 percent of the population and one which wipes out 100 percent, they’ll think that’s a very small difference, but they’re wrong. Because in wiping out the last two percent, you also wipe out the future, possibly a future in which there are many billions of people. So, that’s why the existential ones matter, and that’s why it’s important not to forget them. And that’s why having a kind of red team, like CSER, which explicitly calls attention to them, and having it in a place like Cambridge, is such a useful thing for everybody else. 

RS: Cyber [inaudible]? 

HP: We haven’t been doing very much on that so far. It’s something that, if we had more resources, we’d want to do more of, because, of course, some of the security issues that come up in that context are going to be very similar to the ones that come up if you are interested in monitoring to prevent existential risks. 

RS: All right, so let me ask for a moment, climate and environment? 

HP: Yes, and so, among the categories of catastrophic and existential risk that have perhaps got less attention than they should are the sort of long-term, potentially catastrophic risks of climate change. Most of the modeling is done where the bulk of the probability is. But in all the models, there is some small probability that things pass some major tipping point and go disastrously wrong for the planet. People sometimes talk about the Venus scenario, where the atmosphere ends up in the kind of condition that Venus is in, which is incompatible with life of any kind, at least life as we know it. So, we’ve done a little bit of work on those sorts of issues. One of our projects is on climate tipping points, trying to do a better job of identifying how the climate might react in particular ways which tip it off some sort of cliff. 

RS: So, among these issues, and I need to ask this question, which one really keeps you up at night, or do they all? 

HP: Probably AI, because I think AI is, in some ways, different in character from the others. Also because I think it has a different sort of payoff curve. This is a very familiar point that is in no way original to me. In fact, I mentioned IJ Good before. And IJ Good said that an ultra-intelligent machine would be worth a Mega-Keynes, but the sign is uncertain. What he was pointing out is that the potential payoff curve is U-shaped. So, there is utopia at one end, and dystopia at the other end, and not much in the middle. Whereas, I think for synthetic biology, the curve has a much more normal shape. There are lots of potential benefits. New antibiotics, individualized treatments for cancer, all kinds of things of that kind. But then there’s a sort of tail of the risky stuff. So, in that sense, yes. Well, things don’t keep me up at night very much, but if they did, it would be AI. You know, I’m a grandfather. If you ask me which one worries me most for my grandchildren, then it’s AI. 

RS: There’s another quote of yours I want to ask… As computers become smarter than humans, humans can someday be destroyed by, “machines that are not malicious, but machines whose interests don’t include us.” If we’re programming them and we are in charge of them, how could that ever happen? 

HP: Well, really that point is the King Midas, the Sorcerer’s Apprentice point. The goals are the wishes, the goals we put into them. But, if we put the wrong goals in, and the machine that’s been given the job of making the city run efficiently discovers that a wonderful solution is just to get rid of the humans who are plugging up so many of the public spaces, it’s not being malicious, it’s just doing what we told it to do. So, it’s being an efficient city planner. 

RS: So, as you peek into this future, however ill-defined it may be, 30 years or the next century. I mean, Martin, for example, is pretty sure that this century could be a tipping point. Where do you come down on optimism versus pessimism, trying to understand and potentially contain existential risk? 

HP: Well, most of the time, I’m an optimist, but, borrowing a term from somebody else, I would like to describe myself as a mindful optimist, the kind of optimist who believes in making their own luck, rather than an insouciant optimist, who just lies on the beach and waits for good things to come to them. I think if we do work at it, we’ve got a good chance of getting to somewhere very good at the end of the century, but it does require a lot of work, by a lot of people with good will. 

RS: In translating the academic work that you’re doing here at a place like Cambridge into real policy, I mean, there’s a huge leap there. 

HP: Exactly. And that’s why, as we try to build communities to think about these things, we need to make sure they are communities which are not only well connected horizontally, as I like to think of it, between academic disciplines, but also very well connected vertically, to the policy world in one direction, for me, that’s sort of upwards, and then to the tech and corporate world in the other direction. And that’s been part of our model from the beginning, to try to have those connections. 

RS: I’m curious again, as with the other questions, as a philosopher: I think of this device that Steve Jobs engineered, and all the unintended, I would suspect, consequences of a device like this that has revolutionized our lives. When you think of technology and technologists, in particular, do you feel they have humanity’s best interest at heart? 

HP: I think, like the rest of us, they are variable. So, some do, more than others. Like the rest of us, they are fallible. Like the rest of us, they can be influenced, too. They want the goodwill and approval of their fellows. So, I think if we can create communities which offer that goodwill to good citizens, in the case of technologists, the technologists who make it part of their vision that they develop their technology responsibly and with appropriate consultation with wider communities, if we could sort of engineer that culture of, to use an overused term, responsible innovation, then we will be in a much better position than if we don’t try to do that and we adopt a sort of laissez-faire approach. That’s the mindful optimism that I mentioned before. 

RS: Well, I think that the guard rails that you are very familiar with around ethics and morals may not be at the forefront of the scientist’s or technologist’s view, as he or she is developing a new technology. So, I guess my question would be, you know, how do you inculcate a wider world view in developing technologies? I think of the atomic bomb as an example from history. But from your perspective as a philosopher, how do you inculcate those sorts of values into a very rigorous and generally hard science around technology? 

HP: You take it back to the fact that technologists are people, as well as scientists or technologists. You try to create a climate in which, if any particular company or development lab doesn’t behave in this way, they will be to some extent stigmatized. So, you try to create the sort of social pressures which nudge people into doing the right thing. 

RS: I’d like to use a hard example of Google’s mantra, do no evil. It’s interesting, you know, for a technology company to frame it that way. 

HP: Yeah, and it’s a good start. But Google, like everybody else, needs some nudging, too. We can see nudging going on in various parts of the technology space. Uber, for example, just got a very big nudge from the local authorities in London. That’s a good thing. And I imagine they’ll respond appropriately, and we’ll have one of these very familiar cases of a company acting more beneficently as a result of getting the right sort of encouragement. 

RS: Even, that’s interesting, just [inaudible] we’ve been doing a lot of work on… Because Templeton looks at science and religion, science being very good at the how and figuring it out, and faith and spirituality looking more at the why. And, having covered Silicon Valley for as long as I have, I’ve been sort of thinking through those issues of whether these mad scientists holed up in places like Xerox PARC, or MIT, are also moral actors, thinking about the implications of the work that they are doing. 

HP: Oh, the people that I know in those communities, I think, are highly moral characters who are deeply interested not just in the narrow issues, but in the broader questions about their implications. 

RS: I think that was terrific. I mean, we can go on for hours. But, I think that was a wonderful overview. Did we miss anything? 

HP: I don’t think so. It felt like fun from my point of view. 

RS: Just for transcription, introduce yourself.  Name, title, affiliation.  

CR: So, my name’s Catherine Rhodes.  My job title is Academic Project Manager at the Centre for the Study of Existential Risk in Cambridge University.  

RS: Define existential risk.  It’s a big word.  What’s it mean to you?  

CR: So, to me, it’s – partly it is to do with threats that could wipe out humanity. So, at that scale. But also, for the purposes of personal motivation, it’s to do with the suffering that would be caused to billions of humans in the process of one of those threats evolving. So, even if it didn’t get to the stage of wiping out humanity, that huge amount of suffering is a big part of the concern.  

RS: And what kind of threats?  

CR: So, in particular, I have a focus on biological threats. I don’t think any of those just by themselves would be an existential risk, but they could become one when we combine them with pressures on human systems and societal resilience, if they came in combination with other disasters, like natural disasters perhaps, or if we, as societies, simply can’t adapt quickly enough to respond to them.  

RS: When you talk about biological risks, are you talking about bioterrorism?  Define it for me.  

CR: So, I would break it down into natural risks – so, things like pandemic influenza – and then sort of human-induced you could call them – so not necessarily deliberate actions, but things like antimicrobial resistance where there’s a clear human factor behind that – and then also deliberate actions.  But those could be accidental.  So, you’ve created something, and you accidentally release it.  So, it was deliberate to create it, but not to release it.  It could be accidental in the sense that you did deliberately release it, you just didn’t understand what the, say, environmental consequences of releasing a biological agent would be.  [12:30:02.00] And then as you hinted, there’s the deliberate misuse, which could be terrorist, or it could be state.  

RS: And rank these risks for me.  

CR: In terms of getting to the existential level of risk, I think the top one currently is if states were to have biological warfare programs, because at the moment, terrorists can’t get to the level of sophistication that states can, although future terrorists may be able to get to that similar level of destructive power. And then in terms of natural disease threats, yes, pandemic influenza is probably the main one at the moment. That’s expected to happen within the next few years; there will be another pandemic outbreak. Again, maybe not at an existential risk level, but certainly affecting millions to hundreds of millions.  

RS: Really?  What are the governance, current governance issues around bioweapons and bio in general in terms of states at this point?  

CR: So, in terms of states, there’s a longstanding international treaty, the Biological Weapons Convention, that carries a strong norm against misuse of the biosciences. That in itself is something very valuable, but I would say it suffers from some weaknesses. So, one is it has no verification system. Other arms control treaties, the Chemical Weapons Convention, for example, have an extensive inspection and verification regime attached to them, and the Biological Weapons Convention doesn’t. So, that’s one key weakness. I think another one, particularly on the sort of technological risk side, is that there’s very limited ability to keep track of scientific and technological developments. So, there’s a five-yearly review process which is supposed to take scientific and technological developments into account, but again, that’s every five years. We know how rapidly things are moving. In particular, we saw CRISPR a couple of years ago, and now the new form of CRISPR called base editing. And so, just within a year or two, things can advance very rapidly.  

RS: Interesting that you use the term norm in association with this. And you use that because?  

CR: Because, yes, so, I think that’s partly from having studied the international legal processes. There is a treaty, and so there is a legal element of it, a law, that prohibits misuse. But behind that, I would also talk about a norm, because it goes beyond just the states that signed up to that particular treaty. It’s well established in what they call customary international law and state practice that this is a widespread, widely held norm that you shouldn’t misuse biology. When I say longstanding, it goes back to around the turn of the 20th Century, the 19th to 20th Century. So, it’s been there for that time. And I think another point to pick up on in calling it a norm is that this is something that really doesn’t only apply to states; it’s also about, say, the scientists who would be involved in a biological warfare program. They should also be aware that there’s this norm, and try to develop it within their kind of professional community.  

RS: In terms of the verification and enforcement, what are the issues?   

CR: So, the main issue is just a lack of verification, of verification systems. So, biology, and this is the same with chemistry and with the nuclear side as well, is very dual use. So, the same sort of research, the same sort of materials, the equipment you would need for, say, producing a vaccine might be very similar to producing a biological weapons agent. And so, there’s a general feeling that when you have that, you need some process of verifying whether people are actually complying with the rules or not. So, for the Chemical Weapons Convention, there are inspection regimes where states will declare their chemical weapons production facilities, and then there’s an inspection team that can go and check, check that they were producing the amount they said they were and that it’s going to the purposes that they said it would be for. There were attempts in the 1990s to develop a similar system for the Biological Weapons Convention so that these sorts of facilities – vaccine production facilities or pharmaceutical production facilities – could be checked. But that fell through in the sort of – well, it was around 2001.  

RS: So, going back for a moment, concerning risk and biological issues.  In terms of looking back over the history of biological – the production of biological weapons and potentially bioterror, what’s the history been like?   

CR: So, there’s been, again, quite a long history, but in the early uses of biology as a weapon, it wasn’t necessarily that disease mechanisms were understood. There were things like, in siege warfare, throwing diseased cattle over the castle walls, or trying to infect a water supply. It’s not necessarily that there was good scientific understanding of the basis for that, but there were attempts. So, then, I would say, as you start to get a greater understanding of the scientific mechanisms, at each stage that has been rapidly applied in terms of warfare. And so, I was going to say – yeah, there was no extensive use of biological weapons in either the First or Second World Wars, but there were biological weapons that were developed, produced, and stockpiled by the major powers. And Japan did make some use of them against the Chinese population during the Second World War. So, it’s not that they’ve never been used, but the use has been quite limited. But the stockpiles have been there. Since the Biological Weapons Convention came into force, which was in the 1970s, there have been some programs that we know about that have gone on. Again, these were seen to be rapidly applying scientific advances. So, the Soviet Union’s program in the 1980s was applying genetic engineering. Once that was found out, that program was shut down. There’s no definitely proven warfare program since then, but again, there are suspected programs. Iraq, for example.  

RS: Are these programs powerful enough to wipe out populations?  

CR: So, I would say, again, historically, that’s not necessarily been the aim. I think historically they were looked at as a battlefield weapon, for one thing, but also for things like attacking your enemy’s agricultural systems. So, again, in the sort of more traditional warfare situation, you want to weaken your enemy, and attacking their livestock or their crops would have been one way to do it. So, actually, every major biological warfare program has had anti-animal and anti-crop elements to it.  

RS: Go ahead.  

CR: And then I think – are we thinking of bringing it forward to the present? Again, I don’t think any of the known biological warfare programs have really been aimed at a sort of wipe-out-humanity scale. It’s more to do with the fact that you could induce widespread fear. You could demoralize a civilian population. And so, I think there are scales of different aims – do you actually use it because you want to kill people, or because you just want to have these other effects on an enemy?  

RS: I was going to suggest a weapon of terror.  

CR: Yes.  And again, that could be a state using it, but it could be non-state actors using it for terror as well.  

RS: So, if you were the czar of governance, how would you shore this up so there was not only verification, but enforcement?  What suggestions would you make?  

CR: So, I think a lot of this is down to tracking better what’s going on in science and technology advances, because they are so rapid, and trying to work out some ways in which you can handle this dual-use element. So, it’s very unlikely that we’re going to stop doing, say, research into pandemic influenza viruses, because we need that work for health benefits, for public health benefits. But then how do we keep that safe? And I think a lot of that is less to do with the traditional forms of what we think of as governance and enforcement, which would need things like punishments and laws, and more to do with, again, this kind of spreading of the norms, and having people like the scientific community behind the fact that, okay, there is a history of these things being diverted for misuse. There’s a history of scientists being involved in biological warfare programs. How do we, as a community, promote the fact that there are things we should be doing and there are things we shouldn’t be doing? There’s a line to be drawn. I think it becomes more difficult now to try and work out what you do. So, that’s sort of controls on materials and controls on research practice. But what do you do to control knowledge? So, things are published online. Most gene sequences are also published openly online, including those of pathogens that have been used in biological warfare programs in the past. So, that knowledge is out there, and it’s very hard to place controls on it, because as soon as something’s on the internet, it’s there. And again, it has beneficial uses, and to what extent do you try and restrict it, which could damage the beneficial uses? But yes, I don’t have the answers to how to control knowledge and what should be done about knowledge that can be misused.  

RS: I heard – this is old news to you – but clearly, there was an issue in the United States with some research on a bio issue, a bioweapon or something, I can’t remember, but a decision was made not to publish the findings to ensure that it would not get out on the internet. Usual practice, or unusual practice?  

CR: I think unusual practice, in that a limitation on publication is not something that’s being done very much. But, having studied the governance side of it for several years, it wasn’t a surprise that that would come about. So, really since about 2001, particularly with the anthrax attacks that took place in the U.S. at that time, there were a lot of studies done, particularly by the National Academies, on what you would do policy-wise. And these were already picking up on the fact that there are certain experiments – they called them experiments of concern – where you may not want the full information to be openly published. And so, there was contact with, for example, journal editors on how you make these decisions. So, there have been relatively few cases since then where a journal editor has suggested that maybe something shouldn’t be published in full. But I think possibly the case you’re thinking of was some experiments on avian influenza in 2011. And so, those eventually did get published, but there was quite a lot of controversy at the time, and there was a moratorium on certain research for a while. And again, it was a difficult one. But I heard that even in that context, they’d already submitted the journal papers online. So, if someone really wanted that information, even if you’ve not officially published it, it is available.  

RS: Within the larger – let’s go back, because I want to completely understand – within the larger context of issues like cyber warfare, cyber terror, climate, artificial intelligence, your arena of importance, where does it rank?  Where does it fall in?  

CR: So, I think … Yes. So, there’s a contrast that I would make within that space as well. So, if you’re talking about particular technologies – and I think that often means technological advances within biology, and increasing risks there – I wouldn’t see an isolated thing or event as being the problem, but rather its interconnectedness. So, just by itself, I think it might come lower than, say, general artificial intelligence, although that’s a further risk. But you mentioned cyber security. When you start to think about the fact that so much biological information is now digitized, going through things like synthesis and sequencing companies, then if you couple that with a cyber security risk as well, you’re looking at larger problems or increasing risks. I think things like climate change are an interesting one. And to me, they’re probably a larger threat than some of the technological ones. So, climate change and things like overpopulation. And what I’d say there is I think they get less attention because they’re kind of an ongoing risk. They’re already here; we’ve already seen some of the consequences. But even with climate change, what you’d call a tipping point into more extreme climate change actually seems quite a high probability compared to a lot of these other existential risks. So, yes, I would say those things are probably, certainly at the moment, a much higher risk.  

RS: I’m curious, this technology called the internet, which you referenced in terms of publication, clearly is also a Pandora’s box.  I mean, it’s out there.  There’s no way of putting the stuff back in.  So, how do we deal with the internet in the 21st Century with these particular risks, and specifically bio?  What do you do?  

CR: I mean, again, there’s a big interconnectedness problem there. So, you could try – and obviously certain governments do try – to shut down certain websites and prevent flows of information, but again, if you’re really a malicious actor, you can get around these things. I would say things get so tied up in, again, the things that are beneficial to society, or not even necessarily beneficial, but things society is now so used to, and the sense of freedom that comes with them, that I think if a democratic government were to start trying to shut things down on the internet, there would be a huge pushback, even if there’s a strong justification for security or for a public health reason. So, I think yes, those issues become very bound up and complicated. And again, knowledge is a very difficult thing to control, and it’s getting more difficult through these systems. But also, the internet seems in itself to be becoming weaponized, I would say, when you think about fake information. It can skew what people are aware of and what they aren’t aware of. So, I don’t know whether it becomes more of a risk once we have a lot more artificial intelligence and algorithms running that just narrowly give people one section of information, or whether that in some ways could be a form of control in itself.  

RS: So, as you peek into the future, short-term future, let’s say, the next ten years, challenges in your specific arena?  What would you say they are?  

CR: I think in this arena specifically, it is getting states to recognize and resource things that will give a more regular review of science and technology advances. Now, I say states should resource it, but it may be that if states don’t, then there are things that can be done by the scientific community. So, at the moment, groups like the National Academies, the European science academies, the Royal Society are doing some of this work themselves. So, next month there’s a meeting on the security implications of genome editing that’s taking place, organized by some of these science academies. So, yes, I think sometimes when states have a lot of different things to deal with, and they’ve got a limited amount of resources to dedicate to these things, there are ways these gaps can be filled. So, I’d say that’s a kind of positive element of it. But I think more generally, the problems we’re facing are the problems of the move from concern about materials and equipment to concern about knowledge, and how we choose to deal with that. And there’s the fact that, maybe not within the next ten years, but we’re on a trajectory towards individuals having greater destructive capabilities in terms of biology.  

RS: I hate for you to forecast, but do you predict a bioterror event?  

CR: Yes, I think – I would say yes, that’s something that would be fairly likely, but probably at quite a low level. Particularly because the motivations of terrorists at the moment do seem to be, again, to cause panic, to cause terror, not necessarily to wipe out huge numbers of people. They may lose their supporters if they’re doing that sort of thing, and it’s also going to take them more effort to produce more sophisticated biological weapons. But yes, attempting some small-scale things that would have a big impact. And I think that’s one of the things we mentioned, that it’s not that frequently that biological weapons have actually been used. So, the terror, the fear associated with thinking an outbreak is deliberate, and what could they do next with that, I think would be a bigger thing than the sort of number of deaths.  

RS: So, let me turn the question on its head.  Are you surprised that we really haven’t had a bioterror event?  

CR: Again, I’m not sure I’m surprised. I think conventional means of terrorism have just been so much easier. And I think that’s partly it. I think it may be that some groups feel there’s a taboo on using these sorts of weapons. I mean, that’s been a past sort of thought with states as well, that maybe using poison and disease is kind of … Not the gentlemanly act of war? That’s not quite the right way to phrase it, but you’re doing something secretive and sneaky rather than actually just getting out there and shooting someone. Whether that still has any force, I don’t know.  

RS: Okay, thank you.  What have we missed within your arena?  

CR: I think we have focused a bit more on the deliberate misuse side. So, there are risks now, and there will be more in the future, of things happening accidentally. People not fully understanding the consequences of releases.  

RS: Yes, and talk for a minute or two about that.  I mean, what do you see – that’s happening in the laboratory?  

CR: So, there are things in recent years – I think the key example is the development of gene drive technology, which allows traits to be spread deliberately through a population at a much higher rate than you would get naturally. And that’s been developed for things like modifying a mosquito population so that it’s no longer carrying diseases. So, again, it’s got these beneficial aspects to it, but it’s already been raised by the people who developed the technology that, you know, if this isn’t properly understood and it’s released, it’s irreversible in some senses, certainly at its current stage. So, unless we’ve developed something that would let us reverse it as well, then perhaps we shouldn’t be releasing it at this stage. I think that’s a very sensible approach. But of course, again, the technological knowledge of how to do this is out there. Is everyone going to take a reasonable approach, or decide, well, we’ve got this big disease threat, let’s tackle it, without thinking through the next steps?  

RS: Are people thinking it through, in your mind?  

CR: I would say not as much as they should be. Although, I’d point out there are certain people who have been involved in developing the gene drive technology, like Kevin Esvelt and Kenneth Oye, who are openly having conversations about this and calling for responsible governance and responsible scientific practice. But there hasn’t seemed to be a particular response in terms of changing domestic regulation and legislation on the uses of these organisms within laboratories and what would happen when they were released. And actually, a lot of the legislation at the moment isn’t designed for those sorts of organisms. It’s designed for genetically modified organisms that will be, not self-contained, but self-limiting. So, it’s not expecting an organism to be released that can spread, and certainly not in that kind of irreversible manner. And yeah, I don’t necessarily think regulators have caught up to the urgency of that.   

RS: Define responsible governance.  What is that?  

CR: So, I mean, part of it, as I say, is that regulations just don’t seem to have caught up to the urgency of this. Have regulators noticed what’s happening, and are they amending regulations so that they’re up to date, so that scientists do know what standards they’re now supposed to use for these organisms, for example? So, that’s part of it. But it also comes down to this thing about there being benefits as well as risks. So, there are some concerns, not necessarily about the wording of regulation at the moment, but about the way regulation is applied, particularly in Europe for genetically modified organisms. Actually, that’s doing a lot of damage in the sense of restricting the beneficial sorts of products and things that could be put to market or could be used. So, you know, novel foods or, again, novel organisms. And there’s a question of whether that’s a responsible thing if you’re really restricting benefits for perhaps no scientific justification, and maybe for some political reason that’s not being articulated. So, that’s not to say it’s not legitimate to restrict things for political reasons, but if you’re not stating that that’s why, then I would say that’s maybe not responsible in terms of governance.  

RS: The release of a dirty bomb within a population center.  Have you analyzed the risk of that?  

CR: So, that’s radiological?  

RS: Yes, radiological bomb, because it’s achievable within budgets of today’s terrorists.   

CR: Yes.  So, I haven’t, and I don’t think our center’s looked at that yet.  We do have a meeting coming up in December that’s going to.

RS: Can you just introduce yourself?  Name, title, and affiliation?  

MR: Yes.  I'm Martin Rees.  I'm a Professor of Astronomy and Cosmology at Cambridge University, but I'm also involved in the Centre for the Study of Existential Risk at Cambridge University.  

RS: So, let's begin with this lofty title of the Centre for the Study of Existential Risk.  What is existential risk?  

MR: Well, I don't want to go into semantics too much, but there is a new category of risks which are certainly going to be global, and would involve all parts of the world and some sort of catastrophic setback to civilization.  And these kinds of risks haven't really existed up until now, for two reasons.  First, the world's never been as connected as it is now; in the past, even if something catastrophic happened in some parts of the world, other parts might be unaware of it.  We are so interconnected that anything really serious in one part of the world will cascade globally.  And the second reason is that this is the first century when human beings, one species, have the future of the planet in their hands.  And this is partly because there are more of us, we're more demanding of energy and resources, and we're having an effect on the planet's biodiversity and its climate, and this leads to the risk of tipping points.  That's one kind of new threat.  The other kind of new threat is the empowerment of individuals or small groups by new technologies.  We're familiar obviously with nuclear, which is 20th Century technology, but now new technologies – bio, cyber, and AI – are empowering us, and they have huge potential for good, but also, by error or by terror, they could cause serious catastrophes.  So, these are the kind of things which are on the agenda of our centre, and we are based in what I think we can claim is the leading scientific university in Europe, and we believe that we can perform a valuable function, because we have the convening power to gather together the best experts.  And when we're trying to look ahead, none of us can make reliable predictions, but I think if you can get some of the world's best scientists together, using the convening power of Cambridge, we can make some assessment that should be useful to policymakers about which scenarios can be dismissed as science fiction and which are sufficiently real that we ought to worry about them and think about what measures we can take to reduce their likelihood.  So, that's the central agenda that we have.  

RS: Looking back in history, 50, 60, 70 years ago, one would have thought that, with the development of the atomic bomb, nuclear holocaust was the complete story of existential risk.  But in your mind, in the 21st Century, has that changed?  

MR: Well, the risk of nuclear catastrophe which we had in the Cold War era has somewhat abated, because the number of nuclear weapons in the world was then over 100,000, and it's down by a factor of four or so since then.  So, even if there's more chance than ever before of some nuclear weapons going off, there's less chance of a global catastrophe of the kind that could have happened during the '60s or the 1970s.  But of course, that risk is maybe just in abeyance, because we can well imagine that later this century, a new standoff between new superpowers could lead to a new arms race and be less well or less luckily handled than the Cold War was.  And let's remember what the risk was in the Cold War.  The Cuba Crisis was one of the major concerns, and Kennedy said some time afterwards that the risk was between one in three and even.  And McNamara, in his wiser retirement years, said that the U.S. was lucky as well as wise to survive it.  So, during the Cold War era, we were under severe threat of something that would have been a catastrophe for Europe and North America, and maybe for the world if it had triggered a nuclear winter.  And certainly a setback to civilization.  But of course, we now have these new sciences – biology and also cyber and AI – which do empower people, and they open up a new set of risks.  

RS: Why is it that this century in your mind is so crucial in terms of existential risk, and how does it differ from the last century?  

MR: First of all, the world population is much higher than it was 50 years ago.  It was about three billion.  It's now 7.4 billion, and almost certain to rise to nine billion by mid-century.  So, there are more people, and each of them is more demanding of energy and resources.  And indeed we hope that is going to be true, because we hope that the quality of life of those in the developing world will rise.  So, the pressure on energy and resources, the threat to biodiversity, etcetera, is far greater than ever before.  So, we are having a much heavier footprint on the planet, with the risk of tipping points.  That's one reason.  The second reason is that technology is much more advanced, so even a few people could create a bioweapon that could produce global catastrophe.  And this is why things are much more dangerous now than when we had just nuclear threats to worry about, because nuclear entities, whether power stations or bombs, require massive special-purpose facilities, and therefore it is not crazy to believe that one can verify disarmament measures and know what's going on.  But in the case of, say, bio, the equipment needed is small-scale and dual use, available in many university labs and many industrial labs, and it's very hard to imagine any kind of verification regime that could ensure that none of it is used for nefarious purposes or through negligence.  I think that's bound to happen.  And with developments like CRISPR/Cas9, the new gene-editing technique, and the so-called gain-of-function experiments, which show how you can modify a virus to make it more virulent and more transmissible – a kind of research, incidentally, which the U.S. Federal Government stopped funding in 2014 because of the dangers – there are real dangers.  And whatever regulations we try to enforce in order to constrain the downsides of these new technologies, the effectiveness with which they can be enforced globally is, in my view, minimal.  It is as hopeless as enforcing the drug laws globally or the tax laws globally.  We've had precious little success at doing either of those things.  So, that's what scares me most.  These techniques of bio will be misused.  And of course, some people will say, well, biological weapons haven't been used very much by governments or even by terrorist groups with well-defined aims.  That's because the consequences of bioweapons are unpredictable.  So, my worst nightmare is some fanatic who thinks that human beings are polluting the planet, so, let's get rid of some of them.  Doesn't matter who.  Someone like that, empowered by [unintelligible] biotech, could cause some sort of artificial pandemic, and that's aggravated compared to the risk of natural pandemics that we have already.   

RS: So, if I hear you correctly, you're doubtful that there can be any global legislation or guardrails to protect humanity from these sorts of threats?  

MR: Well, obviously we can, in various ways, minimize the risk and ensure that we have warning systems if a pandemic is starting – we need that anyway – and also we want to ensure that as few people as possible have the motivation to cause a bio-disaster by error or by design.  But my concern is that the risk will always be significant, and these events are so potentially catastrophic that once is too often.  This is a new class of threats which we can't afford to have happen even once.  

RS: Define tipping point, and what would that be, and are we near one on these issues?  

MR: Well, I mean, there's always the discussion about ecological tipping points, where the balance of species in a certain area is disturbed and we get a major extinction, and indeed, this is another concern, the loss of biodiversity.  Biodiversity is something which is important for our welfare: we are all going to suffer if fish stocks dwindle to extinction, or if we lose the plants in the rainforest whose gene pool is useful to us.  But more than that, many of us have environmental instincts and believe that the diversity of life on Earth is something which has value in its own right, quite apart from what it means to us as human beings.  And to quote the great ecologist E. O. Wilson, if humans cause mass extinctions, it's the sin that future generations will least forgive us for.  So, that's a serious threat.  And other tipping points are, of course, discussed in the context of climate change.  Small climate change can be adjusted to, obviously, but to give one simple example, it's thought that if the mean global temperature rises by much more than two degrees centigrade, this might trigger the melting of Greenland's ice cap.  And once it starts melting, you can't readily see how it could stop, and that would lead inexorably to a rise of sea level of five to seven meters.  So, that's an example where, if things get beyond a certain point, you trigger some sort of runaway catastrophe which is too late to stop.  And that's incidentally one reason why we need to think about measures to reduce the rate of potential climate change.  
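A toy numerical sketch of the runaway idea described above – not a climate model: a self-reinforcing feedback settles down while each round of extra warming is smaller than the last, but once the feedback gain passes a threshold it no longer converges.  The gains and the one-degree initial push are arbitrary illustrative numbers.

```python
# Toy illustration of a tipping point: warming feeds back into further warming.

def warming(initial_push_deg=1.0, feedback_gain=0.5, rounds=30):
    """Each round, the most recent increment of warming feeds back,
    scaled by `feedback_gain`, as the next increment."""
    total, extra = 0.0, initial_push_deg
    for _ in range(rounds):
        total += extra
        extra *= feedback_gain   # e.g. melting ice exposing darker ocean
    return total

print(warming(feedback_gain=0.5))   # settles near 2 degrees
print(warming(feedback_gain=0.9))   # settles, but near 10 degrees
print(warming(feedback_gain=1.1))   # keeps growing: a runaway
```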

RS: Are the stakes higher today than they have ever been?  

MR: The stakes are very much higher, because we have a far bigger footprint on the planet collectively as human beings, and individuals and small groups have the power to cause global effects even if there are only a small number of them, because we are in a connected world where computer networks can cascade a cyber-attack globally, where pandemics can spread at the speed of a jet aircraft, where panic and rumor can spread literally at the speed of light through social media.  So, we're in an interconnected world completely different from what it was until 25 years ago.  And that's therefore why no country could be isolated.  If there is such a catastrophe, it will cascade globally.  And I think society is more vulnerable to catastrophes like, say, pandemics, even natural pandemics, than it was in the past, because its expectations are high.  Think back to the Black Death in the 13th and 14th Centuries, which killed off a third or so of the population in some towns and villages.  The rest just fatalistically went on as before.  But if there was some pandemic, whether natural or artificial, then I think once the number of cases rose to the level that overwhelmed hospitals, there would be social breakdown, because people expect that they get hospital treatment.  And so, I think there's a risk of social breakdown even when the number of cases rose to maybe 0.1% or something like that.  So, that's the sense in which our society is fragile.  And we're fragile also, of course, because we are interdependent.  I mean, we depend on international supply chains for many basic commodities, etcetera, and if they're disrupted, then within a few days there are big problems.  So, we are much more fragile, and we are more empowered.  So, there is definitely a set of issues which confront us now which did not confront us in the previous century, not even 20 years ago, except for the nuclear one.  And that is why we need to deploy all the expertise we can draw on in places like Cambridge University in order to try and address these risks, to try and analyze which of the aspects we should worry about most, which aspects we can try to reduce the impact of, and judge their effects, whether we [unintelligible] some areas of research compared to others, and whether this should affect the order in which we try and do things.  All those are issues which require academic expertise and interdisciplinary activity of the kind which we feel that we want to contribute to.  And incidentally, it's amazing how little effort is devoted to these major issues.  Huge numbers of people worry about small risks like plane crashes, carcinogens in food, low radiation doses.  00:15:39 On the other hand, there are really only a few dozen people at most who are thinking about these extreme risks which may have low probability but have enormous consequences, so that one occurrence is too many, and such that even if we can reduce the probability of such an eventuality by one part in a thousand, we've more than earned our keep.   

RS: I’m curious, because technology can give you both positive and negative.  I think of the internet as a tool over the last 20 years or so, since it is still a nascent technology, in its very early stages, that in many ways has unleashed these possibilities, whether it has given us the information, the context, the community to think about AI, to think about bio.  I mean, the internet has provided in some sense a bedrock.  I’m curious your take on the sense of – I mean, before you answer, I’ve had many, many conversations with Vint Cerf, one of the fathers of the internet, who said to me, had I known that this technology was going to do this, I would have thought about that, like cyber.  It was an academic tool, right?  So, I’m curious your take on the internet as a technology in unleashing some of these issues.  

MR: Well, it's true, the internet has been utterly transformative in so many ways, hugely beneficial in many ways, as we know.  We all use it.  I depend on it very much every day.  We all do.  But obviously, it does have its downsides.  We're all aware of that.  Interconnectedness opens up new threats, and also new opportunities.  And of course, it is true that very little of this was foreseen by those who invented this and pioneered it.  And perhaps it could not have been.  And certainly the speed at which it took off could not have been foreseen, because, to take a related example, the expert management consultants didn't foresee, any of them, the speed at which mobile phones would penetrate globally, so that in Africa and in India there are far more mobile phones than toilets now.  They didn't foresee what that would do, for good or ill, in these countries.  They didn't foresee social media, Facebook with its two billion customers and all that.  They probably couldn't have done.  But I think a lesson we have learned is that it would be better if people had thought a bit harder.  If there had been groups of people who were there to brainstorm a bit more, then they would have realized that there were certain things that we ought to try and guard against, and they might have planned things somewhat differently if they'd had a bit more foresight.  And I'm not saying we can't foresee all these things, but scientists have a notoriously bad record of foreseeing things.  But I nonetheless think that if you gather together some of the best experts and set them the task of trying to explore which scenarios are science fiction and which aren't, and how we can optimize the possible ones and reduce their downsides, then that will make the world less unsafe than it would otherwise be.  So, that's why I'm in favor of having these sorts of studies of extreme risks.  

RS: There is, though, an inevitability about the march of technology.  Do you agree?  

MR: I don't think the advance of technology is inevitable.  It may go in fits and starts.  And certainly we have a lot of control over which branches are developed faster than others.  And to give other examples, I think when we consider the technologies involved in medicine and health, we're going to face a new set of issues, such as the extent to which we allow genetic modifications that lead to human enhancement and not just removing obvious disadvantages that some people have.  And also whether we want life extension, all these things.  So, the extent to which we allow free rein to these advances is something which ought to involve societal judgements, involving the great religions, and involving governments.  And that can clearly affect the rate at which these technologies are applied.  It's very hard, as I've said in different contexts, to eliminate the risk that someone with ill intent will misuse these technologies to create a pandemic or something.  But we can control the ways in which they are applied to the whole population.  And I think there will be a great need to discuss the ethics of human enhancement and issues of that kind.  That's another example where we can control the rate of technology.  

RS: Do you worry about putting the brakes on technology, about stalling potential development, because of the enormous potential upside?  I mean, how do you come down on that argument?  

MR: Well, obviously ideally, we want to harness the benefits of advancing technology and minimize the risks.  And I think if we look back in the past, then that balance has been positive.  Our lives today, despite all our problems, are, I think, for the average person in the world, better than they were in the past.  If you imagine being a random person in the world, I think you'd choose the present over any previous era.  So, let's not be too pessimistic.  But nonetheless, I think we are going to confront new threats.  And the other point is that even present technology is being exceedingly suboptimally applied, because we know that even with present technology, we could alleviate the predicament of the bottom billion people in the world, and we're not doing that.  And that indeed makes me pessimistic.  I define myself as a technological optimist but a political pessimist, because we are so ineffective in avoiding famines, which are caused not by overall shortages of food but by wars or maldistribution, and in providing a decent life for the bottom billion people in the world.  That makes me very pessimistic about governments being effective in dealing with the longer-term ecological and climatic concerns.  Because if governments can't cope with an immediate moral imperative, like feeding the starving in different parts of the world, they aren't going to be strongly motivated to make sacrifices here and now for the benefit of people in remote parts of the world 20 years from now.  

RS: We saw culturally an early glimpse of artificial intelligence in the movie 2001 with HAL.  Is that our future?  

MR: Well, of course, HAL was, in a sense, a prescient concept of Arthur C. Clarke's, because it was an intelligence which was rather human-like.  And of course, up until now, we have had computers which are built for a special-purpose task, but we don't think of them as being like human beings.  There's been a big development in the last few years in generalized machine intelligence, where the machine learns without detailed programming.  That's the difference between the way IBM's Deep Blue beat Kasparov 20 years ago and the way the recent DeepMind computer beat the world champion in the game of Go, and a Carnegie Mellon computer beat top professional poker players.  In those latter two cases, the machines taught themselves.  They were given the basic rules, and they had the advantage of tremendous speed, and they watched hundreds of thousands of games.  They played against themselves, etcetera.  And they improved to the extent that the people who wrote the program didn't understand how they made the decisions that led them to make extremely insightful moves in the game.  So, now the computers can teach themselves.  And, you know, in a mundane and completely beneficial context, this is how the best machine translation is done now.  You don't give the computer detailed semantics.  You just give it millions of pages of multilingual documents – European Union documents, for instance.  They don't get bored; they can look at billions of pages of those.  And that's how they learn the semantics and the idioms in different languages.  So, computers now have the power to learn for themselves – facial recognition and things like that.  They're still a long way from generalized human intelligence, because you still can't make a robot that can move the pieces on a real chessboard as adeptly as a child can, even though it can play very well.  And it's still true, as Kasparov himself has said, that a combination of human plus machine is better than either separately.  So, the role of humans has not been eliminated. 

RS: Pick up the last thought with humans teaching.  Go ahead.  

MR: So, computers clearly have the capacity now to learn for themselves in a way they couldn't 20 years ago, and generalized machine learning has been used to beat the world's best players at Go and poker, in a different way from the way IBM's Deep Blue was able to beat Kasparov, because in that case, expert chess players did the programming.  But now, the machine teaches itself, and that's a big difference.  It's a step towards generalized machine learning.  But still, there's a long way to go before robots can interact with the physical world as adeptly as a human being can, and of course, they learn to recognize faces by looking at millions of images, which is certainly not the way a baby learns.  So, they're not actually reproducing the way humans learn about their environments.  But, of course, there is the possibility – and this is what some people worry about – that a very intelligent machine which has access to the internet of things will be able to interact and do things which are unpredictable, just as we can't predict the moves made by an expert game-playing machine.  And of course, this would be risky, and this is the sort of thing which gets us towards the HAL-type computer.  But I think this is going to be a long-term threat.  There are some people who, of course, think that we need to worry already about AI, and that development in that area needs to be regulated in the same way everyone agrees biotech needs to be regulated.  Whereas others feel that that's still too futuristic, and that for the next decade or two, it's not artificial intelligence but real stupidity that's going to be the main threat to us.  I think at the extreme of predicting rapid AI, there's a chap called Ray Kurzweil, who now works for Google.  At the other extreme is Rodney Brooks, a robot maker in Boston who made the robot vacuum cleaner; he thinks we'll never have human-level machine intelligence.  There's a big spectrum in how quickly, if at all, people think that human-level machine intelligence will develop.  If you asked me for my scenario, I think that during this century, we'll have huge developments in biotech and probably in cyborg technology.  But I think we will try to regulate them here on Earth.  And this links to my interest in space exploration, because robots and AI have a huge role to play in space.  We'll have miniaturized probes exploring the solar system.  We will have robotic fabricators building big structures in zero-G in space.  All these things will happen.  And these will reduce the need to send humans into space.  But I think humans will still probably go into space, probably even to Mars by the end of the century.  But they will go as adventurers, as an extreme sport, accepting high risks.  Probably funded privately by companies like SpaceX or Blue Origin.  And I think if that happens, then we should cheer them on, because although they will be sort of crazy adventurers, normal people will never go.  It's a dangerous illusion to think that we can have mass emigration to Mars to escape the world's problems.  But a few crazy pioneers will go, and they'll be cosmically important, because they will be away from any regulations, and they will therefore have every incentive to use genetic and cyborg and robotic techniques to adapt to a very hostile environment.  So, post-human evolution, a sort of intelligent design taking place on a technological timescale, not the Darwinian timescale, will start away from the Earth, with people like that.  And that's the far future.   
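A minimal sketch of the self-teaching idea described above (not DeepMind's actual method): a program is given only the rules of a simple game and improves purely by playing against itself.  The game (Nim), the tabular value table, and all the parameters are illustrative assumptions.

```python
# Self-play learning on Nim: take 1-3 sticks from a pile; whoever takes
# the last stick wins. No strategy is programmed in, only the rules.

import random
from collections import defaultdict

PILE, ACTIONS = 12, (1, 2, 3)
Q = defaultdict(float)            # Q[(pile_size, action)] -> estimated value
ALPHA, EPSILON = 0.1, 0.2         # learning rate, exploration rate

def choose(pile, greedy=False):
    """Pick a move: usually the best-known one, sometimes a random one."""
    legal = [a for a in ACTIONS if a <= pile]
    if not greedy and random.random() < EPSILON:
        return random.choice(legal)
    return max(legal, key=lambda a: Q[(pile, a)])

def self_play_episode():
    """Both 'players' share the same value table; the winner's moves are
    rewarded +1, the loser's -1, working backwards from the final move."""
    history, pile = [], PILE
    while pile > 0:
        a = choose(pile)
        history.append((pile, a))
        pile -= a
    reward = 1.0                  # the player who moved last won
    for (p, a) in reversed(history):
        Q[(p, a)] += ALPHA * (reward - Q[(p, a)])
        reward = -reward          # moves alternate between winner and loser

for _ in range(50_000):
    self_play_episode()

for pile in range(1, PILE + 1):
    print(pile, "->", choose(pile, greedy=True))
```

After enough self-play games, the greedy policy tends to rediscover the known winning strategy of leaving the opponent a multiple of four, even though that strategy was never programmed in.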

RS: Give me the upside of AI.  I mean, you’ve talked about space as an example.  Are there other upsides in your mind?  

MR: I think AI has huge upsides, probably in the short and medium term for machine translation.  It's wonderful if you can talk on your phone in one language and it comes out in literate speech in a different language.  Things like that can be feasible.  And it's probably clear that large networks and systems, like electric power grids and transport networks, can be coped with better by AI than by human controllers.  So, things like that are going to be net benefits from AI in the next decade or two, certainly.  And that's going to be good news.  But there are certain concerns that arise if the AI has access to lots of data and is making decisions that affect us as individual human beings, because it's normally thought that if something is done to us – if I'm put in prison or something like that – then we should be given some reason that we can understand.  We should not have something done to us to our disadvantage as a result of some algorithm which can't explain itself.  And that's the risk which will come if we have these algorithms.  And what we need to worry about is that even if the algorithms seem to be battle-tested and have a fairly good record and make better judgements and fewer mistakes than we do – in not just blue-collar jobs but in jobs like legal work, medical diagnostics, etcetera – we want to feel that we've been given a reason by a real human being before something happens to us that we would complain about.  And the trouble is that these machines which have taught themselves can't necessarily explain themselves, and their owners can't explain it either, because they've learned in a way that is autonomous, really.  And these are the kinds of problems we're going to face.  

RS: Is HAL a possibility in the future, in your mind?  

MR: The idea of a computer which sort of takes over, I think – I think no one would say it’s impossible.  How likely it is we don’t know.  Some people think this may happen.  Some people think it will never happen.  But I would have thought the risk is such that we ought to think about it, and also in thinking about it, ensure that when we develop AI and do R&D to make it more powerful, we are mindful of these possible risks and try and avoid them.  And people have already discussed – and this is something we’re discussing in Cambridge – how one can minimize the risk by doing things in a certain order to avoid some sort of runaway and to keep control.  So, I think there are risks.  We can’t eliminate them even though I am on the side of thinking that it’s rather remote in the future rather than being a fan of Ray Kurzweil.  

RS: In the early age of technology, one hoped for just a reboot or an on/off button.  Is that enough in the future for you?  

MR: Well, it may not – it may not be, because we may not know what's happening until it's too late.  So, that's the reason why there is a genuine reason to worry about some of these technologies, at least to the extent of focusing on the more distant technological developments to a greater extent than is done now.  And this is done better in academia than in industry.  

RS: How do you rank these risks?  I mean, you’ve mentioned bio, cyber, AI, climate environment.  Is there an order in which you are most concerned?  

MR: Well, at different time scales.  In the short term, I am very worried about the misuse of bio, and also I’m worried about the misuse of cyber techniques which already, of course, are disruptive, and could be extremely serious on networks, national power grids, and things like that.  So, those are short-term threats.  I do worry in the long term about despoliation of our natural environments, loss of biodiversity, and climate change going over the tipping point and therefore getting out of control.  I worry about all of those things.  Incidentally, geoengineering is something we’re discussing as a way of coping with those.  But I worry about those in the long term.  But I think although I’m a technological optimist, I’m a political pessimist, because I think the hard thing is going to be to ensure that we derive the benefits of these technologies and minimize the risk of these downsides.  

RS: Will these AIs – I guess what are called the thinking machines of the future – be able to make moral and ethical decisions, in your mind?  

MR: This is, again, something we're thinking about, which other people, including Murray Shanahan at Imperial College and also Stuart Russell at Berkeley, have thought quite deeply about.  And people argue that you've got to teach these machines common sense, for one thing, and realize that they don't naturally have any common sense.  To take a simple example that's often given: if you have one of these robots that makes sure your fridge is full and cooks your meals and things, you've got to make sure it realizes that if it's run out of meat, it's not appropriate to put your cat in the oven.  There's no reason why it would have any realization that we treat our cat differently from a piece of meat.  So, that's an example of where you've got to make sure that things which are obvious to human beings are somehow programmed into these machines.  Now, some people think that by observing human behavior, machines can eventually absorb ethics and everyday common sense, as it were.  Some people believe that.  Others think this is difficult.  But that's a serious question – the extent to which a machine could actually learn how to behave so as to avoid doing things which are obviously contrary to what any human being would think was appropriate.  But then of course, there's a much deeper question, which is a philosophical one, of the extent to which these machines will develop consciousness.  I mean, some people think consciousness is something which is peculiar to the wet hardware in human skulls.  Others think it's something which emerges when you get up to a certain level of complexity.  And this is related to the question of whether we will develop human-level intelligence by electronic means, by just more and more powerful computers, or whether we can learn by understanding more about how the brain actually works.  Some people think we should reverse-engineer the brain.  Others think that's as pointless as believing we can best learn to fly by seeing how birds flap their wings.  So, there's a big debate there.  Another debate is whether a very elaborate intelligence which is incorporated in electronic form will become conscious.  00:39:04 Some people say that consciousness is an emergent phenomenon above a certain level of complexity.  We don't know.  And of course, it's hard to check, because lots of philosophers have pointed out that we don't really know that other people aren't zombies, really.  We just assume they're not, but there's no direct way we can check.  And John Searle and a lot of philosophers have addressed this sort of question.  We don't know.  But it is important, because if the robots became sufficiently human-like [unintelligible] related to them, then if we think they're conscious, we should take their feelings into account.  For instance, we believe it's an obligation to ensure that other humans can fulfill their natural potential.  We feel the same towards certain kinds of animals.  So, should we therefore be concerned if our computers are underemployed or bored?  If we think they're conscious, we should.  If not, then that's irrelevant.  And that's a philosophical question which now is science fiction, and we are familiar with science fiction movies where people assume that these machines are conscious, but we don't know whether that will happen.  And that's an important question which we'll have to address, which will affect the way we treat them.  
00:40:24 And it will also affect whether we are happy for them to take over the roles which we traditionally thought were appropriate only for humans, like carers for people, young and old.  You know, the Japanese talk about having carers for old people which are robots, and I can see that that may be sensible for everyday things, things like help with dressing and feeding, etcetera.  But I think it's demeaning to think that a human being can be looked after by a robot and not another real human being.  And I personally think that when the robots take over more and more types of employment, earning huge amounts of money, that money should be redistributed by high taxation to the government so that very, very large numbers of dignified and properly-paid posts can be established for carers for young and old, gardeners in public parks, custodians, and jobs like that, which at the moment are far too few in number and far too little appreciated.  So, I think jobs like that should be publicly supported using the tax revenues raised from the robots, as it were.  And that's far better than just giving people a living wage for doing no work.  They've got to feel they've got dignified employment.  And there's plenty of that, because I think rich people, when they're old, would want some people to look after them.  We should aim to provide that for everyone, not just the rich.  

RS: So, your assumption is that AI will have a big impact on work of the future?  

MR: I think most people think it will have a big impact.  Just how big is debatable.  But I think the important point is that its impact will not be solely on blue-collar jobs.  Indeed, jobs like gardening and plumbing will be among the hardest to automate, but it will replace jobs like routine legal work, computer coding, medical diagnostics, radiology and things, and surgery.  Many jobs like that will be replaced.  So, it's going to be a disruptive transition in the labor market.  And of course, some people say, well, all the money earned by the robots should be used to give everyone a living wage so they can all be artists.  That's fine, but I think there will be lots and lots of work that could be done by humans if it's properly paid and respected.  Like looking after young and old and being ministers of religion and all these things.  

RS: In attributing, as you said, some of these human traits to a robot through AI, it raises the question of what it means to be human in the 21st Century?  

MR: Well, I think we know what it means to be human, and we are dependent on machines in a different way from before, and the way we live our lives and apportion our time has been hugely changed by modern technology, most particularly by social media.  And again, social media and mobile phones and the way that they are sort of an adjunct to us was not predicted, and certainly the speed at which this transformation occurred was not predicted by McKinsey and these other sort of highly-paid consultants.  They were completely wrong.  And this is an example of something which has transformed society, on the whole for the better, I would say, but in a very serious way, very, very quickly.  And I think if we consider technological advances in general, then it's easier to forecast the direction of travel than the rate of travel.  In the case of mobile phones, it was perhaps not too difficult to predict what was going to happen, but the speed of the technology and the speed of the global adoption, even in poor parts of the world, was not predicted by anyone.  In contrast, think of supersonic flight, which was pioneered in the 1960s for civilian use but never caught on, because there was no great demand for it.  That's an example of a technology which could have developed very fast but hasn't.  So, some technologies develop very slowly or not at all, whereas others develop at a rate so fast that we worry about whether we can cope with them.  

RS: You’re pointing also to a term that is being used more often called transhumanism.  Is that part of our future?  

MR: Definitely.  I think it may start with a few crazy pioneers on Mars rather than on Earth, but I think one important point which perhaps astronomers realize more clearly than most educated people is that the future stretches ahead far further than we can go back into the past.  Just to expand on that: most educated people are familiar with the idea that we are the outcome of nearly four billion years of Darwinian evolution on Earth, but most of them somehow think that we humans are the culmination of it all.  And no astronomer can believe that, because astronomers know that the sun is less than halfway through its life.  The sun has six billion more years to go before it runs out of fuel and flares up and engulfs the inner planets.  And even beyond that, the expanding universe may go on forever.  I like to quote Woody Allen, who says eternity is very long, especially towards the end.  So, the time for future evolution is longer than the time for past evolution.  And more than that, future evolution I think will not be Darwinian.  It will be a kind of intelligent design, which will be machines and maybe humans modifying themselves, and therefore changing on the technological timescale, far faster than the million-year timescale for Darwinian selection.  So, there are billions of years ahead where things will change far faster, and where therefore we can't conceive of what things will be like.  And it's ironic, really, that although our horizons have hugely expanded in both space and time from those of our medieval forebears, they were fairly confident that a century onwards, things would be the same as they were then.  We can't predict what things will be like in one or two centuries' time, even though they thought the future was a few thousand years and we think it's many billions of years.  So, I think we should think of human beings as anything but the culmination, and the key question is, will future evolution be mainly electronic and inorganic, or will it be organic creatures like us, maybe very different from humans?  We don't know.  But I think the point is that even on this timescale of four billion years of evolution in the past, and even more in the future, this century is special.  And this goes back to what we're doing in Cambridge, because it is the first century when one species, ours, can determine that future.  We can determine whether there is a future which may lead to transhumanism, where we may eventually be superseded by entities quite different from us with far deeper powers of thought, or whether, by screwing things up, we could foreclose that future.  So, the stakes are very high.  They're higher this century than ever before.  So, even in the perspective of the 40 or 45 million centuries of the Earth's history, this one is very special.  

RS: A couple of last questions.  Should there be a Hippocratic Oath for those who work on AI?  

MR: I think if you asked a lot of medical practitioners, they would disagree about how much value is added by the Hippocratic Oath as a formal oath.  But I think it is very important that all scientists who work on subjects with any societal impact should be concerned about that impact.  Of course, scientists have no special expertise outside their particular area.  They have no special ethical sensitivities.  So, decisions about how science is applied should not be made just by scientists.  They should be made by the wider public.  And this has two consequences, I think.  First, scientists should do all they can to enlighten the wider public about science, because it's impossible to have a serious debate about any policy issue with a scientific dimension, be it energy, health, or future technology, unless people have some feel for science.  So, scientists have an obligation to make sure that the public is informed, at least in outline, about the choices that society has to make.  And that's important.  But I also think, over and above that, scientists have an extra responsibility: a responsibility to forewarn the public of downsides, and to try all they can to ensure that their work has benign applications and to avoid the dangerous applications.  I think I was privileged when I was younger to get to know some of the really great individuals who were involved at Los Alamos in making the first atomic bomb.  And many of these people returned to civilian life with relief after World War II.  00:50:56 But they maintained their concern.  They thought they had an obligation to do all they could to harness the powers they had helped unleash.  And great men like Joe Rotblat, Hans Bethe, etcetera, whom I was privileged to know, devoted a good chunk of their remaining lives to trying to do this.  Not always with success, but they felt they had an obligation to do all they could.  And I think that attitude, which the Los Alamos atomic scientists had, is needed in other areas of science now.  Scientists need to be concerned, but to realize that the decisions should be made not by the scientists but by the wider public, because scientists shouldn't play God.  The wider public should have a say, because the decisions are not purely scientific.  They are ethical and prudential as well.  So, I would say that scientists have a particular role, but they should realize that they have no special status outside their particular expertise.  

RS: Because of the societal impact?  

MR: Yeah.  

RS: So, last question, although I have 100 more, but I’ll end on this one.  So, why’s a nice cosmologist and astronomer like you worried about existential risk?  Why?  

MR: I think it's extremely important.  I think it's just one of the most important issues confronting us in the rest of this century, and it is extremely understudied.  The number of people studying extreme risks is tiny compared to those who are studying more everyday risks like radiation doses and carcinogens and things of that kind.  So, I think it is very important, and I feel that I've been fortunate to spend much of my life studying astronomy at Cambridge University, and I've come to realize that Cambridge University is a very special place.  It's not only the leading center for scientific research in Europe, but also it's a place that fosters interdisciplinary contacts, and therefore, I've been in contact over my career with people in different fields and have therefore felt it appropriate to try and draw these people together so that we can provide expertise for the public, for public campaigns, and for government.  I think it's very important that we should do this.  And I would feel negligent if I didn't.  And I should say also that if I look at how my career's gone, I was an academic researcher and teacher in universities for most of my career, but in my later career, I had opportunities to get involved in policy areas.  I was president for five years of the Royal Society, which is Britain's equivalent to the National Academy of Sciences.  And I was head of the biggest college in Cambridge, and also I'm a member of the House of Lords, which is part of the UK Parliament.  So, I've had this opportunity to get involved in policy questions and broaden my perspective.  And I realize from my wider contacts how much need there is to enhance the amount of thought that's given to these longer-term issues.  Because politicians tend to focus on what's local and what happens on timescales up to the next election.  And in fact, Juncker, the head of the European Commission, said politicians often know what's the right thing to do, but they don't know how to do it and get reelected afterwards.  So, we are up against that.  And in order to get politicians to do what needs to be done to remove these long-term risks, or to reduce them, they've got to think longer term and more globally.  And they'll only do this if pressure to do so comes from their inboxes and from the press.  Because they do care about what's in the press and what their electors say.  So, that's why I think outreach by scientists is important.  I know people who have served as scientific advisors in government.  They're often frustrated, because the politicians naturally have so much on their urgent agenda that they don't do this.  But if you can build up a strong movement in the public, then that can have a big effect.  And indeed, if you look at movements through history, from the abolition of slavery to rights for all races, Rachel Carson's work, gay rights and all these things, they started with public movements which politicians have then taken on board.  And one other thing I've been involved in, which I think indicates how we can have leverage, is being a council member of the Pontifical Academy of Sciences, which organizes meetings on scientific issues of societal relevance.  In particular, it organized in 2014 a very high-level scientific conference on sustainability and climate, with the world's best climate scientists plus economists like Jeffrey Sachs and Joe Stiglitz. 
And this was important [unintelligible] into the papal encyclical on climate, which in itself I think had a very big influence on forging a consensus at the time of the Paris Conference, because whatever you think of the Catholic Church, no one can deny its long-term thinking, its global range, and its concern for the world's poor.  And it had a huge impact, especially in Latin America, in East Asia, and in Africa, if not in the U.S. Republican Party.  So, it was very important in the lead-up.  And what we need to do is to try and maintain that momentum, because at that time, climate did briefly get fairly high on the agenda, and we want to do this not only for climate but also for the other long-term concerns, which may cause us to change our lifestyle a bit, but which are important if we are to survive this century and flourish in it.  

RS: Thank you.  That was great.  I appreciate it.  That was excellent.