Transcript of journalist and senior media executive Richard Sergay's interview with Rt. Revd. Dr. Steven Croft for the "Stories of Impact" series.

RS =  Richard Sergay (interviewer)
SC = Steven Croft (interviewee)


SC: Thank you, I’m Steven Croft, I’m the Bishop of Oxford, and I’m in my home in Oxford at the moment. 

RS: The Templeton grant and what it supports?

SC: The Templeton World Charity Foundation grant supports and funds a full-time research and parliamentary assistant for my work in the House of Lords, and more widely on artificial intelligence, data ethics, and science-related questions. So Simon, my assistant, is based half-time in Oxford to support my work locally here, and half-time embedded in the parliamentary unit of the Church of England. He is therefore also able to resource the other 25 Lords Spiritual and others working on the areas of artificial intelligence in the Church of England and more widely.

RS: Why Templeton, why important?

SC: The subject of artificial intelligence and data ethics is one of the great questions of the age, and the Church needs to engage and reflect on that as technology comes to a real crossroads for the future. As I am leading on that for the Church of England, it became important to have a wider resource to draw on, and the Templeton grant enables me to employ somebody to resource that part of my work and, through that, to resource the wider church's thinking on these issues.

RS: How did you begin to explore the world of AI?

SC: I began to explore it first of all through one of my sons, who works in the digital industry in computer games. He feeds me interesting books to read, and in one of them, on AI, a paragraph leapt out at me: that one of the results of artificial intelligence was going to be that, as human beings, we were going to need to reflect on our own identity in relation to intelligent machines, and that that was going to be a focus for us for the next 30 years. My insight, as a Christian minister, is that the Christian church has really important things to say about what it means to be human, and therefore we should be involved in and contribute to that debate. I am one of the bishops who sits in the House of Lords in the United Kingdom, and the chance came at that point to be involved in an all-party parliamentary group, and then in a much bigger enterprise, a House of Lords select committee inquiry into artificial intelligence, which took a very broad look at the development of AI in the UK and at the digital industry. Having been through that experience, I was then invited to join the board of the Center for Data Ethics and Innovation, and have continued that exploration over the last 18 months.

RS: How do you define AI and how has your understanding changed?

SC: There are lots of different definitions of artificial intelligence, but I find the distinction between broad and narrow helpful. Broad, or general, artificial intelligence and the rise of conscious machines was what first intrigued me and drew me in, in the literature. And I suppose what kept me awake at night in those early months was the idea of intelligent machines taking over the world, the stuff of science fiction. I discovered that that is probably still many years away, and we may never get there. But what then began to keep me awake at night and intrigued me was the rise of narrow artificial intelligence: the way AI is being used now in all kinds of ways, sometimes regulated, sometimes not very well regulated, and that seems to me to present a clear and present danger to the flourishing of human life and to justice and fairness. So I found the ethical questions around narrow AI really interesting and intriguing. And then over the last year, the more I've explored the potential of AI, a third dimension has begun to intrigue me: the whole opportunity that the development of AI represents for human life and health and well-being. So the moral dilemmas of not making use of this technology have also emerged alongside the dangers that it presents.

RS: Define human flourishing?

SC: Human flourishing for me means to have conditions in which human beings live creatively and purposefully and well; form good relationships founded on principles of justice, fairness, and equity; are able to work and earn their living; and can practice freedom of faith. And obviously, as a Christian, human flourishing is also about engagement with God and with Christ.

RS: Now make the connection with AI? 

SC: I think human flourishing connects with artificial intelligence in two ways. First of all, some of the ways in which artificial intelligence is beginning to operate could inhibit human flourishing, in areas such as work: what's happening to work, and working for robots in the gig economy, begins to erode some aspects of human flourishing and self-determination. And in other ways artificial intelligence can enhance and support human flourishing, particularly in releasing people from onerous labor, in driving economic productivity, and in the health benefits of AI.

RS: AI and data ethics, what’s so important?

SC: I think they're critically important for the whole future of humanity, really. The technologies have now reached a point where they're not just for science enthusiasts or the realm of science fiction. They're really beginning to impact everyday life and our economy, and we therefore face, as human societies, really critical decisions about whether and how we will manage those technologies ethically. If we don't, there will be really significant consequences, and also opportunity costs for human flourishing in the future. And if we don't engage as modern liberal democracies with these technologies and how they're governed, then we'll find they're driven either by the marketplace and large multinational tech companies, or by totalitarian states, given the levels of investment that China is making in AI. The whole world is going to be shaped by the way we respond to this kind of technology in the future.

RS: Are you suggesting we think ahead and begin regulation?

SC: I think there is a need to balance ethics and innovation, really. So it's not always a matter of regulation. But I think ethics should always be present and always need to be thought about with new technology. Regulation sometimes needs to follow from the ethics, and good governance will need to follow from it as well. So, one example is the use of technology in combating the present COVID crisis. A number of countries are developing, in different ways, contact-tracing apps which store data in particular ways. They hold out potential benefits for all of us, to learn to live well and re-emerge from lockdown after a wave of the pandemic. But they also carry dangers: in the way that data is shared, in whether or not they work effectively, in the number of false messages to isolate that come. And so those technologies should not be developed just by scientists or technicians. They shouldn't just be deployed by tech companies. There needs to be good ethical governance, good public engagement, and good engagement by the whole of society, in order to determine how that technology is used and rolled out. That's essential to get the benefits of the technology, but also to prevent the harms that could come from it.

RS: The ethical connection to AI, who makes the decisions, whose ethics are they?

SC: Well, I think that in itself is a very critical question. In the development of technology in the West over the last 15 or 20 years, those decisions have increasingly been made by the developers of the technology, and the concentration of power and resource in a very small number of global technology companies, mainly coming out of the United States, has become a really dangerous thing, I think, for the world. Over very recent years there's an increasing awareness of that, through the controversies around Facebook and through some of the controversies that have arisen within Google. And governments have begun to become involved, in the regulation of the flow of data and also in the ethical governance of technology. Another area which the Center for Data Ethics and Innovation has been engaging with is the use of data in predictive policing, used experimentally by some of the police forces in the UK, and the question of how that should be governed. Should it be shaped only by the tech company which is supplying the technology? Should it be shaped by the police force alone? Is there some way of introducing an ethics panel and standards of good governance into the way it is developed, to mitigate the risks of bias creeping into very critical decisions about police deployment and operational guidelines?

RS: Unintended consequences?

SC: I think globally, the understanding of how to do it is still developing. No one government or set of governments has the answer, and there are interesting things developing within the European Union and Britain, in Canada, in the United States, and elsewhere. It's clear that a whole approach to ethics needs to be developed within a company and within government, really. As an early stage of this, a lot of companies, and indeed governments and intergovernmental panels, have developed high-level ethical statements setting out the principles which should govern the use of artificial intelligence. The OECD has one; our House of Lords had five principles. They are all broadly similar and they're good, a good beginning. But actually, if you don't press that any further into genuine public engagement and good governance and regulation, it remains what some people term ethics washing: the appearance of ethics without really making a difference. I think there's also a need for granularity, and sometimes that means regulation. Sometimes it means codes of practice which are not necessarily supported by law. I think it always means good public engagement. A critical thing for the development of AI and data-driven technologies has been demonstrated over and over again to be building public trust and confidence. If the public have trust in something, then they're likely to pick it up, and it will flourish and run and its benefits will be exploited to the maximum. If the public begin to mistrust a new technology, then the benefits could be set back and the harms could be greater.

RS: The five principles, the OECD?

SC: So the OECD is a multi-nation coalition of, sorry, I have to look up what it actually stands for. It's the Organisation for Economic Co-operation and Development, I think, and the five principles are these. AI should benefit people and the planet by driving inclusive growth, sustainable development, and well-being; that's a kind of AI-for-good, for-the-common-good headline, which I think is really, really important. AI systems should respect the rule of law, human rights, democratic values, and diversity; that's really critical. There should be appropriate safeguards of transparency and responsible disclosure around AI systems, which I think is really important, so that if an algorithm is making a decision about whether you should have a bank loan, or why you haven't been shortlisted for a particular job, both of which are happening every day now, people should be given reasons why that decision has been made. Systems must function in a robust, secure, and safe way, so that risks are assessed and managed. And organizations and individuals developing or deploying AI should be held accountable for what they're doing. So we now have situations in the UK where artificial intelligence and data-driven algorithms are being deployed to make decisions in really critical areas of social care, of probation and release from prison, of assessing risk in different areas. And it's really important that those decisions are well made, that they have public confidence, that they're not using data which is biased in any way and are constantly reassessed, and also that the bodies using the AI are held accountable.

RS: How did these principles come about and how enforceable are they?

SC: Well, the OECD principles came about through international networks and governments meeting to agree to them. The European Union has a similar high-level set of ethical principles. The British government is a signatory to the OECD principles, as are the majority of developed nations, and I think they are a very good set of principles. The challenge now is to translate them into good ethics and governance in areas like policing and health and social care, and also private technology and the use of social media and so on.

RS: How do they get enforced?

SC: I think through a mixture of legislation and developing industry good practice. But it's impossible to invent one set of regulations and one regulatory body to operate across every different sector, with the common element being AI or data-driven technology. These technologies are deployed in sectors which already have their own ethics and codes of practice, for example in policing or in social care, and it's the bodies which are responsible for ethics in those particular areas which need to pick out the particular challenges, and receive expert guidance and support in doing that. And I think the way the Center for Data Ethics in the UK will develop is as a provider of that kind of advice and expertise, in an interdisciplinary way, so that the individual sectors take the ethical concerns forward. The common elements will always be public engagement and trust, and paying attention to good governance. I think there are some sectors, particularly the social media companies, where the sector itself is new: there had never been anything like Twitter or Facebook prior to 10 or 15 years ago. And the challenges of governance in those sectors are greater, because the ethical undergirding has not had 200 years to develop, as it has with policing, or even longer in the case of healthcare. So I think the challenge for those industries is greater, and the awareness of harm resulting from some of the exploitation of those technologies is newer, because the technology is new. So the UK government is currently considering an online harms bill, which is seeking to learn from some of the evidence of online harms, and to mitigate and regulate. And that's where new legislation is genuinely needed.

RS: The Center for Data Ethics and Innovation is…?

SC: The United Kingdom government determined several years ago that artificial intelligence and data technology were going to be a principal challenge and driver of the UK economy into the future, and therefore set up three new institutions to support and develop that. One was the AI Council, which brings together universities and government and industry to take forward the development of AI. And a second is the Center for Data Ethics and Innovation. The CDEI is currently situated within the government; the idea is that it becomes an independent body in due course. It has a board of 13 people, drawn from civil society, from industry, from the universities, and from regulators, to work at this question of how regulation is introduced in this space in a way which preserves both ethics and innovation. The idea is not to inhibit innovation in any way, because that's what we need going forward, but to have innovation which benefits people as we go forward. So it's been a very interesting journey over the last 18 months as we've looked at what is a rapidly developing field. We've done three principal pieces of work so far. One is a study of online targeting in different areas. One is a study of bias in algorithmic decision making. And the third is a horizon-scanning exercise, an AI barometer, which is a survey of what the ethical issues are likely to be in five different areas of artificial intelligence development over the coming year. It's an absolutely fascinating adventure to be part of, and a really interesting conversation between ethicists and philosophers and regulators and the technology industry, which is then empowered to give advice to the government and more widely.

RS: Those five areas are what?

SC: Financial technology, law, human resources, health, and... the fifth escapes me, but five really critical areas which are being surveyed about the way forward.

RS: In those areas in particular because?

SC: I think that's where we see the most rapid development. Sorry, social media was the fifth: financial technology, law, human resources, social services, and social media are the five areas. We see those as being the areas of the most significant development in the next five years, and therefore where the most thought needs to be given to the way ethics and good governance can be applied to the new technology.

RS: What does the ethics board look like?

SC: I think our picture of that is gradually emerging, and we're learning as we go. So I don't think there's a blueprint yet, but some elements are clearly very important. One common feature is always the engagement of the wider public and civil society, because all of these technologies are potentially of massive impact, so keeping a public, consumer, citizen-facing awareness is really critical. A second is to have some representation from science and technology, because obviously that's essential to know what is possible and what isn't, and what the key choices are. The third would be tapping into a broader and deeper ethical tradition, from philosophy and from the faith communities; the ethical resources which underpin our democratic institutions and our global institutions are really important to bring to bear on these questions. And that's where the high-level principles interface with the detail of regulation and good practice. Then a fourth partner in the conversation is those who have some experience of public standards and regulation and the drafting of those. So I think that four-party conversation is very good practice for the way ethics boards are developing. The other element to put alongside that is ensuring that the ethical regulation is not simply a set of fig leaves to cover any kind of practice within the organization or institution. That means the ethical representation needs to be at the most senior level, within the board of the company, not hived off into a separate ethics board, although you may need a small group to do some detailed thinking, and that it also has access to the granularity of the regulation and practice within that organization.

RS: Inherent bias in algorithms?

SC: I think that's, like most of these areas, quite complex at one level but not at another. I think there are generally accepted principles of fairness in decision making. Even though it's not always possible, for example, for everybody to have a transplant, there are emerging criteria for making those decisions in ways that are fair. The question, when AI and big data are deployed to support those decisions through algorithms, is what makes those decisions fairer than they would be if they were made simply through human decision-making processes. I think a number of things are emerging. One is that the datasets themselves have got to be as accurate and fair as possible; there are some examples of bad datasets being used to train algorithms, which then produce worse decisions. The second is keeping some human engagement in the process. Now there is something of a debate around this, because there is a paradox that the more you use algorithms and accurate data, the better the decisions seem to be. But there is an element of our humanity which, I would argue, is lost if decisions about people's lives are made only by machines and algorithms. So establishing that human-machine partnership, and preserving it, is I think really critical for humane decisions to be made. And that is also really critical for the third element in good decision making, which is accountability, and explicability, transparency: so that there is always a human face and voice to explain why this decision has been made, even though an algorithm and data may have played a part in the making of the decision. So those three things, good data, good partnership, and transparency, I think are really critical.

RS: Examples of partnership between humans and machines with positive outcomes?

SC: One example would be radiographers and cancer diagnosis. It's possible, and actually there are some studies now which show, that machines give a more accurate and faster reading of scans, for example breast cancer scans, than humans working on their own. But a partnership of human radiographers and AI working together actually gives the most accurate result. And obviously you'd always hope the mediation and communication of those results would be person-to-person, not machine-to-person. I think in policing and predictive policing, again, involving real people in those decisions, as a check on what the data is telling you and to apply common sense, is also there. Similarly in probation, in assessing sentence length or assessing the release of prisoners and the risks involved, human involvement is needed to gain a rounder picture of the person's circumstances, and also to mediate and explain the decision.

RS: Things that keep you up at night?

SC: I think one very, very big issue is the future of work, both with increased automation and the fact that the economic effects of increased automation are going to fall very unevenly across the population. Before coming to Oxford I was the Bishop of Sheffield, in an area of the country with former mining communities. The economy in that part of the world has recovered to some degree from the closure of the coal mines, but it's recovered through the two major industries of warehousing and distribution, and call centers. And those are the two industries which, in the UK, I think are going to be most affected by automation in the coming years. A great many call center jobs and warehousing and distribution jobs will be automated, perhaps as many as 60 percent. A region of the country like that is going to be much more adversely affected than a region which depends on high-tech industries. So the economic questions are really significant, and how the workforce is retrained and other economic opportunities are created is going to be really critical. And I would imagine that the same will be true in the United States with manufacturing jobs as automation increases and takes hold there. The other dimension of work is regulating and mitigating the effect of computer-controlled workforces in the gig economy. That applies to the remaining jobs in warehousing, where you hear tales of people having to work to inhumane regimes in terms of the detailed instructions they are given and the lack of autonomy over their own work, but more so to Uber drivers and others in the gig economy; having their whole working lives regulated by machines and algorithms will, I think, be profoundly unsatisfying. So the area of work is one. A second is the manipulation of people through online targeting and the attempts to influence and affect behavior in the real world: through nudging, through accumulating data about us, through manipulation in different ways. I think the ways in which big tech has begun to accumulate data and use it need a great deal more transparency and scrutiny. And also the effect of that on actual behavior, not least voting patterns, and the way that undercuts our democracy and patterns of public debate and public truth; and the way in which people's whole lives can be manipulated and shaped because of what others understand and know about us. Often the issue is not that people's data is being taken away from them; it's that we are being incited to give it up without knowing the full consequences of that. So I am very attracted to the thesis put forward by Shoshana Zuboff, in her book The Age of Surveillance Capitalism, that we're seeing the big tech companies encroaching more and more on different aspects of human life, and actually beginning to influence and shape the very essence of what it means to be human and to act in a free manner. And we need to take notice of that as a society, because these technologies are so all-pervasive.

RS: Help me understand-- we can be manipulated because of data they have aggregated on us?

SC: Yes. Well, the most obvious examples are the case studies that have been run of the impact of microtargeting in elections. The social media companies amass a great deal of data about our choices and preferences and views, and artificial intelligence makes it possible to crunch that data and to design adverts targeted at particular sections of the electorate. There have been a number of examples, most recently the last presidential election in the United States, of that data being used in a way which is not exactly covert, but is not open and transparent either, because nobody sees the sum total of the micro-adverts that are being sent to different communities. And those micro-adverts have been demonstrated to work, in that if you feed people a particular stream of information, it's likely to influence their voting patterns in particular ways. It's a byproduct of people receiving their news media from a narrowing range of sources, all of which appear to have a political interest, and that is all part of the broader segmentation of society. So it's not all the fault of technology, but there is evidence of willful manipulation beginning to have an effect. It's been true for a long time, of course, in advertising: the whole advertising industry is built on encouraging us to buy things that we don't always need. But the application of technology to that means it can be finessed to a very high degree. One of the early stories I came across which alarmed me was of a robot vacuum cleaner which, in order to plot its route around my house (I don't actually have one, but suppose I did), would need to take pictures of my living room and transmit those pictures back to its headquarters. The data in those pictures can then be sold to an online marketing company, which can send me a targeted advertisement for a cushion which would exactly suit the décor of my living room. To some degree that is convenient for consumers, as when Netflix throws up the next film I might want to see, but there's a line this crosses when it becomes manipulative, particularly if I don't know it's happening or I'm not aware of the ways in which my data is being shared. One of the themes of Shoshana Zuboff's book is the way in which we all agree to these very lengthy terms-and-conditions statements without fully understanding whether data from, say, our smart home heating system is then going beyond the particular company that's using it.

RS: Privacy concerns are critical?

SC: Yes, privacy concerns are very critical. There was a fascinating television documentary in the United Kingdom a few months ago which focused on the home security systems being marketed by Amazon. They're marketed on the understanding that you can remotely see who is coming to your front door, and that sounds a very attractive thing, a very good thing for your home security and personal security. But these devices are then being connected up and offered to other agencies, to neighborhood watch groups, and then, I think in the States, to some local police forces as well. And what then happens is surveillance of a whole street, in ways which were never intended by the individual users of the technology. So we need to be alert as a society to these developments, which are eroding individual rights and privacies that are a very precious part of our lives.

RS: The positives?

SC: Yeah, I think the most obvious and exciting and wonderful possibilities are in the areas of health and medicine, undoubtedly. AI is already producing positive benefits in reading scans and in diagnosis, which is obviously very, very powerful. I was at a global conference in the Vatican a few months ago, at which a doctor was describing the next stage of development in cancer diagnosis: being able to analyze patterns in the blood from tiny pinprick samples, droplets of blood, and scan those to detect abnormalities which would be early signs of cancer. That, I think, is going to revolutionize cancer diagnosis in the coming years, which would be an enormous transformative development for medicine. When AI is combined with the possibilities of developing genetic medicine, I think that can also be huge. So I think the whole area of technological development will be massive. I think the benefits for consumers are significant, in terms of the choice available and the tailoring of products to consumers. I think AI can improve decision making, with good safeguards, in areas like banking and HR. I think AI can remove drudgery from human work, and be very, very good and powerful there. And we're already seeing how AI and digital technology are enabling communication between people in a very wonderful way: through the COVID pandemic, many activities, including many church activities, have been really well supported by the digital technologies that we have. So I think there are many, many significant benefits. One of the lines that stayed with me from the Rome conference was that the benefits of artificial intelligence in healthcare are so great that the moral risk is in not allowing the world to benefit from those technologies. It's a really significant thing. So I think these are profoundly helpful tools for the future of humanity, but they must remain tools which humans govern and control and direct, rather than being in the hands of either totalitarian governments or huge companies, which would mean they become separate from the people who benefit from the technology.

RS: Faith and spiritual issues. How does faith inform your thinking and perspective about AI?

SC: The whole Christian ethical tradition, I hope, informs my perspective, and some of that tradition is embedded now in many of our great multinational institutions, and indeed national institutions: concepts of fairness and justice particularly. But the other way in which my Christian faith, I hope, informs my view of ethics and artificial intelligence is the whole Christian reflection on what it means to be human, which is really at the center of most of the questions about the deployment of artificial intelligence. How do we really support human flourishing and growth? The Christian tradition is a very powerful one which celebrates and affirms what it is to be human, because at the heart of the Christian tradition is the faith and the belief that Almighty God, maker of Heaven and Earth, became a human person and lived a human life, and did so in part to demonstrate what a fully human life is like and how human life is at its most fulfilling. So the Christian church has been reflecting on what it means to be human, and on human flourishing, for more than 2,000 years. I think we have something very, very important to bring to this new conversation about human flourishing in an age of technology.

RS: The church’s role in helping understand technology and making sure ethical guidelines are good?

SC: I think the church has a really important role as a voice for a distinctive ethical tradition which has shaped much of the way the world is. Clearly, in a global world with multiple faith traditions, it will be one voice within that, but still, I would hope, a very significant one. The church also makes these wonderful claims about God becoming a human person, which gives the Christian tradition about human dignity and the worth of each individual a particular value. And it's those traditions, about faith and justice and love and human dignity, which I think need to be brought by the church into the debate around artificial intelligence as it's currently being used and deployed. I think there's a second debate about the development of artificial general intelligence, and some ideas which are current that humanity may well evolve beyond its present experience into some kind of machine age, or conscious machines, or human-machine hybrids. And I think the Church's contribution to that debate will be much more cautious about the idea that there is a stage of evolution for humanity beyond the physical. That's not a majority debate at the moment; it's a minority debate for some people who are exploring these ideas technically and philosophically. But it's an important place for the church to make a contribution in time. I think the mainstream debate is how you protect human dignity and worth within the evolution of rapid and very powerful technologies, so as to benefit people fairly and mitigate the harm.

RS: So some of those virtues that you spoke about, love, compassion and understanding. How does that fit into this AI world?

SC: I think a lot of the ethical values are shared by the people developing artificial intelligence. But I think what needs a greater, more general understanding is the immense power of these technologies to reshape human life into the future. I've been present in a number of conversations with research scientists and others involved in the development of AI, where they've said to the assembled room: you do not realize just how powerful these technologies are, and how great the changes are going to be which are being introduced. And they've effectively said to us: please do not leave the key ethical decisions about how these technologies are deployed and governed to the scientists. We do not feel qualified, as scientists alone, to be making these huge and enormous decisions; there need to be other voices at the table. And I strongly believe that the voice of the Christian Church is one of the voices that needs to be at the table and engaged in these debates, because the issues at stake are so enormous for the future of work and family and institutions and good governance and communication.

RS: Faith-based leaders can bring something to the table that science can't?

SC: I think faith-based leaders can bring a broader perspective on the whole of humanity. It's not that the scientists may not have that, but it wouldn't be something that they would necessarily bring. Faith-based leaders would bring a broader awareness of an ethical tradition and of ethical decision making, and they bring, I hope, real, deeper insights into humanity and human life. So, as it were, a broader, general perspective to complement the scientists' very specialized and technical perspective.

RS: Consciousness?

SC: Personally I'm skeptical that it's a real possibility, actually, the more I've read about it. I think the artificial intelligence which is being developed is always across such a narrow band that it isn't actually anything like a human intelligence. And one of the, as it were, byproducts of studying artificial intelligence is an even deeper awe and respect for human beings as conscious beings with multiple intelligences: emotional, intellectual, musical, and artistic. So no, I don't think we're anywhere near conscious machines, or anything which could be described as parallel to human intelligence. Some people would even dispute the term artificial intelligence, because they would argue that what the machines are demonstrating in manipulating data through algorithms is still not anything like the human quality of conscious thought and intelligence. So I am skeptical about that. And I believe as a Christian that humanity is created by God as the pinnacle and high point of creation. But what the study of AI does for me is help me ponder that mystery with even more awe and wonder. The difficulty of us creating anything which approaches ourselves is, for me, a sign of the image of God in humanity, and of how God has gone to work in us in very particular ways.

RS: 24/7 technology and the impact it’s having on your congregants and children? How does the church respond?

SC: Yeah, I think that technology is very seductive, and it shapes our lives in all kinds of ways. And technology, particularly social media, is intentionally designed to be addictive, to keep us coming back for more. I think we all, as citizens and certainly as Christians, need to be self-aware about that and about the effect that technology is having on us. One of the best ways to do that is to abstain from it from time to time, so I think the Christian idea of Sabbath is really important: having days in the week when we are not continually looking at screens and going back to social media. I think reflecting on how we are kind, and take the Christian virtues of kindness and gentleness and hope into our engagement with others through social media, is really important, and codes of practice can be very helpful there. And also not allowing social media to erode the boundaries of our own person, I think, is really important. Questions of identity are really significant, and how identity is affected by our use of social media; paying as much attention to the development of identity in the real world as in the virtual world is vital.

RS: Are you hopeful on that point?

SC: I think there is hope, yeah. I think people are becoming alert to the dangers of social media. One of the areas where society has been far too slow to regulate has been children's exposure to social media and online harms, which I think is one of the real tragedies of the last 10 or 12 years, and I think we need much more rigorous regulation of children's exposure to social media. It's hard, because the technology keeps evolving: as soon as one generation has its Snapchat, the next generation has a new set of social media to get into. So regulation is struggling to keep up, but it really needs to happen, and I think there is an urgency there. But a lot of adults I talk to are becoming much more self-aware about their exposure to social media and technology generally and what it's doing to them, and are aware of the need to regulate it; they're aware of the effects on their concentration span and the lowering of their ability to read. So I think we're seeing a curve: this has been very novel and new, and people are now realizing the harmful effects as well as the benefits and are moderating it somewhat. But it's still very new, and we're all engaging with it slightly differently depending on our generation, I think.

RS: Your role as a church leader is to do what?

SC: I think my role as a church leader is both to inhabit the new media in a way which is hopefully creative and gentle and positive, and also to model ways of engaging and disengaging with it in a way that is regulated within my own life. And I find that as hard as anyone else, when the little notification pops up on my phone about how much time I've spent on the screen in the last week. It's always a salutary moment to realize how much that has run away. But I have found abstaining from screens and social media for 24 hours a week, taking a Sabbath, extremely beneficial. I am alert to things that other people are finding helpful and good and constructive, and hopefully I can take my share of mediating what's good and wholesome, as well as not setting a bad example.

RS: AI and ethics as you peek into the future, what are your feelings?

SC: I think we're at a really important moment. And I hope that there is sufficient strength of will in European and UK governments to provide a strong mediating influence on the big technology companies and on the other developers of AI, to enable the benefits of these technologies to be released without the potential harms that could come through them. I think over the last five years there's been a strengthening of that sense of purpose, certainly in Europe, drawing on our Christian ethical and philosophical traditions. I think we're finding some practical, good steps forward to regulate and govern the use of these technologies properly, which will encourage their use but also mitigate their harms. So I think overall I am hopeful, though I am not blind to the dangers and the difficulties, and I think within the UK our government needs to continue to grip this as a matter of urgency and priority, and to resource the thinking that's going into it.

RS: What did we miss?

SC: A further area of concern for me is the way in which we are sometimes drawn into giving too much human personality to machines. We seem to have a natural human tendency to want to ascribe character and personality to machines, which the technology sometimes plays into. We find it with our personal digital assistants and smart speakers. I heard a few months ago of a girlfriend who broke up with her boyfriend because of the way he spoke to Alexa, and the way in which we behave with machines shapes the way we behave with humanity. There's also some very disturbing material emerging about people recreating deceased loved ones through their digital memories, and pursuing a relationship with those memories in virtual form, as if they were the person themselves. I think we need to be very, very careful about ascribing human qualities and identity to a collection of data and algorithms, which isn't really a person, because in the end we're building on something that's not truthful and real. It's about that question again of what it is to be a person, and not putting all of that onto a collection of electrons and data.

RS: We do anthropomorphize machines around us, right?

SC: And it's rather more difficult when they answer back, isn't it, and talk to us as if they're people. I'm very much in favor of electronic voices sounding like electronic voices. It's a small thing, but with speech recognition and speech software now, it's possible to make them sound more and more human. There were some experiments last year where people booked restaurant tables with a robot which introduced pauses and hesitations into its speech to make it sound purposefully more human. And I'm really cautious about any machine or software which impersonates a human being to that degree; I think we need to be very, very careful about it. So with all artificially intelligent machines performing functions, we should always know that we are speaking to a machine. If we're talking to a chatbot for therapeutic care, which has now been developed, the chatbot should remind us very, very regularly that it is just an algorithm and a machine, and should be feeding back to us that, as persons, we are different from the machine we're talking to.

RS: What does it say about our humanity if we conflate machine with the human?

SC: I think in the end it diminishes us as people. We're designed as human beings to have our richest relationship with God, but also with other people, and we can have relationships with other elements in creation, with animals, with pets and so on. But the richest relationships we have, and what we're designed for, is interaction with other people, with a whole society. And I don't think machines can ever be a substitute for that.

RS: What does it mean to be human in the machine age?

SC: For me, to be human always involves a relationship with God first, and then with other people. And I think being human in the machine age is about keeping our interactions with machines at the level of what is useful, and proportionate to those other key relationships. We are called to be relational beings, with God and other people and with ourselves, keeping relationships with machines and the digital in a proportionate place.

RS: Proportionate?

SC: I mean being clear that we are the agents in the relationship, and that the machines are there to carry out particular tasks conveniently, but we’re not to expect love and affirmation and support from machines in the same way that we would from other people.