Transcript of journalist and senior media executive Richard Sergay's interviews with Vint Cerf, Nuala O'Connor, and Michael Wear for the “Stories of Impact” series.


Watch the video version of the interview.

RS =  Richard Sergay (interviewer)
VC =  Vint Cerf (interviewee)
NO =  Nuala O'Connor (interviewee)
MW =  Michael Wear (interviewee)

VC: My name is Vint Cerf. I'm Google's vice president and chief Internet evangelist. 

RS: New models of citizenship in a networked age?

VC: Well, let's talk a little bit about what citizenship is like in this networked age. This is the world that we live in today. I think it puts substantial obligations on the part of people who are heavily engaged in the online world. Let me explain. In the past we had access to a pretty wide range of information through other media. We had newspapers, television, radio, books, magazines, and so on, which represented a substantial amount of content. And you'll notice that we didn't watch every movie and listen to every radio program. So we were filtering our own consumption of content. But it was also the case that for the most part the content was generated by a small number of producers and consumed by a large number of consumers. Today, in this online world, a lot of the content that we might encounter is generated by a much broader range of sources. So why is this different? Well, in the case of a small number of generators you have the potential, anyway, to do a better job of validating the content, doing research, qualifying it, and so on. When you get into an environment where anybody and everybody puts up a tweet, or a page on Facebook, or takes a photo and distributes it, you don't have as many clues about the quality of the information that you're encountering. This leads to at least two very significant problems. The first one is that the onus is now on your shoulders to try to figure out what's good quality information and what isn't. It's critical thinking: you have to actually ask yourself, where did this information come from? Is there any corroborating evidence for the claims and assertions that are being made? Might there have been some motivation for putting this information out, to persuade you to do something or to change your mind about something?
And certainly the 2016 elections and other things, not just in the US but in Europe and elsewhere, demonstrate that the openness and the reduced barriers to generation of content pose this big hazard for those of us who are trying to consume it. So we're now forced to apply critical thinking, which is work, it's real work, and not everyone is prepared to put the work into qualifying the content that they're encountering. In the older cases, before we had this proliferation of sources, we had some reasonable expectation that somebody had spent a lot of money to produce the content and therefore would go out of their way to try to improve its quality. Although I will say you can print lies in newspapers and tell lies on the radio and television and so on. So you still had some due diligence to do. But now it's much harder because there are so many different potential sources. So that's one problem. The second problem is that because of the reduced barriers to the injection of content into the system, you now invite a whole lot of malicious activity, or, I don't know what else to call it, people who just want to inject noise into the system, like vandalism. The opportunity to do that is so cost free that people do it. And so we've exacerbated the problem because we've made it so easy to put information into the system that you and I might encounter. The search engines, like the ones that we run at Google and others, are not necessarily capable of distinguishing good quality and bad quality information. You can imagine trying to put algorithms together to make the effort, or have the capability, to decide whether this is good quality or not. We get a clue from the World Wide Web because we look at which pages on the Web are pointed to by other pages; in very rough terms, that's the so-called PageRank. The system used at Google to order the hits you get back from a search uses an algorithm like that.
The problem we have is that someone can use a botnet or other mechanical mechanism to make it look as if lots of people believe that this is important information, even though it's just an algorithm running on a whole bunch of computers that somebody has gotten control over. So, in our zeal to lower the barrier to access to information and to sharing of information, we've also opened Pandora's box. So now we have to learn how to deal with that. Not only do the providers of the information and the media through which the information arrives have to think about this problem, but as the recipients of that information you and I need to think about it too. So being a citizen is actually harder work in cyberspace than it might have been in our earlier incarnations.
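The link-voting idea behind PageRank that Cerf sketches can be illustrated with a toy iteration in a few lines of code. This is a minimal sketch of the published algorithm's core idea, not Google's production system; the link graph, damping factor, and iteration count are invented for illustration.

```python
# Toy PageRank: pages "voted for" by links from other pages accumulate rank.
# The damping factor models a reader who occasionally jumps to a random page.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping page -> list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if not outgoing:              # dangling page: spread rank everywhere
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share
        rank = new_rank
    return rank

# A tiny made-up web: everyone links to "hub", so it ends up ranked highest.
web = {"a": ["hub"], "b": ["hub"], "hub": ["a"]}
ranks = pagerank(web)
top = max(ranks, key=ranks.get)
```

A botnet that fabricates thousands of pages linking to one target is gaming exactly this kind of vote counting, which is the vulnerability Cerf describes.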

RS: As a father of the Internet, did you foresee any of this?

VC: When Bob Kahn and I were doing some of the original work on the Internet and on its predecessor the ARPANET, we had a small inkling of these issues, but nothing like what we are encountering today. For example, in the predecessor to the Internet, the ARPANET project, which was funded by the Defense Department, about two years after that network came into being one of our colleagues at Bolt, Beranek and Newman, Ray Tomlinson, invented email. Networked electronic mail. And within not very many months we saw people creating mailing lists to distribute information about things they cared about, for example science fiction. So we had a mailing list called the sci-fi lovers, and the engineers would fight with each other over who were the best writers and what were the best novels, because they were geeks and they all read science fiction. But that was clearly a social phenomenon. Then we had another one that I recall from Stanford. It was called Yum Yum. It was a restaurant evaluation distribution list for restaurants that were in the Palo Alto area, and eventually it expanded beyond that. So we saw the evidence that this online environment had social implications to it; people were applying it in social interaction in addition to whatever computational projects they were working on, remote access to other people's software and computing capability. I certainly didn't anticipate, however, what happens when you get billions of people, including the general public, into this environment. In the earlier stages of the Internet's evolution only the engineers and researchers who were sponsored by the U.S. government had access to the system. And that's a cohort of people who are, I would say, not the same as the general public. And their motivations are not the same, or weren't the same; their interest was in making it work, using it for their academic and research purposes.
The general public, on the other hand, has a much broader range of interests, some of which are quite nefarious. And so when we opened it up to public use around 1989, and more especially when the World Wide Web showed up around 1991 and the Mosaic browser showed up and then Netscape Communications, there was a significant shift in the content of the network and the motivations for sharing. Then came the mobile phone, I'm sorry, the smartphone. Mobile phones came around in 1983. The smartphone showed up in 2007, which is now twelve years ago. Within the span of about a decade or so, billions of people have become addicted to their smartphones and their ability to both inject and collect information from the online world. And that's a phenomenon that we didn't experience in previous technology revolutions.

RS: Citizenship is no longer geography bound on the Internet? 

VC: So an interesting feature of the Internet's design was that it is insensitive to and unaware of the physical location of any of the endpoints on the network. It doesn't care about international boundaries. Packets just flow across those boundaries. Think of telephone numbers: the phone system has country codes associated with it, like the UK is plus 44 and the US is plus 1. The underlying addressing structure of the Internet is not like the phone system; we don't have any of those on the Internet. We do have domain names, which are the next layer up, like .UK, .FR for France, .DE for Germany, or .US. So the domain names have some indication of geography, but even there we have what we call generic top-level domain names like .COM, .NET, .ORG, which are insensitive to any country's boundaries at all. So the consequence of all this is that the network erases distance and erases distinctions among different geopolitical boundaries, and that's part of its strength in a way, because it means that it doesn't matter what time zone you're in and how far away you are; you can be part of a cohort of people who are working together, talking together, arguing together. And because you're using computer-mediated communication it doesn't even matter what time zone you're in, because for example you can write an email at five o'clock in the afternoon and someone else won't read it until ten o'clock the next day in their time zone, and that's fine, it works just fine. They answer and eventually you get the answer and you go back and forth. So this is a very different environment than the one where two people had to be awake at the same time to interact with each other.

RS: So what's that mean for citizenship? 

VC: It does create an interesting, let me call it a tension, where people who are scattered around the world in different geographical and geopolitical locations may find themselves more bound together by their common interests than they are by their citizenship by country.

RS: Netizens?

VC: Netizens is a wonderful term for that. I think that in today's world, and we're speaking in 2019, we are actually seeing a shift back away from this notion of it doesn't matter what country I live in, my interests and your interests coincide, we are part of a cohort of compatible interests. We're starting to see the resurgence of nationalism in our geopolitical world. And I think that is actually creating waves, so to speak, in this online environment. It's re-imposing the geopolitical boundaries that I had hoped the Internet would erase.

RS: If you are a netizen, everyone is a neighbor? 

VC: A lovely phrase here is that if you're a netizen, anyone is a neighbor. It makes me think of a famous television program, Mister Rogers' Neighborhood. Everybody was a neighbor.

RS: How does that impact human obligations in a world in which billions are connected. What does that mean?

VC: Well, I think that there are ethical demands that are placed on people who are part of this online environment. Not everyone is willing to accept those ethical demands. But because whatever you do and say can have a very material impact on a human being, you should feel responsible to think about what you're doing and what the impact is of what you say, do, and show. Now, unfortunately there are other people who have no ethics at all, and in fact they deliberately put things up on the net that are potentially harmful. And so when you see bullying, when you see terrible stories about teenagers who have been hounded online and have decided to kill themselves, I mean, that's about as unethical as you can get. And the difficulty we have in that environment is figuring out where the perpetrators are and where the victims are, and the worst problem is the one where there is an international boundary between the two of them. What cooperative agreements exist between two countries to deal with harmful actions that have been taken? We don't have tools yet for dealing with that. We don't have common treaties, either bilateral or multilateral, and that's what we need. The Secretary-General of the U.N. has highlighted this with his High-Level Panel on Digital Cooperation, which asks exactly that question: what kinds of cooperative activities should countries undertake in order to reduce the level of harm and hazard that one might encounter in this online world?

RS: Do you think there's a common good anymore that ties us together as one community via the Internet? 

VC: Well, I would like to think that as human beings on a common planet on whose existence we depend, we realize we don't have any other place to go. Yes, some people want to go to Mars, but I can tell you it's a pretty inhospitable place, and the planets that we've discovered circling other suns are too far away to get to in any reasonable amount of time. So this is the only planet we have. I had wished that this online world would highlight for everyone how co-dependent we are on each other's actions. And I think that message is still struggling to get through; it's dissipated in part by the rise of nationalistic views. Ian Bremmer puts it a slightly different way. He runs the Eurasia Group. He's written several books, one of which is about what he calls G-Zero, which relates to the G7 and the G20: there isn't any G left, it's everybody for themselves, countries becoming very much focused on their own populations and not necessarily paying attention to the effect of their actions on anybody else. That's not a good trend. And what I had hoped the Internet could do, and I still hope it can do, is to draw people's attention to the consequences of their actions on others, not just in their own countries but on the planet. It would be a pity to squander the collective ability that the Internet offers to recognize problems and work together to solve them, instead of fragmenting into a bunch of very self-serving environments.

RS: So the Internet itself can help build a collective wisdom around the issues that lead to better citizenship? 

VC: I think we have to be careful not to imbue the Internet itself, which I view as a purely technological entity linking computers together, with too much agency. Then there are the applications that sit on top of it that you and I make use of. I think that this system of network and applications has the potential to help us cooperate with each other. But we are the ones who have to decide we want to do that. The network does not force us to work together, it doesn't force us to reason together, it doesn't force us to cooperate. It merely provides the potential to do that. We have to decide what we want to do.

RS: Does that speak to a moral obligation of citizenship in a networked age?

VC: Figuring out how people decide to adopt and take on board moral obligation is way beyond technology. It's a question that our society needs to answer in some sense. Individuals, given their own natural interests, are likely to be interested in their own well-being or the well-being of their families, and not see more broadly than that. So this is a sociological question, not a technological one. The Internet may be helpful, but as I've already pointed out it may be harmful too, because of the misinformation that flies around on the net. So it's not going to induce the kinds of ethical and moral obligation that I hope people will feel; it can only be helpful if they feel it, giving them an avenue for exercising those obligations. It doesn't force them to do it at all.

RS: On balance, do you see your creation, in terms of citizenship, as having a positive impact or a negative one, or obviously both?

VC: Well, let me offer a kind of metaphor for a moment, and I warn you it is a metaphor. Imagine for a moment that here's Bob Kahn and here's Vint Cerf, and we're trying to figure out how to design a system that allows an arbitrarily large number of networks to interconnect. Imagine that we were designing a global road system, and we know that each country is going to build its own roads. We're not going to build the roads for them. But we tell them how to design the roads so that the interfaces between countries will match up. The same size vehicles can go on them; they have to be no taller than something, or heavier than something, or wider. We give them very broad standards by which the vehicles should be built in order to make use of this road system. So we offer recipes for building roads. We offer recipes for the rules of the road: please drive only on one side, not both sides; please have signals to adjudicate traffic. And oh, by the way, presumably adjacent to the roads are going to be buildings; we have nothing to say about that. You can choose what kind of building, how tall it is, and what it does. We said nothing about what the vehicles do, other than rules to keep them from running into each other. We didn't say anything about what the vehicles carry, or how big the vehicles are, other than fitting on the road. So in a way I think of the Internet as an infrastructure like that: it's a road system with some rules, but the freedom to design the buildings and the vehicles and what they do is very open, by deliberate choice. We didn't even say what material the roads need to be made out of. We left that open: you can use macadam, you can use concrete, you can use crushed rock, in the same sense that we didn't require any particular technology for the networks that make up the Internet, as long as they can carry packets from one place to another. Bits, that's all that's needed.
So I see us as having provided the infrastructure, or the recipes for the infrastructure, and the freedom to build and connect, again not dictated from the top. But then the uses that get made of it are not dictated by the technology, except possibly for limitations, like the technology doesn't let a car run more than 100 miles an hour because if you go faster than that you fly off the road. Maybe we don't have technology that lets you run more than 100 gigabits a second, until someday we have a technology that will let you run a terabit a second, in which case it all gets upgraded and we all run faster. The architecture didn't change. So some of these parameters could change. So I think we offered this very open environment in which lots and lots of things could be built. Lots of services could be constructed and offered, lots of business models could be applied, but there is nothing that forces people to do moral as opposed to immoral things, or harmful as opposed to beneficial things. All we can hope is that the people who end up using these technologies choose to do things that are beneficial for themselves and others, and not harmful. But we know, because we all read Shakespeare's plays that are 400 years old, that people have motivations that are not necessarily always beneficial. And so we have to be prepared for that.

RS: How do you maintain good citizenship in an age of increasing decision making by machines?

VC: A lot of people are very worried about the use of algorithms to make decisions about various actions. So for example, if you're trying to get a loan from a bank, it might have an algorithm that decides, based on what it knows about you, that you shouldn't get a loan, because the algorithm thinks that you're not worthy or not capable of repaying a loan. I think being steered by algorithms is not the best course for us. I think some of these algorithms can be useful as advice givers. They can talk about correlations, but they often don't necessarily speak to causality, and that's why for really important, life-determining decisions I would prefer to have human beings in the loop. Think of the things that we've done with machine learning that help us identify various medical conditions like diabetic retinopathy, by looking at the back of the eye, taking a picture of the retina and using a computer algorithm to detect whether or not retinopathy appears. You might say, well, how do you do that? And the answer is the machine learning mechanism looks at hundreds of thousands of images which have already been classified by experts. And once you've trained it, then it can make as good or better decisions than a human being can. But you might still want it to say to the human being, the doctor, my estimate is that this is a case of diabetic retinopathy. I'd like the human being to be persuaded by the algorithm that this is correct, based on that person's experience, as opposed to simply assuming that the algorithm is correct, because we can't know that it will be in all cases. So we should not allow decisions to be made autonomously without human intervention, especially if they are significant decisions. Think of a self-driving car, for example, deciding where to go.
I think it's very important that there be ways of assuring that those algorithms get tested by humans who look at the decision-making and determine whether or not the algorithms are making decisions that look like they're safe.
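The human-in-the-loop pattern Cerf advocates can be sketched very simply: the model's output is packaged as advice with a confidence score, and the final call on a significant decision is always assigned to a person. The `recommend` function and its threshold below are hypothetical, not from any real screening system.

```python
# Sketch of human-in-the-loop decision support: the algorithm advises,
# the doctor decides. Threshold and field names are invented for illustration.

def recommend(p_disease, threshold=0.5):
    """Turn a model's probability estimate into advice, not a decision."""
    advice = "likely retinopathy" if p_disease >= threshold else "likely clear"
    return {
        "advice": advice,          # what the algorithm estimates
        "confidence": p_disease,   # shown to the doctor as supporting evidence
        "final_call": "doctor",    # the significant decision stays with a human
    }

# The doctor sees the estimate and the confidence, then makes the call.
report = recommend(0.87)
```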

RS: So peeking into the future, could an algorithm's decision making one day be sophisticated enough to know better than me how I should vote or what I should do as a citizen?

VC: So this is an interesting possibility. It reminds me a little bit of a 1909 story by E.M. Forster called The Machine Stops. He posits an environment, a society where nobody leaves their homes; the machine, whatever it is, he doesn't really describe it in detail, delivers food. There is a way for them to interact with each other visually and orally. Think about 1909: telephones had been around for a while; no television. So people are isolated and they interact in this remote way, and then one day the machine stops working. The question is what happens to that society. And I won't tell you the answer; read the book. But the point I want to make here is that if we allow ourselves to become overly dependent on machine-driven decision-making, then we may foreclose a whole bunch of possible options that we should have and could have considered. And so once again I think that it's important for us not to allow algorithms to dictate what we decide to do. They should inform, but they shouldn't be the sole determinants of the decisions that we make.

RS: Machines are good at correlation but not so good at causality, unless you have a model of causality to go along with the correlation?

VC: So some people speculate that machines will someday be better than we are at making all kinds of decisions, and I am willing to accept that possibility as one universe that we could end up in. I've seen enough, however, of the brittleness of the kinds of mechanisms we can build today to be a little suspicious of this. In fact it's a little more nuanced than that. I think what happens is that sometimes these algorithms work extremely well. They actually are better on average than a human being making a decision, especially if they've been exposed to hundreds of thousands of possibilities that have already been labeled by experts: this is evidence of x or y. The thing that makes me cautious about this is that those same very deep machine learning mechanisms have also been shown to have brittleness in them. What do I mean by that? We have instances where the machine is trying to classify an image, and it decides it's a cat or a dog or a kangaroo or something. But altering just a few pixels of the input image can cause the machine algorithm to announce it's a fire truck. And your first reaction is, how could that possibly happen? I mean, it doesn't look like a fire truck; I can tell it's a cat. The answer is that the machine learning algorithm itself may be heavily dependent on certain pixels in the image to come to its conclusion. And slight changes in those few very, very important pixels may cause it to go off to some completely crazy conclusion. The fact that these algorithms can be brittle means that we should be very careful about simply accepting whatever decisions they make. I would look for evidence before reaching that conclusion: look for other correlations, other evidence that this decision is correct. That's why we can't escape, I think, injecting human judgment into the picture, if only to be assured that we haven't gone off into a clearly crazy conclusion.
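The brittleness Cerf describes can be illustrated with a deliberately tiny toy: a classifier that leans heavily on a few inputs can be flipped by a small change to just those inputs. Real adversarial examples attack deep networks with carefully computed perturbations; the weights and "images" here are invented purely to show the mechanism.

```python
# Toy brittleness demo: a linear classifier whose decision depends almost
# entirely on one "pixel" flips its answer when that pixel is nudged.

def classify(image, weights, bias=0.0):
    """Label an image from a weighted sum of its pixel values."""
    score = sum(w * x for w, x in zip(weights, image)) + bias
    return "cat" if score > 0 else "fire truck"

# Most pixels barely matter; pixel 3 carries almost all the weight.
weights = [0.01, 0.01, 0.01, 5.0, 0.01]

cat_image = [0.9, 0.8, 0.7, 0.6, 0.9]   # classified as a cat
perturbed = list(cat_image)
perturbed[3] -= 1.3                      # small change to one critical pixel
```

A few such over-weighted features are exactly what lets a slight, human-invisible change push the model to a crazy conclusion.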

RS: Bias in algorithms?

VC: In the machine learning world there is a training mechanism that's used. If it's an image classification application, you present the image, the system decides, you know, this is a cat or a dog or something else, and you tell it yes it is or no it isn't. That's a supervised training method. There are also things called unsupervised learning, where the system keeps looking at images and compares them and tries to separate and distinguish them into different categories of its own. I think that we don't fully appreciate yet how biases in the training material can lead to biases in the decision-making. We may not even notice that there is bias. One question is how do you tell if the training mechanisms are biased, if the training samples are biased. So you see a whole branch of machine learning and artificial intelligence trying to detect potential brittleness in the system and biases in the system. And once again, unless you have the ability to observe the system making choices and draw your own conclusions based on your own experience, you may allow the system to make mistakes. Now, human beings make mistakes too. And so we should not allow ourselves to say that perfection is the only acceptable standard for these kinds of algorithms; in fact, if the probability is higher that the algorithm does a better job than a human does, then, if we have evidence of that, we might accept the machine algorithm over a random person's choice. But always recognizing that the algorithms might in fact be either biased or brittle.
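The two training regimes Cerf contrasts can be sketched side by side: supervised learning predicts from examples that carry labels, while unsupervised learning groups unlabeled data on its own. The one-number "features" below are invented; think of each as a single measurement extracted from an image.

```python
# Supervised vs unsupervised learning in miniature.

def supervised_predict(examples, x):
    """Supervised: labeled (feature, label) pairs; predict by nearest example."""
    feature, label = min(examples, key=lambda pair: abs(pair[0] - x))
    return label

def unsupervised_cluster(features, k=2, iterations=20):
    """Unsupervised: no labels; group features around k centroids (1-D k-means)."""
    centroids = sorted(features)[:k]  # crude initialization
    for _ in range(iterations):
        groups = [[] for _ in range(k)]
        for x in features:
            nearest = min(range(k), key=lambda i: abs(centroids[i] - x))
            groups[nearest].append(x)
        centroids = [sum(g) / len(g) if g else centroids[i]
                     for i, g in enumerate(groups)]
    return groups

labeled = [(0.1, "cat"), (0.2, "cat"), (0.8, "dog"), (0.9, "dog")]
guess = supervised_predict(labeled, 0.15)
clusters = unsupervised_cluster([0.1, 0.2, 0.8, 0.9])
```

Bias in the training data hits both paths: mislabel the examples and the supervised predictor inherits the mistake; skew the sample and the unsupervised clusters form around the skew.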

RS: How do you think those two issues of AI and machine learning will impact or affect citizenship in the 21st century?

VC: So this is a really interesting speculation, because it's true that for increasing numbers of applications people are injecting more AI, or more typically machine learning mechanisms, into the system. And they do remarkable things. I mean, speech understanding has improved dramatically with the introduction of machine learning. Speech generation has improved dramatically as well with the introduction of machine learning. And we've harnessed this, Google and others, in many very beneficial ways. Language translation, for example, benefits enormously from machine learning tools. So I'm back to this conundrum: we see the mechanisms doing extremely useful things, powerful things, dealing with scale that you and I couldn't handle on our own. But at the same time, hiding in this whole system is the possibility that it's wrong. Some of the research that's going on right now, here and elsewhere, is related to understanding how these kinds of things can go wrong, how mistakes can be made, how analysis can come to the wrong conclusion, in order to assess whether a particular algorithm which has been trained has the possibility of failure, and what the nature of the failure is. The risk of hazard from a failure is going to vary dramatically depending on the application. So in some cases you don't care: a mistranslation in a casual interaction might be perfectly okay; you might clear it up by additional interaction. On the other hand, suppose that there is a negotiation going on and you're trying to reach a certain compromise, but the translation misleads one party into thinking that there is agreement when the other party doesn't think so. We have to be conscious of how these various systems can fail in order to know when we should adopt additional tools and additional practices to reassure ourselves that we haven't relied on something that made a wrong choice.

RS: Safeguards.

VC: Safeguards, yes, exactly. And we don't even know where the guardrails need to be yet. We're in a state now where we're seeing astonishing results with machine learning, things like the AlphaGo program from DeepMind. The one that won four out of five games of Go from Lee Sedol was a dramatic event. Go is a really hard game: it's 19 by 19, as opposed to eight by eight on the chessboard. And yet the system was capable of winning. This is a very deep, you know, narrow and deep skill. Then my colleagues at DeepMind built what was called AlphaZero, which trained itself to play. We gave it no advice, nothing, no examples of human play at all. And it came up with tactics that no one who had ever played Go before had ever used. It was sort of shocking to the grand masters of Go to see some of the choices that the machine algorithms made. So I'm actually enthusiastic about the use of these technologies, because they have been remarkably adept at dealing with diagnosis and dealing with correlation and machine translation and the like. But I'm equally nervous about our lack of knowledge of the ways in which these things could fail. And so we need to be very thoughtful about detecting these kinds of failures.

RS: Thinking about human decision making of the future, particularly as it relates to citizenship: could machine learning, AI, make a better decision about who would be a better mayor or a better president or a council member than leaving it to humans?

VC: Well, it is easy to imagine humans relying on computer-based analysis and decision-making in the belief that it's more accurate than any human being could be. We can fool ourselves very, very quickly into assuming that it must be right because it's the computer, and computers never make mistakes. The important lesson is they do make mistakes, and we need to understand how to detect that. So we're back to a really interesting phenomenon. I see these things as tools to do things that I could not do as a human being, just at the scale at which these machines can do work, analyzing billions of texts or images or what have you. I couldn't do that on my own. And so I see this as a tool. What I don't want is to rely blindly on the output of these various algorithms without having a deeper appreciation for the ways in which those algorithms could be misleading. And so this is another moral obligation that we have: not to allow ourselves to be dictated to by the machines, but to see them as tools.

RS: What we see is basically shaped by the inputs we give the platform, and it spits back what it thinks we ought to see, based on data that may or may not be true. Does that worry you?

VC: The notion that these systems have essentially feedback loops that guide us in directions that we have a proclivity to go in is a very real issue. This is what people call the echo chamber effect, confirmation bias. Things like this, by the way, happen in all realms; it's not computer-based stuff exclusively. We all suffer from confirmation bias, including scientists. You know, you have a theory, you do an experiment, the results come out almost right, except for this one data point over here. Some scientists will dismiss the oddball data point and say, probably measurement error, and go on happily with his or her theory. The true scientist will say, that's funny, and then try to figure out where that data point comes from. That's often the person who gets the Nobel Prize, the one who discovers that our theories are wrong and that this data point is the indicator that we need a new theory. It's so easy to relax into a belief that if the system is almost always reinforcing your theory, it must be right, and you begin to imbue it with increasing amounts of authority. There is a related phenomenon. Sherry Turkle at M.I.T. talks about this in her book called Alone Together. The first part of her book is about how people relate to robots, and in particular how they relate to humaniform robots, robots that look like they have a face. What happens is that people project onto the robot a belief that it has a great deal of social intelligence that it actually doesn't have. And so as they're interacting with this thing, if it seems to ignore them they get upset, because they think the robot is ignoring them, they think they've been insulted or something. Of course the robot has no idea what's going on, because it doesn't have the social intelligence that you thought it had. It wasn't ignoring you, it just didn't know any better.
The fact that people are willing to imbue these things with such depth means that we could easily fall into the habit of assuming that the machine knows what it's doing, and that it did what it did and we should accept that, or be angry about it, or whatever other reaction we might have. By the way, there's a tangent here which is very interesting, to me anyway. That's the one where you have increasingly human-looking features on robots, until it gets to the point where a robot is so human-looking that it looks to you like a dead zombie that's come to life. This is called the uncanny valley, where you cross into really creepy territory. And so people are actually more comfortable with cartoonish kinds of human-form robots, as opposed to things that are so realistic that they look like corpses come to life. 

RS: How many times a week, if not more often, do we scream at the car and say, that damn car? It's a machine.

VC: Yes. We've often had bad words to say about how stupid the machine was. Now in some cases it's actually true that the program running in the machine was stupid, because the programmer who wrote it didn't take into account all the various cases, and as a result the machine does something really dumb, and your reaction is, that's dumb. And it is dumb, but it's not the machine, it's whoever programmed it who didn't get it right.

RS: Is AI a necessary resource going forward?

VC: My sense right now is that a good citizen recognizes the limitations of online environments and understands some of the feedback loops those online environments produce. Let me give you some examples. Think about the reward structure for some of the social networks. You get rewarded on YouTube by how many views whatever you put up got, or the number of followers you have on Twitter, or the number of retweets of your tweet. Or, I guess on Facebook, how many likes did I get. What's interesting about the dynamic is that it's often extreme content that generates the biggest reactions, and it may very well be that these social media feed extremism, if the content you're looking for is the kind that garners the biggest metric number, whether it's views or likes or something else. So we may have inadvertently built into some of the online media, social media, feedback loops that drive extreme behavior. And I think it's incumbent on us to understand that, to recognize it, and to cope with it. Whether it's just you and me recognizing that we're reacting to this feedback loop, or the providers of the service recognizing that we've created those feedback loops, I think the important part of the moral landscape here is to recognize some of those phenomena and in fact deal with them.

RS:  Can algorithms help us navigate moral questions? 

VC: So the question is whether algorithms can help us navigate moral questions, and I am resistant to that notion. To give you an example of what I consider to be a false dichotomy, there's a classic conundrum: the self-driving car encounters a car full of nuns and a bus full of children, and it's going to run into one or the other of them with dire consequences. How does it decide? Generally speaking I consider this to be a false dichotomy, because the algorithm may not know that it's a car full of nuns and a bus full of kids; all it knows is it's a car and a bus. And the best way to program this is to program it to avoid running into anything, as opposed to deciding, I will actively decide to run into the nuns because they're older and therefore they-- deserve to live less than the children do. I don't think you will ever see programming like that, because I don't think the algorithm will ever have enough information along those lines to be confronted with that decision.

RS: Can algorithms be self-governing?

VC: Well, there's an interesting question. Can algorithms be self-governing? The answer is sometimes, and it depends on the way in which an algorithm could fly out of control. If by governing you mean keeping it within some range of behavior, and if there is a measurable way of detecting out of control, then the answer is yes: if you can measure out of control, and you can sense that the algorithm is making decisions and choices that are harmful, you should be able to detect that and do something about it. The real problem is how you detect out of control or harmful. If it isn't measurable in some algorithmic way, then we may not succeed in detecting that the algorithm has gone off into a space we didn't expect. So measurement here is really important. And it's funny how important measurement turns out to be in general. If you think about the advancement of science, it's almost always the case that we learn new things when we are able to measure something more accurately than we ever did before. And so it's this measurement capability that drives, you know, new models of scientific thinking. If you can't measure it, it's not clear what results you can get.
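Cerf's criterion, that an algorithm can be governed only if "out of control" is measurable, can be sketched in a few lines. Everything below (the function names, the toy drifting algorithm, the [0, 1] acceptable range) is an illustrative assumption, not anything described in the interview:

```python
# Sketch of a measurable guardrail: the algorithm is allowed to keep running
# only while a monitored metric stays inside an acceptable range.

def guarded_step(step, metric, lower=0.0, upper=1.0):
    """Run one step of an algorithm, then check a measurable guardrail.

    step   -- function advancing the algorithm and returning its new state
    metric -- function mapping that state to a single measurable number
    """
    state = step()
    value = metric(state)
    if not (lower <= value <= upper):
        raise RuntimeError(f"metric {value} outside [{lower}, {upper}]: halting")
    return state

# Example: a toy "algorithm" whose output drifts upward each step.
state = {"x": 0.0}

def step():
    state["x"] += 0.3
    return state

def metric(s):
    return s["x"]

steps_completed = 0
try:
    while True:
        guarded_step(step, metric)
        steps_completed += 1
except RuntimeError:
    pass  # the guardrail detected the drift and stopped the run
```

The hard part Cerf points to is hidden in `metric`: the sketch only works when "harmful" can be reduced to a number that can be checked on every step.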

RS: Do you think algorithms can be self-adapting? What is the scope of that adaptation?

VC: So almost every piece of software that does something has a kind of limitation on the scope of what it can adapt to: how many different parameters you can have, and what's the space the algorithm can fill. I think you can build adaptable algorithms, but almost always they have to be adapting to situations they have accounted for already, that are anticipated. With unexpected events, if they had not been anticipated, it isn't 100 percent clear how an algorithm will know what to do, because it doesn't recognize the state that it's in, which is the state it didn't want to be in. There are situations, for example in the case of a self-driving car, where you can tell you're going to run into something. The algorithms can be pretty clear that if you're going at X miles an hour and it's so far away and you don't slow down, you will hit it. Whatever it is, for some value of it. Those are fairly well-defined ways of discovering that you have a problem. It's much more subtle when it isn't simply calculable that some unacceptable thing is going to happen. You might imagine, suppose that you're using your algorithm to decide whether somebody gets out of jail. It's not always clear that the algorithm has done something harmful or hazardous; there isn't any simple metric for that. Unless you have a much longer feedback loop, where you impose an algorithm, then measure the results over a period of time, and then look for bias. And so you can imagine loops within loops, where you have algorithms that are looking for bias based on the results of some other algorithm running. That I can imagine being helpful, but you have to be very thoughtful about how you detect that something is biased. If you have no examples of the bias, it may be very hard to detect that some exists.
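The well-defined case Cerf describes, where speed and distance tell you a collision is coming, reduces to a short stopping-distance check. The function name and the deceleration figure below are illustrative assumptions, not anything from the interview:

```python
# If the car cannot stop within the remaining distance, a collision is
# calculable in advance. Stopping distance under constant deceleration a
# from speed v is v**2 / (2 * a).

def will_collide(speed_mps, distance_m, max_decel_mps2=8.0):
    """Return True if braking now cannot avoid hitting the obstacle.

    speed_mps      -- current speed in meters per second
    distance_m     -- distance to the obstacle in meters
    max_decel_mps2 -- assumed maximum braking deceleration (illustrative)
    """
    if speed_mps <= 0:
        return False  # not moving toward the obstacle
    stopping_distance = speed_mps ** 2 / (2 * max_decel_mps2)
    return stopping_distance > distance_m
```

The jail-release example in the same answer has no such closed-form check, which is exactly the contrast Cerf is drawing: there the "metric" only emerges from a long feedback loop over many outcomes.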

RS: Could AI engage with the goal of optimizing love for God or a neighbor in a given context?

VC: What a fascinating thought. Could an algorithm somehow improve my ability to relate to other people, relate to God. Whatever God it is that you might be interested in relating to...

RS: Suppose a group of citizens has the aim to love God or to love a neighbor. In hard situations they wish to enlist the help of AI. Could AI engage with the goal of optimizing the love of God?

VC: Well, let's suppose for the sake of argument that an algorithm is keeping track of the decisions that you make. And we have to make up some things here. We have to decide how the information about the decisions gets to the machine, and how the information about the consequences of the decisions gets into the machine. If you decide, for example, not to hire so-and-so, and so-and-so ends up in poverty and on the street and homeless and everything else, is it possible that you made a bad decision? It will be very hard to detect that you made a bad decision unless you were able to track that party after the decision got made. It's possible that the side effects are a long time away, and sometimes they are instantaneous. Suppose you're a surgeon and you're doing what the machine is urging you to do, recommending you do, and it turns out you just cut something important like the vagus nerve or the facial nerve or something like that. It may be instantaneously clear that something bad has just happened and that you were guided there by a machine. If the feedback loop is near enough in time, then you could easily imagine an algorithm that's helping you either predict or detect that you've made a decision that was harmful. If the feedback loop is too long, you may not even be around before it's discovered that a particular algorithm produced a bad outcome or had a cascade effect. And so figuring out how to detect some of those negative results may turn out to be just as hard as figuring out how to make the algorithms work in the first place. 

RS: If you think of some of the positive ways that AI is already promoting human flourishing, what would you say? 

VC: I think that many existing AI algorithms have in fact produced just astonishingly beneficial results. To give you a few examples: the ability to do automatic real-time translation of language, even if it's not perfect, engages people in conversations that they could never otherwise have. The ability to translate texts allows me to experience and read and learn things that I might not be able to learn because I can't speak that other language. If I'm trying to do financial planning, you can imagine algorithms, based on an enormous amount of experience about the conditions under which certain choices were made, that might help guide you to making choices that are very beneficial for you. Think about medical diagnostics and things like that. So there are lots and lots of cases where I think these algorithms could be enormously beneficial. The only issue that I continue to have with these things is the brittle-- the potential brittleness of the choices. I think there's one other thing that's very important here, a point to be made. Much of what we see today and call AI is a very limited form of machine learning and algorithmic process. Many people are really after what's called general artificial intelligence. The thing that human beings do, that I don't believe any machine algorithms can do yet, if they ever will, is to take a very small number of samples of the real world and generalize from that, create a model, and allow you to think about and reason about not only the model but possible changes in that model, and what the consequences of those changes would be. To give you a simple example of how readily people do this generalization: imagine that you're a 2 year old or something, and you're presented with a table that has glasses and dishes and things like that on it. 
It doesn't take very many examples of that to recognize, implicitly, not explicitly, that this is a flat surface perpendicular to the gravitational field, and anything that satisfies that requirement can be a table. And so you quickly generalize, not thinking in those terms at age 2, but recognizing that fact: flat, perpendicular to the gravitational field, means a chair can be a table. A box can be a table, your lap can be a table, because it satisfies those requirements. Humans are astonishingly good at abstracting the real world, building models, and then reasoning about how those models work and what would happen if they changed the model, by which I mean changing the real world to match a different model and predicting what might happen. Speculation, imagination, invention, innovation come from this ability to generalize, analyze, and change. No machine to my knowledge has that capability except in extremely narrow circumstances: propositional logic, first-order predicate calculus, maybe. There are few examples of that. Anything else that a typical two year old can do, you can forget it. The machines are terrible at it.

NO: I'm Nuala O'Connor, President and CEO of the Center for Democracy and Technology. We are a 501(c)(3) charitable organization, organized around the principle of human dignity and human rights in the Internet age. So we fight for privacy, freedom of expression, freedom of association, freedom from government surveillance, and any number of online rights issues like net neutrality and copyright protection, all sorts of stuff. We are funded by foundations, by companies, by individuals, by fundraising events. We have no one major donor, and we stand for all human individuals.

RS: New models of citizenship in a networked age?

NO: Well, the Internet has been the great democratizer of individual voice and of many freedoms, many forms of freedom. I would probably say that with freedom comes responsibility, for all of us as individuals and as leaders of companies or organizations. The potential, I think, for the Internet and Internet-enabled technologies to further democratic institutions of governance and values and principles is still probably unparalleled among technologies. Not since the printing press has there been a voice-enabling and amplifying mechanism that so profoundly reshaped the relationship between the individual and the information they seek to disseminate or receive. But with that destabilization have come all sorts of unintended consequences. 

RS: Such as?

NO: The conversation we are having right now about conversation online: about who is responsible for the creation of community and community norms, and norms around how information is elevated or disseminated; and frankly, about the very algorithms and the very institutions that run the spaces online, which are largely still private-sector spaces, and what responsibilities and roles and rights they have regarding the information they share with their individual citizens or users or end users. The Internet has profoundly reshaped the relationship of individual to state, individual to company, individual to information, in largely positive ways, I think. But the unintended consequence of the greater access to information and the greater ability to reach more listeners is that every piece of information is given equal weight on the Internet. I always like to say everything can look exactly like the New York Times, right? You can use the same masthead, the same font, the same size product, and it looks exactly the same to a reader. And so I think there's a really great conversation happening right now about journalism and the Internet, and what the new dissemination norms are, and who should bear both responsibility and profit from those models.

RS: Define for me what your understanding of citizenship has been and what is in the 21st century knowing the technology that we now have?

NO: I think citizenship, at least as we think about it in the United States, is far more of a participation sport than perhaps people had thought of it as in the last several decades. I think perhaps we had gotten a little complacent, assuming the institutions would run with little interference or little input from average citizens or voting citizens. I think at a minimum we are privileged to live in a country where people have the right to vote, and they should exercise it. But even more, we now have information at our fingertips about how agencies at the federal, state, local, and municipal level are running, and we all need to exercise our duty of care to democracy. I like to say we're having a national conversation about democracy: are you for it or against it? And I think it's time to take a stand. I'm certainly for it. I think there's no better organizing function or principle for how humans agree to disagree. The Internet unfortunately makes that faster and louder, and I think it may in some cases not elevate the best speech. When I say the best, I don't necessarily mean the best speech in terms of how it is presented, or even speech I agree with, but simply speech that is productive to furthering constructive dialogue and positive outcomes for the democratic institutions that I believe we all hold dear.

RS: How has technology reshaped citizenship?

NO: I can think of a very positive example, at least I think it's positive: in the net neutrality filings, millions of individuals in this country wrote letters, sent emails, sent one-word messages to the Federal Communications Commission saying, we want this. We believe in this, we believe in an open Internet and the minimum amount of friction or interference. I don't necessarily think we want everything to run by plebiscite, but I do like that the Internet and Internet portals make people feel, and actually be, more connected to decision makers, and lower the barriers, lower the differential, between people in positions of power and people who live far, far away from Washington, DC, for example. What it also means, though, is that people can get really incorrect information; what's the old saying, is it Twain who said a lie can go around the world before the truth puts its pants on? The challenge, I think, for all of us is to be both better consumers of information and maybe more discerning readers. And so I think we've got a huge road ahead of us in terms of media literacy and digital literacy for an informed citizen, an informed electorate.

RS: It sharpens the need for an informed electorate. Why is it different today?

NO: I think one of the challenges of living in the Internet age and the digital age is the glut of information. Entire books have been written about glut and about how we ferret out information, not only information that enhances our confirmation bias but information that actually helps us understand the world we live in. I think that was one of the kind of aha revelations of the Cambridge Analytica disclosures of early 2018: people realized their world view was not necessarily being enhanced by seemingly trivial information that was collected about them, fed into a psychometric profile, fed back into the algorithm, deciding not only what advertising they were potentially seeing but also what content. And resulting possibly in the closing of the aperture of what they understood about, in this case, the election in the United States, or the Brexit vote in the UK, or other consequential events for not only their personal lives but their political and social organizing lives as well. And so I think people are waking up to the realization that minute details about them, things that seemed entertaining and funny and trivial online, could be being used to decide what they were seeing and knowing and understanding about the world. We are absolutely citizens. Here at the Center for Democracy and Technology we are organized around the principle that individual human beings deserve dignity and agency and autonomy. So we will go down fighting for the rights of the individual, not only in the digital age but in political systems and societal organ-- organizing structures. I think that we are citizens, and I think our citizenship should inform not only the things we do and say and engage in in the public discourse but the decisions we make in institutions as well. 
I mean, I think if you go back to old Socratic criticisms of democracy, it was that individuals would not necessarily participate and engage, and that institutions might act against the interests of a democratic system of government. I think we have to decide, again, are we for it or against it, and are we acting in our personal or professional capacities in furtherance of democracy, which is messy and not a spectator sport.

RS: Why are we even having a debate about whether you're for or against democracy? 

NO: I don't think that 200-plus years into this experiment we are actually all that mature a democracy. I think also every generation needs to regenerate its idea and its construct of what democracy means and looks like. And I think that technology, in this case the sweeping changes of the current version of the Internet, which is an Internet-enabled everything, and the future versions of artificial intelligence and really sophisticated and embedded technologies in the dashboard of your car, in the walls of your house, in the walls of your child's school room, these are consequential changes in how we relate to the world around us and how we relate to the institutions that we believe serve us. And I think, as I just tweeted out this morning, embedded not only in the algorithm but in the architecture and the infrastructure of these systems, and in the institutions that are a part of the Internet ecosystem, are inevitably the biases of the creators themselves, the individuals who program the computers or set up the systems or whatever. That's not an aha, that's not a great revelation, nor is it even a criticism. It is simply a fact that we all come to the table with our worldview formed by who we are and where we've come from. I think we need to stress test these algorithms, these systems, these architectures, as they become so opaque and so embedded in our lives that we take for granted that the system runs, that the lights turn on, that, you know, the coffee maker turns on at 5:00 in the morning. I love that part of it. But are these devices we are creating and embedding in our lives serving all of our needs, and are they serving equality and democratic principles? Or are they embedding and reinforcing biases that we have all clung to for many years in our lives?

RS: These algorithms may have biases we're not aware of?

NO: I think that's right. And again, I hope to take the temperature down on that line of reasoning, because there is a lot of yelling and finger pointing and criticism that the biases reflect a particular gender or race or ethnicity or national origin, and that may all be true, given some of the educational structures in our country and elsewhere. But I also think, again, it's simply a fact: we each bring to the table our experience as human beings. I think the more diverse the human beings and experiences we have at the table, including political and economic viewpoints, and points of view about where in the world and what values need to be served, the more that enhances and enriches the creations we are offering in the technological world. What I worry about is, as these devices or decisions become embedded and opaque to the end user, or so fast and so automated, which is a benefit of the technology but also a peril, that we don't even have the time to question, or that we're not even aware, that what we are being fed or what we are receiving, whether content or action, is profoundly different from what someone of a different race or gender or ethnicity receives.

RS: How do you maintain good citizenship in an age of increasing decision-making by machines? 

NO: I think the first thing is to exercise some discretion about the machines you choose to rely on. I know that in recent years I have very intentionally tried to diversify not only my news sources but the media, and I mean that literally, the media through which I receive them. Meaning I had gotten a little lazy, perhaps, getting my news and information and newspaper stories via several major Internet portals with which we are all familiar. After our recent round of elections, I actually elected to get my newspaper delivered on the front doorstep again, and the first Sunday that the several newspapers I signed up for again in paper form arrived, my then 6 year old son said to me, "Mommy, what's that?" I didn't realize he had never seen a paper newspaper delivered to the front door. I had been getting my New York Times, my Wall Street Journal, my Washington Post, you name it, all delivered to me by email or in the apps on my phone. That isn't-- that was an aha moment. I didn't realize I was so biased in my media and my channel selection. But even more, I think seriously about the content and the choices of content that we all receive. This is not to place all the burden or the blame on the individual, either. I think we really are having a conversation now, at least in the United States and in many parts of the world, about the role that intermediaries on the Internet play, whether they are intentional or unintentional intermediaries of information. In some cases, sources and portals that were intended to be used in one way, socially or otherwise, are now where large portions of Americans and people in other countries receive their news and information about the world, whether or not that was the original creator's intent and regardless of how most people were expected to use the portal. We need to confront what is in front of us and not what we wish it were.

RS: Define citizenship. 

NO: Citizenship to me is the active participation of an individual in the organizing constructs of the social or political institutions in their community. And I think it's important that I didn't say democracy, although I'm certainly biased towards democracy. There are lots of ways to be a citizen, in a micro or macro way, and there are lots of ways to organize a country. I'm biased towards the U.S. brand, since that's where I live and that's what I believe in. But I am also mindful that U.S.-based companies, particularly on the Internet, are exporting a U.S. framework for speech, for societal norms, and for community citizenship, one that is frankly directly at odds with other parts of the world. And again, while trying very hard not to be pejorative about other organizing constructs, we are in tension, and in the Internet space frankly in a major conflict, about how the Internet will be governed and managed and whose values will prevail. Will they be ones of democratic openness, or will they be more top-down government directives? And I'm not quite sure we have won that battle yet.

RS: A machine may know more about how you should vote than you do yourself?

NO: I think it's a possibility that the machines will know more about a lot of things. I'm not sure about the word "should," right? At the end of the day, the should is what you think should be, and hopefully you still maintain agency and dominion over your own brain. But there are lots of examples of where machine learning has improved decision making and outcomes. The example I would use: many, many years ago, when working for a large U.S. conglomerate, I spent some time in the health care division, and they were testing all manner of automated decisioning to help doctors make faster, better, and more judicious decisions about patient treatment. I ended up becoming a guinea pig of this treatment program when I developed some symptoms and some complications; I think I was about six or seven months pregnant at the time. And within minutes the doctor was able to triage my history and what I was feeling, and tell me the likelihood, in terms of percentages based on the-- based on, again, information that had been fed into the algorithm, of what I was likely experiencing and what the outcome would be. It was a breathtaking and riveting example of how bias could actually be ferreted out of decision making, because as an articulate English-speaking female I may have been able to explain my symptoms better than a non-native speaker or a person of a different color who was perceiving bias from a doctor, which has been demonstrated over and over. And so in that case it may have been a way to enable equality in outcomes, and faster and frankly lower-cost treatments across the total U.S. healthcare system. So the real question is, what are we trying to solve for? That's what I always ask folks who are creating things. What's the problem you're trying to solve? 
And I remind people, in a more trivial example, you know, I'm a huge fan of some of the driving apps and the online travel apps that get me where I want to go faster, to the point that I'm starting to forget street names and how to read a map, right? But they're only solving for one thing, and it's speed. You have to understand what your app or your algorithm is solving for. It's not solving for the scenic route. It's not necessarily even solving for no tolls or yes tolls or whatever, unless you program it that way. It's solving to get me from point A to point B as fast as humanly possible, within, you know, speed limits and normal lights and whatever. I think disclosure and transparency are necessary, not about the ones and zeros but about what we are trying to solve and what values we're trying to solve for. Let me rant a little bit more about one example, in terms of news and information feeds on social media or other platforms that are designed for perhaps louder, faster, better, or more of what you like. More of what you like is probably pretty good if you're trying to provide advertising that gets you more advertisements for red sweaters. If you've been searching for red sweaters and you're more likely to buy a red sweater, that's terrific from a commercial standpoint. But that's a value proposition that's woefully mismatched to informing people about democratic processes. It's just a mismatch. And the first step, I think, is to own up to what the embedded values in the algorithm are, setting aside even unintended or implicit bias. What are you trying to solve for? And what's the point of this query? If it's more stuff or more content that matches what you've searched for before, that's confirmation bias. But it's not opening up the aperture of your understanding of your democracy.
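O'Connor's point that a driving app "is only solving for one thing" can be made concrete with a toy route search: the same road graph, optimized under two different objectives, returns two different "best" routes. The graph, the costs, and all the names below are made up purely for illustration:

```python
# The objective you hand the algorithm, not the map, determines the answer.
import heapq

def shortest_path(graph, start, goal, cost_key):
    """Dijkstra's algorithm; edge costs are chosen by cost_key."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path, cost
        if node in seen:
            continue
        seen.add(node)
        for nxt, costs in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + costs[cost_key], nxt, path + [nxt]))
    return None, float("inf")

# A tiny made-up road network: the highway is fastest but tolled,
# the back road is slower but free.
roads = {
    "A": {"highway": {"minutes": 10, "tolls": 5},
          "backroad": {"minutes": 25, "tolls": 0}},
    "highway": {"B": {"minutes": 10, "tolls": 5}},
    "backroad": {"B": {"minutes": 20, "tolls": 0}},
}

fast_route, _ = shortest_path(roads, "A", "B", "minutes")  # solves for speed
cheap_route, _ = shortest_path(roads, "A", "B", "tolls")   # solves for cost
```

Swapping `cost_key` is the whole story: the code never "prefers" the highway or the back road; it only optimizes whatever single value it was told to, which is the mismatch O'Connor describes when an engagement metric is asked to inform a democracy.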

RS: Facebook. What does it mean to be a good citizen?

NO: I think the first step is recognizing what you have wrought, or what you have created. I think the conversation we're having right now reflects the fact that many platforms, many online spaces, have morphed from a purely frivolous or trivial or social context to a much more meaningful information-sharing platform. That's a good thing; that's some power, and also some responsibility. The challenge, I think, for private-sector actors in this space is that your first-order interest is to serve customers and shareholders and create value in a-- in a capitalist system. But there are second- and third-order interests of your customers. And if one of those second- or third-order interests is the creation of democratic values, and the furtherance of true and accurate or productive discourse, or values that align with democratic values and institutions either in the United States or elsewhere, then a lens and a real kind of microscope has to be put to the outputs as well as the inputs of the algorithmic decision making of the AI. I think it's fair to say that many, many of us, many people in tech policy and in the Internet space, were slow to recognize the shift. But not everyone. There have been people calling for greater scrutiny of and greater accountability over how systems and major networks are programmed, starting with transparency. And I think social and human rights constructs and values need to be embedded not only after the fact but really from the very beginning of software development life cycles and product development life cycles. That is some of the greatest work I think we do here at the Center for Democracy and Technology: helping companies who are really struggling with the question of what is the impact of this thing that I have created, or that I'm about to launch. How is it really going to be thought about or used or received? 
I think there is a long thread of people creating tools of any kind, offline or online, and having those tools used for malicious intent. That does not necessarily mean that every product is bad or every use is bad. But if there are reasonably foreseeable or large-scale trends towards use that is harming the democracy, or harming individuals, or, the big trend I also see, resulting in offline harm stemming from online conversations, I think there is some social responsibility, some corporate civic responsibility, for Internet companies, just as there has been social and corporate and moral responsibility for industrial companies in the industrial age. I worked at General Electric for a long time, and I'm very proud of my service at a company that is in a very different place right now. They were an industrial company and they had to clean up the Hudson River. It was a predecessor, I think, many, many decades before any of us had worked there, but that company was held to the fire: your predecessor did harm to this environment, what are you going to do to fix it, and how are you going to make it right? I think those are some of the same questions we need to ask in the digital age about data and information. 

RS: Citizen of the world?

NO: Well I love the idea of it and I have long said that the Internet has broken down geographic barriers, has broken down traditional constructs of government and nationality in potentially very positive ways: that I can connect with my family in Northern Ireland, that people across the world can collaborate on business or other initiatives, that again levels the playing field and creates connection and community across geographic boundaries, is I think a positive and a healthy thing. It's also scary for people who run governments and who want to continue to maintain control over their geographic boundaries and the citizens within that limitation. And that's I think part of the-- the real tension we're seeing right now with data localization laws or mandates that the Internet must be stopped at the borders of a particular country. And I think the really existential crisis between the U.S. and China is whose norms and governance-- governance will govern the Internet and how people talk and communicate. And again I don't think that's a settled answer that the US will prevail or that there will continue to be one Internet. I think that's the thing that, that's the, the idealized goal that may somewhat be in peril as more governments feel threatened by the extraterritorial reach not only of largely US based multinational companies but also of their own citizens to get information and reach sources of power outside of traditional government structures. So I think it's a huge source of tension right now. I don't think it's a settled question.

RS: Social platforms have more citizens than any country...

NO: Well I saw a statistic and I'm gonna get it wrong but it was citing major world religions that were much larger in terms of population than any one national country. And I think there were you know Christianity and Buddhism and Judaism and Facebook. Right. So Facebook alone had 2.5 billion users. I think last year. That profoundly destabilizes traditional notions of who is governing. It also begs the question of who's governing because the private sector actor in this case, the platform creator, gets to set the rules and rightly so it's a private sector actor. But when you are setting rules and norms about how 2.5 billion people communicate with each other that is a very, very heavy burden to bear. And also you are crossing countries and languages and norms and ways people relate to each other that may be very different from our own coming from the US and may not be fully understood. It's a weighty, weighty responsibility and I do think people have asked me do you think things feel different now in the tech sector than they did a few years ago. And I think without question there is no one I talk to who is not thinking very hard and very seriously about this responsibility. I think there are a lot of different answers on where this is going to end up and how we're going to get it right. 

RS: Is AI a resource for good citizenship in the future?

NO: I'm not sure AI is either necessary or unnecessary, I think it is inevitable. I think the onward march of technology -- barring some massive catastrophic environmental effect, and we should all be worried that the power grid is going to go out, because this is all going to be irrelevant if we can't communicate with each other and then we might actually have to go back to reading books on paper and talking to each other again -- has, has some benefits as well. So I do think it's neither a-- it does not necessarily have to be a positive or negative, but it is, it is largely inevitable in many walks of life, at least in advanced economies like the ones we're talking about.

RS: And inevitable because this sort of technology is just embedded itself in everything. 

NO: Yes and I think there is an insatiable desire on the part of creators and also individuals to advance utility and efficiency and experiment with these great technologies, hopefully to make people's lives better. And I think that's something we can remember is that there are people working on these issues hopefully for the greater good in education and health care and in government. The challenge is making sure that we all agree on what that greater good looks like.

RS: Engineers-- taking into account the moral component?

NO: More and more you see a push for, and we at the Center for Democracy and Technology have also advocated for, greater ethics training, greater understanding of the moral responsibility of the creation of embedded technologies in people's daily lives. And even as a first step, greater transparency about how these technologies work, how they are embedded, where they are embedded, the decisions and the outcomes that are sought and that are actually occurring. So they're-- you're 100 percent right to say that engineers have been creating new technologies for as long as there have been engineers. I think it's no longer satisfactory to say well, I just built it, I don't have to think about how it's going to be used. I think that's an inadequate answer for anyone who creates anything, whether it's a house or a family or a device or a car or thing. I think you are responsible for what you put out into the world to, to the extent that you are able to at least articulate that it is there to serve some greater good.

RS: You think human decision making or algorithmic decision making is more likely to maintain moral truths?

NO: At the end of the day I think I'm maybe even a Luddite when I'd say I still have my, my bets on the humans to get this right. We are though, our algorithms are and devices are a reflection of our own bias, right. So they will reflect the moral code that we embed in them so it doesn't necessarily have to be a better or worse, but I still want the, I want the br-- the human foot on the brake for a lot of these devices.

RS: The positive aspects?

NO: I do think profoundly the ability to access and interpret more information more quickly and more openly than we've ever had before is one of the biggest benefits of Internet access and Internet enabled technologies. I think to the extent that we can ferret out bias in our human decision making as well as in our artificial decision making, the combination of human and machine learning and engagement can benefit the greatest good and the greatest number of members of society. The examples I think of are both in education and in health care. There's a lot of fanfare around education technologies being used to intervene: in the third grade, if someone scores this score on their math test they're at greater risk of dropping out of high school, and if we intervene early we can keep them in school and create a more literate, more engaged, more participatory workforce and electorate. That's great as long as that's how those test scores are going to be used. If it's going to on the other hand be used to say these kids don't deserve a shot at high school, let's just let them linger and languish over there while we focus all of our resources on the top 10 percent or whatever, that's not a decision point that I would be signing up for as a, as a human being, as a member of a society that would like the, the majority of the electorate to be informed and engaged in our democracy. So it's not nec-- so I think the combination of having a teacher who says there's a tool here that says Johnny is at risk because his math scores are going down or he needs a little extra help, but also the wisdom of, as I always use, the librarian who says yeah, you know, that algorithm says that that girl is 6 and she likes pink so she wants to read Barbie books, but I think she's really into dinosaurs, you know, or I really think I'd like her to read Nancy Drew or Jules Verne or something else.
I do not underestimate the ability of human beings to have a positive influence on each other and their worlds. 

RS: Making sure citizens are getting the best information?

NO: I am-- I remain a big fan, I think it was Tip O'Neill who said you're entitled to your own opinion but you are not entitled to your own facts. And I think the onus is-- and that's where I would put some responsibility on the purveyors or platforms or others who benefit from the dissemination of news and information to be editors, to be better editors, better stewards of information. I hope we are not living in a post-truth world and I do think that facts are knowable. And it is-- there are some scary, scary themes about how people are receiving or interpreting information. I do think some of the companies and platforms are experimenting, and I mean that in a good way, with how to help people be exposed to more information without forcing them to recoil and burrow further into their own bias. In addition to alphanumeric or news or written word, we also are about to enter a world where visual and photographs and images can be distorted and disseminated in a way that looks, again, shockingly accurate and real but is not so. And so I think we are really going to have to reshape what looked like journalistic ethics and morals and standards and, and think about what our job is not only as the receivers of news and information but as those companies and institutions that are disseminators of news and information as well.

RS: Does successful decision-making rely on diversity?

NO: I do think better decisions are made, I think there's actually a lot of research to show better decisions are made when there are competing-- not only people of different experiences or categories of humans or whatever, but different viewpoints and vantage points. I think you know I love the stories about Justice Scalia and Justice Ginsburg, who were at different ends of the spectrum on the Supreme Court but who engaged in really wonderful debate and at the end of the day were able to go to the opera together and were able to socialize together because they had such profound respect for each other's intellect. I think a healthy respectful debate is far better for whatever the outcome is, and whether it's product development or, or computer code or whatever, having people stress test it, having a different view of the world than you as its initial creator at the earliest possible moment, I think results in a far better outcome, a far more inclusive outcome than releasing your product into the wild and saying oops, didn't realize it was gonna do that. You know there's been a culture of beta testing in Silicon Valley and I think as we recognize the consequences, the social consequences of some of our devices and, and items in the, you know, in this economic world, there needs to be a little bit more thought before things are launched.

RS: How do citizens acquire wisdom and change from one generation to the next?

NO: I think this begs the question not only of national heritage and identity but also of education, and I think, I worked in education for a while and people are always surprised to see how recent kind of public education and public norms about an informed electorate are, and I think we're at a point, we're at an inflection point in the United States about national narrative. And I know that up close and personal from having a teenage daughter who lectured me at Thanksgiving dinner about what Thanksgiving really meant in the context of American history and why it was not a good holiday that she was not going to celebrate. I as an immigrant, you know, who kind of fiercely embraced the U.S. flag and freedom and opportunity, had a slightly different view, perhaps reflective of my age as well. But I think it's a worthwhile, although difficult, conversation about how we have a national narrative about not just the founding but who we are as a country, wherever, whatever country you're in, that is at once respectful but also not bound by its history and inclusive of diversity. And then how do we inculcate that respect for diversity and also pride of place in a national education system while also respecting that there are, there are great differences in people's attitudes about the subjects that we think of as core curriculum.
I will say one thing, I think that we, we need to think about science curriculum and education for our young people, and I don't necessarily think everyone needs to be a coder, that's definitely the rage right now and I think it's terrific, and I do think we want, want more women and people of color coding, but I think all citizens need to be educated and canny consumers not only of the information they see online but of the actual technology itself, simply even knowing that the algorithm could have bias or that this device that you're putting in your house might be collecting information about you and used to make judgments about you and your next purchase or your next interaction with it. I think we need more awareness. And when I say media and digital literacy I don't just mean about the content, I mean about the consequences of what we are adopting in our daily lives.

RS: How do we reinvigorate citizenship amid these digital cross currents?

NO: Well here's the good news. I think we have. I mean for better or for worse my daughters, my teenage daughters have been on more, more marches in the last two or three years than I've been on in my entire life. So it seems to me that while yes, there's polarization, there is also a great deal of energy I think on both sides of the political spectrum, and hopefully maybe in the middle as well; people are realizing that I think we have taken some of our fundamental institutions of democracy for granted. I think there is an energy around hopefully not only federal but state and local governance as well, and a breaking down of a perceived barrier that it's always the province of the rich or the wealthy or the white or whatever or the male. And I think we've seen some of that in the most recent round of elections in Congress. But hopefully we're seeing it at the local school board level as well. And I am-- I've said this a million times, short-term pessimistic and long-term optimistic that we will get this right. But I think it's taking all hands on deck from whatever vantage point you're in, whether it's government, private sector, individual, consumer of information or just simply a citizen.

RS: Getting it right means what?

NO: Creating-- sustaining the thriving democracy that we all thought we had.

RS: Which you feel is under threat?

NO: There is no question in my mind, democracy as a construct is in peril not only in the United States but around the world. All evidence is that in times of crisis people want a safe and secure, in some cases ruler and the questioning that I think we all have and I think it's always healthy to question your structure, your standard of governance, but I'm, I'm hopeful that we agree that an open and representative democracy is a healthy and productive form of governance that brings the opportunity for equality for all people.

RS: Does technology help us balance that dilemma between hope and fear?

NO: I think it can. I do think that many of our uses, all of our uses, have not necessarily called upon the better angels of our nature and that, and again that's some of the implicit, not just bias but, but I think weaknesses of some of our programming and our algorithms is that it has rewarded louder, faster, better. Or rather it has rewarded louder, faster, and not always better, speech.

RS: Do you with technology have an on-off switch?

NO: It does, technology actually does have an on-off switch. That's actually where I can start on my rant about setting boundaries, and I see this-- again, this is probably why I'm long term hopeful. My friends in the tech sector have already taken maybe somewhat draconian or drastic steps to turn off. Some of them have the luxury of having a weekend place away from a city where they literally have no Internet service, or they're forcing their kids to turn off their devices for an entire weekend. On and off I've tried something called the digital Sabbath, which I think is a remarkable construct. And when we have held true to it, it has allowed for peace and space and dialogue in my household. And even if it's just turning off social media and phones for a day, it has really been a remarkable step forward for our family. We have smaller boundaries around no devices at the dinner table and no devices after bedtime and all that good stuff. But I think the onus is on us to take back our control and I am seeing people do that. My nieces at university have a thing called the phone pile. When they go out to dinner everybody throws their phone in the middle of the table and they don't look at it again until the meal is done. That's bringing back conversation and respect in a way that I'm super hopeful to hear from, from a bunch of teenagers.

RS: Do human experiences help us see things that are good that computers will never be able to see?

NO: Yes. And this is not my original thought. I understand from some of the research that there is a power of forgetting. And I think there's a book called Delete. I'm not sure I could recite the author's name. But the construct is that the human brain is somewhat wired to forget the bad and remember the good. A healthy human brain, anyway. Whereas online, everything is remembered forever and there is no forgiveness. Your pictures and your insults and your words from when you are 12 are given the same weight online as the words when you're 22 or 32 or 42. We're seeing that now in some of our political discourse, and things that are being brought up from people's lives which I'm sure many of them would like to have forgotten, and I'm not excusing bad behavior for sure, but I think that there is a difference between the all-flat, every letter, every email I've ever sent is going to be remembered forever, and I'm going to choose these five love letters because they bring me joy and I'm going to remember those and keep those under the bed. So I do think that maybe we need to embed some-- some permanent forgetting into our algorithms as well.

RS: What citizenship looks like in the next decade or two?

NO: I think this is, as I said before, all hands on deck not only for democracy but for citizenship and whatever structure you're operating in. And I think that each of us in our actions, both micro and macro, whether it's what I choose to read and how I choose to spend my time, or what I choose to forward and share with other people, have to think about the responsibility we have to the individuals we are communicating with but also the social construct that we are a part of. I think time is now becoming some of the new currency and you're seeing that from platforms that want you to stay on longer or create sticky environments. But I think time, especially in the digital age, is one of the most important assets each one of us can choose to spend and I hope we all spend it wisely.

MW: I’m Michael Wear, founder of Public Square Strategies, LLC, and former religious affairs director for President Obama's reelection campaign, and I worked in his White House Office of Faith-based and Neighborhood Partnerships. I was an executive assistant.

RS: How do you define human flourishing?

MW: I think there are many components to human flourishing. But I really think predominantly of two. The first is the ability to live within integrity, the ability to be an integrated and honest person, and then second would be the ability to create towards the good of others, the ability to actually create in a way that helps others flourish. And those two together I think help-- help make up a pretty good model of human flourishing.

RS: When we talk about human flourishing what are we missing?

MW: I think-- depending on who you talk to, I think often we think of human flourishing purely in the material sense, sort of as just another way of saying human success. And there are a lot of successful people who aren't flourishing. There are a lot of people who have access to material goods who aren't flourishing. Certainly the material is an important aspect of flourishing, but really the material is, is important to flourishing to the extent that it allows someone to create and it allows someone to be a person who is living with integrity, which, you know, feelings of scarcity, feelings of fear, are great pressures on being able to live with integrity and to be honest and truthful.

RS: Does the removal of suffering equate to human flourishing?

MW: No, not by, not by any means necessary. The just removal of suffering is an act of justice. But there are unjust ways to alleviate suffering through obfuscation, through manipulation, and those only com-- compound the problem. Those don't lead to flourishing at all. And so, and so it's thinking about the means of removing suffering which in this world is sometimes just not possible to remove suffering in a, in a just way, especially in the immediate. So no, I wouldn't equate the two but certainly a lack of suffering is a component of flourishing.

RS: What does it mean to be a good citizen in the 21st century?

MW: It's-- it's a very good question. I think there's a-- a spirit of neighborliness. I think the ability to listen. The ability to reach outside of self or parochial interests and tie your fate to those of, those that you're in community with. I think an ability to promote the affirmation of human dignity and advance justice, and not see politics, not see the public realm, as merely a place to go to seek self affirmation and self realization. It sounds obvious to say, but citizenship is-- is not an individualistic endeavor. And it's not supposed to be only internally focused. Certainly as we enter the public realm we have even a responsibility, I'd say, to represent our own self-interest, but we must conceive of our self-interests within the broader whole; entering the public realm in a way that is solely about the acquisition of power for yourself or for your team is highly destructive. A lot of the forces of 21st century American life lead toward exactly that, lead towards a sort of insulated, self-interested form of civic engagement. But the ideal citizen in this age will be able to build up the sort of internal resources to transcend that.

RS: Not individualistic -- communal based?

MW: Essentially, I mean, if I was going to, if I was going to-- speak in religious terms, I've been thinking a lot lately about the Apostle Paul's letter to the Galatians. It's a book in the New Testament. In that book Paul is writing to essentially a polarized community. He says, he goes through, the community was dealing with false teachers, the people in the community were kind of pursuing their own ends and arguing with one another. And he says this amazing thing that just sort of goes against every impulse that I think we have in this age. I mean right now, think about the advice you would give to the American public that is bitterly divided, tribalistic, and a lot of the advice you hear is that, you know, we need to sort of separate them. They need to kind of find out how to just not kill one another. What Paul writes to the Galatians is that he says they ought to bear one another's burdens and that in doing so they'll show the love of Christ. There is a-- now he's writing to a Christian community that is bound not just by geographic location but by a shared faith commitment. We have a social contract here in America. We have, just by nature of liberal democracy on either side of the Atlantic, we-- we-- there is a common obligation we have to one another, and it's not just to live together, ideally it's not even just to work through the political process to acquire the most power we can and try and get the most we can for ourselves, but it's actually to see in people who politically disagree with us people who also deserve to be heard, people whose interests deserve to be respected by the political process. And so the ideal citizen will find a way to invite the best expression of even interests that they disagree with into the political conversation, because they realize that politics is not just about them. It's not just about what they need. It's about the community together.

RS: How has the definition of citizenship changed?

MW: It's an interesting-- interesting question. You know, I think, I am not sure that the political responsibility I would express in the American context, or in the modern sort of liberal democratic context, is something that transcends sort of time and space. I think other forms of government, even other times, may require different things from us. In a representative democracy you don't choose to have political influence. You're invested with it just by virtue of being a citizen. You have political responsibility and the only choice you have is whether to steward it well or not. And so I think in the modern era there is both an increased sort of distance and ability to withdraw from the political process -- the political process I think seems removed from broad swaths of the public -- while at the same time the responsibility is as direct as ever, given the sort of innovations of democracy that allow for really clear ways of input from the public into the political process.

RS: AI algorithms -- how is it impacting citizenship?

MW: Absolutely. I think-- if we talk about the two ideas of-- or sort of aspects of flourishing I discussed earlier, sort of the integrity of the person and the ability to sort of create for the good: I think, on balance, technology in the AI age has provided great capacity to create for the good. I think there are obviously some-- some negatives there. It's also obviously increased our capacity to create frail-- to actually use the systems that we've built to promote some but not others. I'm even more concerned, though, about that question of integrity. The way that our networked age allows a sort of feigned knowledge, a sort of feigned community. The ability to hint at personal understanding without the actual relationship being there. To be able to break down people into a number of decision points but never really being in the flow of their lives. We see this in politics. We see this just in our social media lives, and that has impacted our most personal relationships. And it leads people to strive for that connection, but, you know, through a form that will never lead us to the peaks of any sort of human relationship. And so there is a real, there's a real tension there, and that undermines the integrity of the person: to be seeking something through a forum and through a medium that at the end isn't going to be able to facilitate that absolute connection that we desire.

RS: The upside of the Internet was the sensibility of the creation of a global community. In some ways have created an echo chamber?

MW: I think that's very much an aspect of what I'm talking about. There is a way in which this age has helped people to find communities they would have never found before, to find people that they view as like them, in ways that have never been possible. I think what happens is that there are forces and interests that see those very communities building and are finding ways to manipulate them for their own purposes. And so this thing that feels very personal, as we've seen, you know, in politics with sort of false information being put out to sort of promote these sort of tribal controversies that take on a life of their own even though they're not grounded in fact -- but sort of the facts don't seem to matter when it confirms your own identity and the deepest things that you believe to be true about the world, and even when they're corrected it's like, well, well, that instance wasn't true, but the point it was aiming towards was right. So even, even if, even if the facts don't match, the aim of the lie was correct. And again that leads us to feel we're constantly exposed to the potential to be manipulated. We're constantly exposed to the potential, even, even if it's hidden, to being used, to having our deepest affections used for ends that maybe wouldn't be our own if we knew everything that was, that was happening. Now I don't want to deny the power; there are voices that could have never been heard 20 years ago that now have an ability to impact our democracy, have the ability to impact public conversations, culture, entertainment, media, news sources, in a way that would have never been possible.
You can organize people to speak out against a news story you didn't like or that you thought was unfair, where you just simply would not have been heard before; you could write to the ombudsman with an opinion on it, but it would be, it would be more difficult, you could write to the newspaper, but man, to be able to instantaneously organize people and tweet at the editor of a newspaper is a, is a powerful thing. And the way that that shapes news coverage, the way that it shapes-- think of all the controversies we've had, social media sort of network driven controversies around advertising, and the immediate sort of pushback that certain, certain companies have received and the almost immediate response they've received. So there's a certain power, a certain democratic sort of aspect to technology. It's just opened it up for a whole bunch of people, but, but again those very same tools that provide democratic access are also used by interests and in organizations with interests to manipulate people's affection.

RS: Nazi propaganda -- how does it distinguish itself from what a networked world looks like today for citizenship?

MW: I think it's able to spread faster. It's able to spread with, without the sort of gatekeepers that had to be weakened and undermined in Germany, but into-- in a networked age you can just simply go round the gatekeepers in a way that that, that wasn't, wasn't as possible. You had to be the gatekeeper in order to manipulate people in the way, in the way that the Nazi regime did. Now you don't, you don't-- so in political communications there's often conversation about sort of going around the messenger. So, why even communicate through these, through these gatekeepers -- in political cases they're mostly talking about journalists -- when you can reach the voter directly with a message that's directly tailored toward them? Based on what? Well, not personal knowledge, often. Not personal inter-relational knowledge, but a series of decision points they've made in their lives. What magazines they subscribe-- they subscribe to, what their income is, what their racial and ethnic background is. And so that, that again feigns that kind of familiarity, which is then leveraged for not just relational understanding but towards a, towards a specific end.

RS: Up and down side, speed and access?

MW: The upside is, people are able to be seen in ways they wouldn't be able to be seen in an increasingly sort of, well, in a sort of sprawling sort of democracy in America, you know, 360 million people. Through the use of technology there's ways for the citizen to access decision makers and ways for decision makers to get some sort of sense about who they're trying to reach, about who they're representing, in a way that just wasn't possible before. It's no longer an age of just doing focus groups with 20 people and extrapolating that onto an entire public. You can now get individual information that's based on an interpretation of reality, an interpretation of the facts, that allows a responsiveness that just wasn't possible before. That's, that's something that pushes us towards sort of a greater democracy. And I think that the downside is sort of the inauthenticity of it all. There can be a dehumanization that takes place when people are sort of boiled down to the data points that are available to decision makers, which isn't all of them. So we're actually sort of creating profiles of people, and I'm not just talking about politics, I'm talking about advertising, I'm talking about marketing, I'm talking about, I mean, this has reached into medicine. I mean this has reached into our-- this is modern life. There's a dehumanization that takes place that can, that can just sort of throw things off, that leads people to, to not be dealt with with integrity, and so it's very difficult to respond with integrity when so often the interactions that we receive aren't dealing with us as persons but as a set of data points, and that's, that contorts sort of the, the human sort of image or imagination for, for him or herself.

RS: Will there be a day when AI will know me better, knowing the vote I should cast rather than the vote I am casting?

MW: Well, I think that is the fundamental question. To say that is to view politics as simply a consolidation or an expression of one's personal circumstances and one's own personal expression. And that's the way in which this technology can actually undermine democracy: when we don't see people, when we don't even listen to people tell us who they are and what they support, and instead say, well, the data kind of shows that this would be good. Now there have been expressions of this in the past. Thomas Frank wrote a book called What's the Matter with Kansas?, which basically suggested that broad swaths of voters in Kansas, in rural America, were voting against their interests. Of course, Thomas Frank had a very different conception of their interests than the voters did. And yet the response to that was not to say, well, politicians should respond to the interests these voters are expressing; it was, how irrational these voters are, what do we even do with them? Technology is compounding this problem in that we're able to break down, by tenths of percentage points, how likely voters are to vote for your candidate. And those that fall under 50 percent, or wherever the line is drawn (campaigns have different ways of choosing who counts as a persuadable voter), those voters are treated with less respect and attention. Now again, the pro argument for this is that, to the extent the algorithms are correct, it leads campaigns to talk to voters in a way that's sensitive to their needs, voters they may have never found before, who may have never heard from a campaign before, who would never have been invited to participate in the democratic process in the same way. But where this can lead, and where it is in many respects, is a dehumanization of the decision-making process. So I give an example in my book of a campaign I was working on that was doing fundraising emails.
I noticed that the campaign had started using curse words in the subject headers of some of its emails. I worked with religious voters at the time, and I had a very prominent national religious leader reach out and say, I support your candidate, but I can't forward these emails if they have curse words in them; I'm a religious leader. So I ran that up the flagpole, and the response I got wasn't a thoughtful one; it wasn't, we weighed the values and the character of the candidate, and this is really in line with the candidate's mission. Instead, the response was that the data coming in showed that when a curse word was used in the subject header, the open rate of the emails increased by a percentage point or two. And so the decision was, how do you argue with that? If you're a campaign with the objective of getting people to open and follow up on your emails, what tactic that helps you achieve that goal could possibly be off the table, if your cause is just, if your cause is right? And I'd say we need to have other metrics in the mix than just the utilitarian: does this help us reach the short-term goal directly in front of us? Technology can help us see out farther; it can help us see people we would never have seen before. It can also close in our vision. It can put a lid on our conversations and lead to a kind of enclosed thinking that doesn't allow us to look up and think of higher things, think of our better angels.

RS: Moral conscience?

MW: I think it means a lot of things. Just to build on what I said: a citizen with a moral conscience will say no to things that are in their personal interest, or yes to things that are against their personal interest, because it is in line with a moral code, because it is in line with what is beneficial for the community, in line with the common good. To be a citizen with a moral conscience means that you don't just go to politics seeking the maximum material gain in the shortest term possible, but that delayed gratification is part of the political process, an adjustment of your will to be more closely aligned with the good of the community in which you live. These are the aspects of moral conscience for a citizen. And technology, I think, is generally neutral on this question, if you leave out the interests of those who hold the greatest power over it. I think technology could be used to promote greater self-sacrifice among citizens. The problem is that we have a political system built on gaining the approval and support of citizens, and self-sacrifice isn't exactly a great winning message, depending on the character of the citizenry. And so we almost have a chicken-and-egg problem: citizens with a moral conscience will reject politicians' appeals to their baser instincts, but politicians rely on the souls of the people they serve, the conscience of the people they serve, to determine where to go, how to lead, and what's possible in leadership. And so I've come to the conclusion that there is a whole range of structural, technocratic things we could do to help nudge the system in a better direction. But without a reformation, a strengthening, of the American civic character, the civic character of individual Americans, the structures will simply respond to what is there, and little tweaks will not be able to withstand the actual desires of the American people. 
So, for instance, we know through data, through AI, that A, the American people say they don't like negative advertising, and B, negative advertising is far more effective in political campaigns than positive ads. And so it seems the American people have a crisis of conscience in that area, and in so many others, where they know what they ought to desire and yet can't help but be swayed by those baser things.

RS: Is the change to online communities helping or hindering people's development of their moral conscience?

MW: I really do believe it's both. I believe people have been made sensitive to things, sensitive to the conditions of their neighbors, that they would never have had access to before; and even with access, they would never have been presented with repeated opportunities to engage that information. And now many of them are. Certainly there are stories that don't get heard, interests that don't get enough play to impact the political conversation or the social conversation significantly. But I do think there have been tremendous gains through technology in developing people's moral conscience. On the other hand, again, we've talked about the ways in which people's sensitivities and proclivities and preferences can be manipulated to activate their conscience for insincere reasons. I think the short-term effect of that is either an echo-chamber mentality, where you shut yourself off (it's just too disturbing to think that your conscience could be so easily manipulated, and so you ignore contrary facts), or a sort of apathy: I got fired up about something that seemed true and real and important, and eight hours later I found out it was a hoax. When that's happened to an individual, say, three times in a week, at some point the desire to be civically engaged, the desire to care about those around you, gives way. You either have to choose a side and close yourself off, even to contrary facts, or all the contradicting information and the constant selling to the conscience, the American conscience, leads to apathy. That's the relatively short term. In the long term, you could see an undermining of the very legitimacy of democracy and of our system: 
of how we think about our decisions with capital, how we think of our political decisions, how we think of our technological decisions. I mean, there was an interesting New York Times story recently that pointed out the rapid rise of wealthy school districts promoting access to technology, laptops and smartphones for every student, which then reached into poor school districts trying to keep up. Now the wealthy school districts are actually reversing the trend; it's now considered an elite way of living, and of teaching kids, for them to have no access to technology during the day. And it leads to this growing sense that the very things we thought we could use to strengthen us are actually undermining us. Now we're trying, in some ways, to find a happy medium; in other ways there's a completely opposite reaction, which is: how do we wean ourselves and isolate ourselves from the network as much as possible?

RS: Self-giving in the network age?

MW: It's a great question. I think there are many avenues available for that. Part of it is ensuring, on the user end, that we're opening up both the inputs and the outputs: what we're putting into the network and what we're getting out of it. That we're incorporating perspectives that are different from our own, incorporating interests that are different from our own. With social media this is actually relatively straightforward. You can manage who's on your feed, who's on your timeline, so that it's not a constant stream of affirming viewpoints that only confirm how great you are, but you're actually following a diversity of people. Just to continue with social media: are you only using it to market yourself, your products, and your work, or are you using whatever influence you have, whether you're on social media for the seventy-five followers you know personally or you have a real platform, in a way that has integrity, promoting the work of others and giving other people a place on the platform? I think those kinds of lessons apply somewhat differently, but those types of applications have expressions in other areas of technology.

RS: Is there a common good?

MW: The idea of the common good is not, especially in a world like ours with a politics like ours, that there is one perfect, utopian setup that advantages and disadvantages everybody equally. I think the problem, to move from the material to the spirit of the enterprise, is that we've ceased even looking toward the common good. There's an idea in Reformed theology, in public theology, of approximate justice: the idea that we ought to accept that through politics we're never going to achieve perfect justice, but the Christian's aim is toward getting as close as we can. Just because we can't achieve perfect justice (and frankly, the means that might be necessary to achieve perfect justice may not themselves be just), approximate justice, getting as close as we can with as faithful means as possible, is the aim. And as we've discussed so much today already, there are the personalized, commercialized uses of technology in our politics and in our public life more generally. Because, and this is important, there's been an understanding in some quarters that politics is downstream from culture. What's really important to understand is that it's bad to think of it as a stream at all; politics feeds into culture and culture feeds into politics. They're all pulling on each other. And so the appetites that are built up for us in our politics affect the way we purchase products, the way we treat our neighbors; and the appetites that are built up for us within our relationships, our churches, through culture, affect the way we express ourselves in politics. So many of these forces are leading us to think in a self-interested way first, in a way that's detached from community.
And when we think about the collapse of marriage, when we think about the increasing ability to get the products we need without interacting with people, in so many areas of our life the self-interested and the individualistic are being affirmed. That leads to real problems when you then try to take those same people into a political realm that has to be focused on more than just the individual in order to function properly.

RS: What if AI does our moral reasoning for us? Or will it?

MW: You know, the idea that moral reasoning is just a matter of following a set of dictates isn't reflected in the Christian tradition, at least. People were often trying to catch Jesus out: well, you're not following the letter of the law. In one passage in the Gospels, for instance, someone tried to accuse Jesus of breaking the Sabbath because he healed someone on the Sabbath. He said, you're missing the whole point, you're missing the context of what's happening, you're missing the spirit of the law in which it was given. And so I think we could absolutely have a more legalistic society through AI; I think we're actually seeing some of that now. But that would cut out the role of things like mercy and compassion and forgiveness, which I have doubts we'll be able to capture in a way that is not legalistic. In the Christian tradition, morality demands more of us than legalism, more of us than merely following the law. It's about the orientation of our hearts, and AI certainly isn't going to do that for us.

RS: The Internet can be a way of bringing people back together?

MW: I do, and I think there are shining examples of where it's doing just that. I think sometimes it's harder for these efforts to be financially sustainable; sometimes the energy can drain out of them. But I think of examples: my friends ran a website called The Toast, which had this remarkably, oddly wholesome Internet community for a lot of people who had felt cut out of society, a lot of people who had felt their voices weren't being heard. I think we're seeing a lot of communities like that online. We're seeing the use of technology and AI to improve or facilitate human relationships, whether it's dating or just finding people with similar interests. And we're seeing technology used, for instance, by an organization called Charity Water, which has been revolutionary in the international development space: it connects donors, from five-dollar donors to six- and seven-figure donors, directly to the projects. They drill water wells around the world and provide direct access and information to the donor about exactly what their money is funding, which facilitates this beautiful sense of community. It's not just writing a check to people less well-off than you; you're actually able to see a community benefit from the well your money helped to build. Some of those things are beautiful. So yes, technology can be wielded for the good, and it is right now. As to your question of how it will end up, I just go back to: it really depends on what appetites people have, what they're most eager to say yes to and what they're willing to say no to. That will determine the overall fate.
But yes there are bright pockets of technology doing wondrous things to bring people together and make people feel heard and to orient people towards the common good in ways that just weren't possible before.