Jun 8, 2020

Technology Can Help Us, but People Must Be the Moral Decision Makers

As we grapple with a global pandemic, technologies such as AI and machine learning may help us defeat it, but we must make a moral decision about how our society will change.

By Andrew Briggs

The pace at which technology is changing the fabric of society is accelerating. Facebook launched in the United States in 2004. The iPhone did not exist prior to 2007. In less than 20 years, daily life has been radically reshaped by mobile computing, social networks and, more recently, AI and machine learning. These technologies are frequently demonized. Developments in machine learning and AI are typically portrayed as steps along a path that ends in robot domination of humanity, a post-truth world where false storylines are pushed by botnets, or an impoverished future in which millions of people have been put out of work by heartless machines. This all sounds scary, but I believe that machine learning in fact represents a huge opportunity for the betterment of humankind. What must change for that to happen is the way we, as a society, discuss and think about technology.

Machine learning — essentially an algorithmic process whereby a system improves its own performance through repetition — and the wider field of artificial intelligence have been highly divisive for many years. On the one hand, people from quintessentially geeky professions, such as materials science (my own field of research), computer science or medicine, typically embrace machine learning as a powerful tool for speeding up research and improving outcomes across a spectrum of industries. On the other hand, experts in the humanities and social sciences often approach AI and machine learning more circumspectly, alert to the potential for abuse by corporations, governments and malcontents. As technology’s integration into our society accelerates, the debate over its moral use has become ever more urgent. With the sudden shift to a digital existence brought about by the COVID-19 pandemic, this debate risks reaching a boiling point. Fortunately, the best thinkers from both backgrounds, science and the humanities alike, recognize the importance of both points of view.

There is ample common ground when the two sides are willing to listen to each other. Generating real dialogue — and engaging in active listening — between disparate groups around important technology topics, such as the role of machine learning in society, was a key goal of a report that my colleagues at the University of Oxford and I recently completed, sponsored by Templeton World Charity Foundation. That report, Citizenship in a Networked Age, seeks to understand what it means to be a citizen in an increasingly interconnected and yet simultaneously isolating digital world. An important finding of the report is that the disjuncture between technologists and social scientists can be overcome, because it derives not from a fundamental disagreement on principles but from approaching the debate with different expertise and concerns.

Many individuals in technology fields see tools such as machine learning and AI as precisely that — tools — intended to support human endeavors, and they tend to ask how such tools can be used to optimize technical decisions. Those concerned with the social impacts of these technologies tend to approach the debate from a moral stance and to ask how these technologies should be used to promote human flourishing.

This is not an unresolvable conflict, nor is it purely academic. As the world grapples with the coronavirus pandemic, society increasingly faces decisions about how technology should be used: Should sick people’s contacts be traced using cell phone data? Should AIs determine who can or cannot work or travel, based on their most recent COVID-19 test results? These questions have both technical and moral dimensions. Thankfully, humans have a capacity for moral choice that machines simply do not.

One of our findings is that for humanity to thrive in the new digital age, we cannot disconnect our technical decisions and innovations from moral reasoning. New technologies require corresponding innovations in society. To believe that the advance of technology can be stopped, or that established moral frameworks need not be applied afresh to new circumstances, is to set out on a fraught path. There will often be tradeoffs between social goals, such as maintaining privacy, and technological goals, such as identifying disease vectors.

Delivering the 2019 Romanes Lecture at Oxford, Baroness Eliza Manningham-Buller, former Director General of the UK intelligence agency MI5, regretted that during her tenure there was too little public debate over the value of privacy versus the power of intrusive data gathering to forestall terrorist attacks. There is a constant tension between the impulse to gather ever more data in order to prevent attacks and the right of people to live their lives unsurveilled by the government. Unless we generate a vigorous debate now, the use of technology to slow the COVID-19 pandemic may similarly lack civic engagement. To avoid echo chambers of vested interests, everyone needs to be able to participate in that debate.

This debate needs both technical and moral input. The old model of Homo economicus — the rational, selfish, greedy, lazy man — has passed its sell-by date. It is being upstaged by what I like to call Homo fidelis — ethical, caring, generous, energetic women and men. The associated virtues can be fully exercised only through cooperation, in ways that have changed since the lockdown. If we get it right, the technologies of the networked age will provide new opportunities for Homo fidelis to promote human flourishing at its best.