Opportunity
Humans and our technology exist in a complex feedback loop. As we create new devices, they, in turn, influence us, both enhancing and circumscribing our life experiences. This loop is particularly powerful in the case of artificial intelligence, which amplifies our capacities but also offers the chance to abdicate some of our responsibilities, delegating choices to clever machines. Templeton World Charity Foundation will support research into this loop and the development of theories, models, and technologies that support human moral strengthening and advancement. We seek to support approaches that combat the possibility of humans becoming simply passive consumers of circumscribed choices generated by our own machine aids.
Research and development of artificial intelligence is richly funded by those seeking profit. Vast resources are currently deployed in search of “monetizable” theories and technologies, but there remains little direct funding focused on how these innovations affect human agency, moral capacity, and development. This presents an unparalleled opportunity to leverage our resources by partnering with laboratories, researchers, and companies that already have funding for their technical work but seek additional co-funding for specific research into, and development of, approaches that enhance human flourishing. We anticipate, therefore, that the bulk of grants under this challenge will be highly leveraged against funds designated for purely technical aims.
We already use AI in many contexts with real human impact, from immigration control in airports to power distribution in towns and cities. These uses are increasingly joined by far more subtle applications, with machine logic sometimes substituting for human judgments, decisions that can carry tremendous moral or ethical weight. Among the most widely discussed are algorithms that help governments predict which offenders are most likely to reoffend, algorithms that the U.S. Attorney General recently worried might “inadvertently undermine our efforts to ensure individualized and equal justice”. Ensuring just outcomes should be a prerequisite of all such applications, but given the current state of the art in AI, that is challenging. Because these algorithms offer economic efficiencies and predictive power, it will take diligence to ensure that moral human decision-making remains foregrounded, fully embedded in and empowered by these new tools.
In some of the above cases, the machine age seems to allow us to “outsource” our decision-making. Of course, banal decisions such as which song comes up next on our playlist seem to lack moral or ethical content (though they may, in fact, have subtle and important effects). But in more momentous situations we certainly risk ceding part of our inner compass. How do we ensure that the instructions and values we program into these machines are consistent with our own, or that the choices we ask them to make on our behalf will be consistent with a benevolent outlook on the future of human existence? And, in a world of many (sometimes conflicting) ethical systems, how can we best elucidate the structure of that decision-making? These questions must be answered if we are to ensure that our innate capacities strengthen rather than atrophy.
Such questions have begun to be explored in the fields of computing, artificial intelligence, and robotics, and in subfields of philosophy, theology, and the social sciences. The nascent study of machine ethics asks how moral decision-making might be implemented in computers and robots. Some researchers, for example, have already begun to use stories to “teach” human values to AI systems, so that even morally neutral tasks are carried out in socially acceptable ways. We will seek such points of leverage where novel approaches offer particular promise.
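To make the idea concrete, a toy sketch of story-based value learning follows. The scenario, action names, and scoring function are illustrative assumptions made for this sketch, not the design or API of any published system:

```python
# A toy sketch of "teaching" values with stories: example stories
# (sequences of actions) serve as demonstrations of acceptable
# behaviour, and candidate plans are scored by how well their steps
# match transitions observed in those stories. All names and scenarios
# here are illustrative only.
from collections import Counter
from itertools import pairwise  # Python 3.10+

STORIES = [
    ["enter_pharmacy", "wait_in_line", "pay", "take_medicine", "leave"],
    ["enter_pharmacy", "wait_in_line", "pay", "leave"],
]

# Count the action-to-action transitions that the stories present as normal.
norms = Counter(t for story in STORIES for t in pairwise(story))

def plan_score(plan):
    """Fraction of a plan's transitions that the stories have sanctioned."""
    steps = list(pairwise(plan))
    return sum(t in norms for t in steps) / len(steps)

polite = ["enter_pharmacy", "wait_in_line", "pay", "take_medicine", "leave"]
rude = ["enter_pharmacy", "take_medicine", "leave"]  # skips waiting and paying

print(plan_score(polite))  # 1.0 -- every step appears in the stories
print(plan_score(rude))    # 0.5 -- half its steps violate story norms
```

Note that the stories implicitly encode a norm (pay before taking the medicine) that was never stated as an explicit rule.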
The most promising approaches intentionally embed an element of human reflection and interaction – they seek to prevent us from becoming morally flaccid by regularly querying us and seeking ongoing “opt-in” or auditing. They try to combat the tendency of some of these systems to push human ethical decision-making to the background of our social-technological interface. Deep human feedback is crucial if we are to remain fully “in the loop” – exercising moral choice in the service of gaining ever-greater moral and ethical capacity. We will seek especially to fund those approaches that focus particularly on this “looping in” of human agency.
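A minimal sketch of what such a “looping in” mechanism might look like in code appears below; the class name, thresholds, and review callback are illustrative assumptions rather than a prescribed design:

```python
import random
from dataclasses import dataclass, field

@dataclass
class HumanInTheLoopGate:
    """Wrap an automated decision so a human periodically confirms it.

    Thresholds and names are illustrative assumptions for this sketch.
    """
    confidence_threshold: float = 0.9  # defer to a human below this
    audit_rate: float = 0.05           # randomly audit confident decisions too
    log: list = field(default_factory=list)

    def decide(self, case_id, machine_choice, confidence, ask_human):
        # Route low-confidence cases, plus a random audit sample, to a person.
        needs_review = (confidence < self.confidence_threshold
                        or random.random() < self.audit_rate)
        final = ask_human(case_id, machine_choice) if needs_review else machine_choice
        # Keep an audit trail so every automated choice can be revisited.
        self.log.append((case_id, machine_choice, confidence, needs_review, final))
        return final
```

The essential design choice is that the machine’s recommendation is never silently final: uncertain cases and a random sample of confident ones are always routed back to a person, and every decision is logged for later review.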
Such approaches highlight the very real possibility that we can learn to be better by creating AI tools that help us to more clearly see the moral and ethical quandaries we face and, perhaps, give voice to our own better angels. Because AI has the capacity to detect patterns and to perceive connections that often go unnoticed, such machines could, in principle, augment our own capacities and help us reflect, grow, and act better.
This sort of aid might be seen as an enhancement, a learning tool that extends rather than replaces our own perception, choice, and capacity. We challenge researchers to explore the possibility that machine intelligences, by dint of their capacity to digest far more information than humans can, could expose conditions, situations, and opportunities that we might otherwise have missed, thereby enhancing our capacity for choice, for moral and ethical action, and for attention to the most important aspects of life.
One of the grave challenges presented by all of these feedback loops is the opaque nature of some machine intelligences. Artificial neural networks, for example, function as closed “black boxes”. They clearly have rules that associate inputs with outputs, but current technologies offer little human insight into what those rules actually are. The result is that we might build and rely upon machines to help us make moral decisions that only seem consonant with our expectations and ethical precepts. It would be valuable to gain deeper insight into the workings of such tools, to be able to interpret the rules implicit in their inner workings so that they become more transparent elements in the complex loops of human decision-making and social change. Some research on this is already underway, and Templeton World Charity Foundation will seek to co-fund projects in this area, with a particular focus on the moral and ethical content of the inner programming of machine intelligences.
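One established transparency technique, sketched below on the assumption that the scikit-learn library is available, is to fit a small, human-readable surrogate model to a black box’s predictions and then read off the surrogate’s rules. The surrogate only approximates the network’s behaviour; it does not expose its true inner workings:

```python
# Sketch of a "surrogate model" approach to interpreting a black box:
# train a shallow decision tree to mimic an opaque model's predictions,
# then print the tree's rules. Synthetic data keeps the example
# self-contained; assumes scikit-learn is installed.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)

# The opaque model whose internal rules we cannot read directly.
black_box = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500,
                          random_state=0).fit(X, y)

# Train a shallow tree on the black box's *predictions*, not the true
# labels, so the tree approximates the rules implicit in the network.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

print(export_text(surrogate, feature_names=[f"x{i}" for i in range(5)]))
```

Printed out, the tree’s branching rules give a rough but inspectable account of which inputs drive the opaque model’s choices.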
Terms like “moral” and “ethical” are laden with personal, cultural, philosophical, and theological content, and work in this area should take a sophisticated approach to the subtle issues that arise. We seek to engage thinkers across geographic and disciplinary boundaries and to incorporate their insights into the technical work we support.