Moral Skill and Artificial Intelligence
As humans, our skills define us. No skill is more human than the exercise of moral judgment. We are already using artificial intelligence (AI) to automate decisions—many with major moral implications. In other domains of human activity, automating a task diminishes our skill at that task. Will “moral automation” diminish our moral skill? If so, how can we mitigate that risk, and adapt AI to enable moral “upskilling”?
Directed by Seth Lazar, this Diverse Intelligences project—a partnership with the Humanising Machine Intelligence (HMI) grand challenge at the Australian National University—will use philosophy, social psychology, and computer science to answer these questions.
The first stage of treatment is diagnosis. The project team will begin by identifying existing and prospective varieties of moral automation before exploring the philosophical and social-psychological foundations of the argument from moral automation to moral deskilling. In doing so, the team will determine just why, and how much, we should be worried about moral deskilling.
The next stage of treatment is mitigation and adaptation. The team will propose technological and institutional solutions to mitigate the risk of moral deskilling, but they will also argue that AI systems will enable us to adapt to the challenge of automation by morally upskilling in other areas. In particular, some measure of moral automation will free us to pursue the most morally demanding aspects of our personal relationships. AI research will enable new kinds of moral knowledge and moral inquiry, and by affording us new understandings and capacities, it can make new kinds of moral behavior possible. Dr. Lazar’s project will produce scholarship of the highest order. But the team’s goals are not narrowly academic. Through the HMI project, they will translate their research to maximize its impact at all levels of society.