What Buddhism Can Teach Us About Artificial Intelligence — And Ourselves

By Thomas Doctor and Brie Linkenhoker
March 23, 2023
As AI evolves, humanity must engage with its implications not just as a practical matter, but as a philosophical, moral, and emotional one.

Artificial intelligence has the potential to be transformative for society and the flourishing of humanity. In some arenas, it is already beginning to change how we live and work. Repetitive tasks are being handed off to AIs, big data is analyzed in ways that no team of people using conventional methods could match, the best human chess and Go players lose to computers, and there is a simmering debate over the use of the technology to create art. Yet despite these uses, we are still likely many years away from a true, self-aware AI. This means we have a unique opportunity right now to envision what a society with AI would look like and what it would mean for us. Too often, these exercises conclude with a vision of either total utopia or total dystopia, rather than a more realistic and muddled future. By drawing on the tools and traditions of humanity, including its religious and philosophical traditions, we can better grasp what a future with AI might look like and how it could come to pass.

For the past two years, the Center for the Study of Apparent Thought* has explored how we understand AI from the perspective of Buddhism and, conversely, how we understand Buddhism from the perspective of AI. This type of research, bringing centuries-old philosophical and theological traditions to bear on modern technological questions, is vital to the development of humanity. In the case of AI, a great deal of interest and effort is currently directed at enhancing its powers, stabilizing it, and aligning it with human objectives; these efforts, by and large, center on engendering a sense of self in the artificial networks that underpin AI.

Of course, from the perspective of Buddhist philosophy, we human beings already have a sense of self. At the same time, Buddhist analysis holds that there is no self that is singular, permanent, and in control; it is an understanding of the world in which there are no selves, and yet appearances of a self arise. Complicating matters further, within Buddhism the sense of self, our conception of ourselves as singular, enduring individuals, is not associated with progress or stability. On the contrary, it is the root of a great many problems, if not of all the evil in the world.

These concepts challenge us, as we think about developing artificial intelligence, to consider what a sense of self would mean for such an entity. Buddhist practice seeks to cultivate an ethically rich view of the world, and this offers a way of practically testing the claims we tend to make about artificial intelligence. Bringing Buddhist thinking to the development of AI could be a way of rethinking models for the evolution and growth of intelligence so that they are not structured around a single, enduring focal point or tied to the notion of an artificial general intelligence.

But what does all of this mean as a practical matter? Why are these questions important to consider when creating an artificial intelligence?

While these concepts may seem abstract, they cut to the core of how we will relate to superintelligent AIs. For our society to adapt in a healthy way to the emergence of AI, we must understand the moral value, or moral status, of such an artificial agent. We must grapple with whether it is morally defensible to develop an agent that can, for instance, feel pain, loneliness, or isolation. We must consider what it would mean for an artificial intelligence, an entity that in some domains already far outstrips human beings, to feel pain. The consequences could be devastating for human society, both ethically and practically.

Only by seeking to understand both ourselves and AI can we achieve a meaningful coexistence. As artificial companions increasingly populate our world, how should we relate to them? What role does an artificial caregiver play in our lives, and how do we interact with it? Now is the time to develop an intellectual and emotional understanding of what these beings are, so that we, as human beings, can live fulfilling lives alongside them. For humans to flourish now, let alone in the future, we need to know deeply who we are dealing with in this emerging field and what values we are bringing into it. These matters may be mundane, such as whether we should laugh when our computers tell us a joke, or serious, as when a computer becomes responsible for the comfort of someone who is dying. How can we relate to these new companions and individuals? And what does it mean for us as human beings?

We are still in the early stages of AI development and of this exploration of what it means for humanity and human flourishing. The research will continue for years as we seek to understand how AI will affect us. It is possible, if we are deliberate about how we develop and relate to AI, that we could even become more like some of the collective intelligences we see in the natural world. We are at a special moment in history: we are not simply able, but in fact obliged, to reconsider some of our fundamental definitions of who we are, what makes us unique and distinct, and how we can become better people. The choices we make now will determine whether we can incorporate this technology into our lives in ways that make us better humans and create a better world.


Thomas Doctor is Associate Professor at Rangjung Yeshe Institute (RYI) and Director of the Center for the Study of Apparent Thought (*also known as the Center for the Study of Apparent Selves, or CSAS).

Brie Linkenhoker, PhD is founder of Worldview Studio, a collective of brain and behavioral scientists, human-centered designers, and multimedia producers who create innovative learning experiences.