Is it sometimes rational to believe things that aren’t true? Evaluating a normative standard for when beliefs should and shouldn’t change
TWCF Number
31517
Project Duration
September 1, 2023 - August 31, 2025
Core Funding Area
Big Questions
Region
North America
Amount Awarded
$247,276


Director
Daphna Buchsbaum
Institution: Brown University

Co-Director
William Cunningham
Institution: University of Toronto

When we are exposed to contradictory information, biases in information processing shape how that information is understood. These biases can contribute to polarization. This phenomenon is especially apparent for ideological, moral, or core beliefs, which help people structure their broader belief systems and identify their place in a social world, in addition to helping them make predictions and decisions. Many cognitive models assume that a learner wants to be accurate, so that new information will be used to update beliefs rationally. However, there are times when changing one belief can have cascading effects through a belief system that may threaten one’s understanding of the world or of one’s social relationships. In such cases, it may be rational, at least temporarily, to maintain slightly inconsistent local beliefs until a new global understanding can be conceptualized.

A project led by Daphna Buchsbaum at Brown University and co-directed by William Cunningham at the University of Toronto seeks to understand this phenomenon by combining ideas and methods from social psychology and cultural evolution with a newer class of boundedly rational Bayesian models, known as resource-rational models. Resource-rational models have been successful in reconciling human errors, heuristics, and biases in other areas of cognition with the notion that we are fundamentally rational agents who are nonetheless subject to limits on time, memory, and access to information.
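To picture what a resource-rational account means in practice, consider a minimal, purely illustrative sketch (not the project's actual model): an agent with a strong prior belief updates it against contradictory evidence, first exactly (fully Bayesian) and then under an assumed attention budget that limits how many observations it can process. The function names, the Beta-Bernoulli setup, and the budget parameter are all hypothetical choices made for illustration.

```python
# Illustrative sketch only: exact Bayesian belief updating versus a
# resource-limited approximation that attends to a fixed budget of evidence.
# This is one simple caricature of "bounded" updating, not the framework
# developed in the funded project.
import random

def exact_posterior(prior_heads, prior_tails, observations):
    """Exact Beta-Bernoulli update: incorporate every observation."""
    heads = sum(observations)
    tails = len(observations) - heads
    a, b = prior_heads + heads, prior_tails + tails
    return a / (a + b)  # posterior mean belief that the coin lands heads

def resource_limited_posterior(prior_heads, prior_tails, observations, budget):
    """Bounded update: the agent attends to only `budget` observations,
    so its belief moves less than the fully rational posterior would."""
    sampled = random.sample(observations, min(budget, len(observations)))
    heads = sum(sampled)
    tails = len(sampled) - heads
    a, b = prior_heads + heads, prior_tails + tails
    return a / (a + b)

if __name__ == "__main__":
    random.seed(0)
    # Strong prior that the coin favors heads (standing in for a "core" belief),
    # followed by mostly contradictory evidence (tails, coded as 0).
    evidence = [0] * 18 + [1] * 2
    print("exact update:  ", round(exact_posterior(10, 2, evidence), 3))
    print("bounded update:", round(resource_limited_posterior(10, 2, evidence, budget=4), 3))
```

In this toy setting the bounded agent's belief stays closer to its prior than the exact Bayesian posterior, not because it ignores evidence out of irrationality, but because it can only process part of it; that is the flavor of explanation resource-rational models offer for apparent biases in belief revision.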

The aim is to develop a unified computational framework that captures theoretical accounts of biased belief revision. From this framework, the computational models that best characterize human belief revision will be identified, ultimately guiding the development of interventions designed to mitigate the harmful biases these processes can sometimes produce.

By modeling individual and cultural differences, this approach recognizes that different people or groups may reasonably maintain different belief structures and updating strategies.

Disclaimer
Opinions expressed on this page, or any media linked to it, do not necessarily reflect the views of Templeton World Charity Foundation, Inc. Templeton World Charity Foundation, Inc. does not control the content of external links.