Nov 27, 2023

How Citizen Science is Helping to Improve Human-Algorithm Behavior

In this Q+A, computer and social scientist J. Nathan Matias discusses work designed to advance the practice of flourishing in online communities.

By Templeton Staff with Dr. J. Nathan Matias

Dr. J. Nathan Matias, founding director of the Citizens and Technology Lab (CAT) at Cornell University, has recently been awarded a Mozilla Rise25 Award in the Advocates category. The Rise25 Awards recognize artists, activists, creators, builders, and advocates who are shaping the future of the Internet to be more ethical, responsible, and inclusive, ensuring a positive future for all. Mozilla honors awardees in the Advocates category for shaping the regulations and policies governing the internet, fighting to keep it open and free.

"Visions of a flourishing Internet can’t be achieved if we are stuck playing defense," Matias says, reflecting upon the community of academic and citizen scientists bringing awareness to these issues. "The Rise25 Award isn’t just for me and CAT Lab — it’s for anyone who has published an investigation, started a lab, established a funding stream, and collected evidence in the public interest."

We caught up with Dr. Matias to learn about the results from his project supported by Templeton World Charity Foundation (TWCF), and to find out a little more about his work championing independent technology research. Read the Q+A session below.


Q+A with Dr. J. Nathan Matias

In your recent paper, you note that today’s communications technologies have broadened access to so much information that society now must rely on automated systems to filter, rank, suggest, and inform human thoughts and actions. What are the biggest benefits and drawbacks of this?

J. Nathan Matias: As individuals and as societies, we tend to want access to more opportunity than we are able to actually handle. I love being able to connect with people and ideas around the world through the Internet. Yet since we only have so many hours each day, we turn to schools, newspapers, churches, and now algorithms to suggest what we should focus on. Because the power to filter what we see holds a significant influence over human behavior in aggregate, it’s a valuable service, an important stewardship, a bitter battleground, and very big business.

But there’s a very big problem. As I wrote in Nature this summer, technologists can’t currently provide any verifiable assurances about the values these algorithms will ultimately steer humanity toward. Will they encourage more or less good in the world, increasing hatred or understanding or generosity? Leading computer scientists and social scientists say that’s currently impossible to answer, and tech leaders agree, despite continuing to put these systems into the world.

In my research, I’m trying to make adaptive algorithms reliably serve the common good by working alongside affected communities to improve the science of human-algorithm behavior.


Technologists can’t currently provide assurances about the values algorithms will ultimately steer humanity toward.


Using a large-scale field experiment, your team at CAT Lab set out to test whether influencing readers to fact-check unreliable sources would cause news aggregation algorithms to increase or reduce the visibility of those sources. Please describe how this field experiment worked.

J. Nathan Matias: Our research always starts with a question from communities who want to understand their digital environment. This study began with the 70 volunteer moderators of r/worldnews on Reddit, a community that, at the time, had 13 million subscribers who shared and discussed news from places outside of the US.

Over a dozen times a day, the community would receive posts of prejudiced and unreliable stories that often spread fear and hatred, stories that Reddit’s algorithms would amplify to even more people as readers clicked and commented on them. Community leaders wanted to encourage their community to think critically and fact-check the articles, but they were worried it might backfire due to Reddit’s algorithms.

The community’s leaders had good reason for concern. A few years earlier, in the lead-up to the ethnic cleansing and genocide of the Rohingya in Burma/Myanmar, a peace-focused group organized the “Panzagar” campaign to encourage understanding rather than hate. According to organizers, Facebook’s ranking algorithms interpreted encouragements toward peace as signs that hatred was popular — and sent hateful messages to even more people. Stories about the failure of Panzagar hadn’t yet reached us, but leaders of r/worldnews were worried that the same thing could happen on Reddit. For more context on Panzagar, see Erin Kissane’s piece and related series.

When your community has an idea for good that you’re not sure will work — that’s a perfect moment for the kind of community science that we do at CAT Lab. Using our software for community experiments, we organized a randomized trial to study the effects of encouraging fact-checking. Working with community leaders, we downloaded data about thousands of past conversations, figured out how to monitor Reddit’s algorithms, and then conducted an experiment, with community consent.

In the study, we:

  • detected when articles from unreliable sources were posted,
  • randomly assigned encouragements to fact-check (or not), and
  • monitored two things: how people responded, and how Reddit’s popularity algorithms responded to what people did in turn (a simplified sketch of this procedure follows below).
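
To make the procedure concrete, here is a minimal, hypothetical Python sketch of that detect-randomize-log loop. This is not CAT Lab’s actual experiment software; the domain list, the message wording, and the post_sticky_comment helper are illustrative placeholders for whatever the real tooling does.

```python
import random
from dataclasses import dataclass

@dataclass
class Submission:
    """Minimal stand-in for a Reddit post; real tooling would use the API."""
    id: str
    domain: str

# Placeholder list of domains the community has flagged as unreliable.
UNRELIABLE_DOMAINS = {"example-tabloid.com"}

# Placeholder wording; not the message used in the actual study.
ENCOURAGEMENT = (
    "Readers: this source is often unreliable. If you find additional "
    "reporting, please post links in the comments."
)

def post_sticky_comment(submission: Submission, text: str) -> None:
    """Hypothetical stand-in for pinning a moderator comment on a post."""
    print(f"[stickied on {submission.id}] {text}")

def handle_new_submission(submission: Submission, log: list) -> None:
    """Detect eligible posts, randomize to treatment or control, log it."""
    if submission.domain not in UNRELIABLE_DOMAINS:
        return  # only posts from flagged sources enter the experiment
    condition = random.choice(["treatment", "control"])
    log.append({"id": submission.id, "condition": condition})
    if condition == "treatment":
        post_sticky_comment(submission, ENCOURAGEMENT)

# Example: two incoming posts, only the first is eligible.
log: list = []
handle_new_submission(Submission("t3_abc", "example-tabloid.com"), log)
handle_new_submission(Submission("t3_def", "reuters.com"), log)
print(log)
```

In the actual study, outcomes such as comments, up-votes, and front-page rank would then be compared between the treatment and control groups.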


What were the results? Do the findings have implications for the common good?

J. Nathan Matias: Across 1,104 discussions, we found that encouraging fact-checking did not backfire. On average, a few people would look for other sources and post more information early in the discussion. As other readers saw these fact-checks, the articles got fewer up-votes and were promoted less by Reddit’s algorithms — enough to take them off the site’s front page on average.

There’s also a bigger picture — this study shows how a science of human-algorithm behavior is possible and points to one of many methods for achieving it. If people have the capacity to monitor and evaluate the impact of algorithms on our lives, we can make better decisions about the values we want them to foster in our communities. By sharing what we find as open knowledge, we may also be able to develop more general approaches to steering algorithms toward the common good.


CAT Lab empowers lived-experience experts to collect data and test ideas to improve their communities.


Please tell us more about CAT Lab’s work with citizen behavioral science for the Internet. How does it help advance the practice of flourishing in online communities?

J. Nathan Matias: At CAT Lab, we are pioneering citizen/community science for our digital lives. Our work starts with the realization that wherever communities and organizations are working well for people, it’s thanks to people who invest their time and care. That’s especially true online, where communities like r/worldnews, Wikipedia, and more local groups depend on millions of hours of volunteer effort from people who care. As a lab, we empower those lived-experience experts to collect data and test ideas to improve their communities.

Since my first community science project as a grad student in 2015, CAT Lab has worked with people across many languages and continents to test the effects of mass gratitude campaigns on Wikipedia, test ways to prevent online harassment, investigate phone addiction concerns, organize crowd-sourced audits of censorship software, and pursue many more community-driven questions. We have also developed award-winning open science software systems to make this kind of work more possible and more ethical.

Building a citizen/community science movement takes a lot of time and community organizing — something we’re grateful to have support from TWCF to undertake. CAT Lab does this through Community Research Summits, where we convene community leaders and scientists to share their hopes and discuss possible research studies. As a junior professor and advisor to PhD students now, I’m also learning more about the structures within academia that can enlist more people into this kind of work — in a way that foregrounds scholars from diverse cultures and regions.


What future research directions do you imagine may emerge from your work in the field of online dialogue and community behavior?

J. Nathan Matias: As a computer and social scientist who organizes citizen/community science, I want to live in a world where people can study and improve our digital environments as easily as we investigate, protect, and improve our natural environments. Getting there involves community partnerships on new research and — increasingly — working to protect the right to research.

Communities are especially concerned about the intersection of AI with longer-standing questions about what it takes to sustain collective efforts for the common good. Because AI has already been affecting them for years, online communities are well-positioned to imagine and develop science the rest of us can learn from. For example, we are currently working with communities to try to forecast cascades of conflict online by monitoring algorithms that amplify conflict — and test interventions to encourage different conversations.

Learning from communities is also important when you can’t anticipate what will happen next. A few years ago, when we started studying volunteer burnout, we didn’t anticipate that generative AI would dramatically increase the burden on leaders seeking positive responses to misinformation, conflict, and prejudice. We’re now working with community leaders to imagine new research that can help them navigate the challenges of generative AI and other adaptive algorithms.

In the last year, researchers studying technology and society have faced unprecedented attacks and legal threats from tech companies and governments alike. Anticipating this threat to science with a group of other concerned researchers, I spent the last two years co-founding the Coalition for Independent Technology Research, a nonprofit that works to advance, defend, and sustain the right to study the impact of technology on society, whether you’re a community leader, journalist, NGO, or academic. I’ve had to reluctantly accept that the more this research makes a difference, the more it will be under threat, since there are people in this world who benefit from crises in virtue, character, and human understanding. I never expected the science of human decency would need defending, but here we are, and I’m grateful that hundreds of leading scientists and dozens of organizations have joined together to protect and support the collective search for beneficial knowledge about our digital worlds.



Dr. J. Nathan Matias is a computer scientist and social scientist who organizes citizen behavioral science for a safer, fairer, more understanding internet. A Guatemalan-American, Matias founded and leads the Citizens and Technology Lab (CAT) at Cornell University, where he is an assistant professor in the Department of Communication and field member in Information Science. Matias is also co-founder of the Coalition for Independent Technology Research, a nonprofit that works to advance and defend the right to ethically study the impact of technology on society.