Encouraging Innovation in Open Scholarship While Fostering Trust: A Responsible Research Assessment Perspective from DORA
Jul 9, 2024


By Zen Faulkes and Haley Hazlett, Declaration on Research Assessment (DORA)


In recent years, the landscape of research assessment has been evolving to better recognize and reward open scholarship and preprints. This shift is driven by funding organizations and academic institutions that seek to enhance transparency, accessibility, and equity in research.

In this piece, Zen Faulkes, Program Director, and Haley Hazlett, Program Manager, of the Declaration on Research Assessment (DORA) explore how emerging policies are reshaping research evaluation, the challenges of moving away from traditional metrics, and the vital role of fostering trust in the open scholarship ecosystem.

Emerging policies to better recognize preprints & open scholarship

Research funding organizations play an important role in setting the tone for what is valued in research assessment through the projects they fund and the outputs they assign value to. Similarly, academic institutions signal what they value through how they assess researchers for hiring, promotion, and tenure. An increasing number of research funding organizations and academic institutions have codified open scholarship into their research assessment policies and practices. Examples include Wellcome, the Chan Zuckerberg Initiative, the Ministry of Business, Innovation & Employment of Aotearoa New Zealand, the University of Zurich and the Open University.

This shift is accompanied by policies that recognize preprints as evidence of research activity (e.g., NIH, the Japan Science and Technology Agency, Wellcome, EMBO, and some UKRI Councils). Some funders, such as EMBO and many of the cOAlition S funders, now formally recognize peer-reviewed preprints at the same level as journal articles. A preprint is a scholarly manuscript that the authors upload to a public server but that has not (yet) been accepted by a journal (it is usually the version submitted to a journal if the authors decide to pursue journal publication). It can be accessed without charge and, depending on the preprint server, is screened and typically posted within a couple of days, making it available to be read and commented on. Because preprints offer a means outside of journals to share research results, they have the potential to support responsible assessment by decoupling journal prestige from assumptions about the quality of research findings. Preprints also enable the sharing of outputs and results that may not be attractive to a journal (for example, research that is technically sound but limited in scope, or null/negative findings). And because they are usually free to post, preprints can help reduce the author-facing costs associated with traditional open access publication, although preprint servers are consequently reliant on ongoing grant funding for their sustainability.

One of the most recent examples of a substantial policy change is the Bill and Melinda Gates Foundation’s March 2024 announcement of its Open Access Policy taking effect in 2025. The 2025 policy will introduce two changes for grantees: grantees will have to share preprints of their research, and the Gates Foundation will stop paying article processing charges (APCs). As with many shifts towards open access policies, these changes were motivated by several factors, including the Gates Foundation’s desire to provide journal-agnostic avenues for research assessment and to empower their grantee authors to share different versions of their work openly on preprint servers without the costs of APCs. Reducing the costs associated with publishing research and increasing readers’ access to research via preprint servers also supports more equitable access to research products.

The Gates Foundation used ten years of existing data to inform its decision to refine its Open Access Policy, has engaged actively with community dialogue, and has made it clear that this is a first step on a longer path to better evaluate research on its own merits and increase accessibility to research. Notably, the Gates Foundation took this step after considering the shortcomings of current open access models that rely on APCs, which effectively limit global access to research outputs. Given that the Gates Foundation is “the wealthiest major research funder to specifically mandate the use of preprints”, this approach is groundbreaking in its emphasis on preprints and its shift away from spending on APCs. It has also placed a spotlight on these issues and catalyzed discourse around trust in preprints. Policy changes like this indicate a willingness among research funders to take steps toward change and move away from recognizably flawed processes. This is an important step, since flawed processes are often retained because fixing them or adopting new processes is perceived as too high effort and too high risk (also known as the status quo bias).

Overcoming the status quo & tackling new challenges

Overcoming the status quo bias is difficult, but not impossible. A common concern around changing research assessment processes to include new ways of sharing knowledge is that it requires taking a leap into the unknown. Because these new policies are on the leading edge of change, there are gaps in our knowledge about their effects on research culture and assessment. For example, will assessors penalize researchers who include preprints in their CVs, or will research culture shift what it values?

Another key question centers on how preprints will affect traditional manuscript peer review. Traditionally, journals select a panel of peer reviewers, ideally field experts, who review manuscript submissions voluntarily and without pay. These detailed reviews inform an editor’s decision on whether to publish a manuscript. Preprints, by contrast, are generally only lightly checked before being made public, after which anyone can read, comment on, and provide peer feedback on them. This light screening is a common concern, though it is important to note that issues with rigor and reproducibility also exist within current peer-reviewed publication systems. Preprint peer feedback holds the potential for positive change, opening up the opportunity for authors to receive a wide range of community input and making it easier to spot issues early.

One step to foster trust in preprints will be to create a shared understanding of what preprint “review” is. What qualifies as review in the context of a preprint was recently defined via expert consensus as “A specific type of preprint feedback that has: Discussion of the rigor and validity of the research. Reviewer competing interests declared and/or checked. Reviewer identity disclosed and/or verified, for example, by an editor or service coordinator, or ORCID login.” Additionally, a growing number of preprint review services are available, for example VeriXiv (created through a partnership between the Gates Foundation and F1000), Peer Community In, and Review Commons, all of which have created infrastructure and pipelines to verify preprints and facilitate structured, invited expert peer review of preprints after they are posted. These services provide journal-independent assessment of preprints, typically using various forms of open peer review, making review more transparent, fostering accountability, and enabling reviewers to be rewarded for their contributions to the field. However, some have raised concerns about whether greater transparency increases the risk of retaliation, particularly for early career researchers; recent evidence suggests that more research is needed to determine whether such repercussions occur.

Questions like these are legitimate and highlight the value of the organizations that are actively seeking to answer them, like the Research on Research Institute, which studies the results of research policy reform using the same scholarly rigor that reform efforts are trying to foster in the academic ecosystem. Organizations like ASAPbio are working to address concerns around the agility of preprint servers to correct or retract preprints and to support rigorous and transparent preprint peer review processes.

In the meantime, fear of unintended consequences is not reason enough to avoid trying to improve research incentives and the culture associated with them. The changes that research funders are implementing to recognize and incentivize open scholarship practices are on the leading edge of reform efforts, pushing research culture forward in new ways that aim to address existing burdens caused by APCs and journal prestige. As with all policies that aim to shift assumptions around what can and should be valued in research, gaps in knowledge will need to be filled through iteration, open dialogue with the groups that new policies will impact, and careful study of how new policies change research culture.

Responsible research assessment & open scholarship are interconnected

Responsible research assessment: An umbrella term for “approaches to assessment which incentivise, reflect and reward the plural characteristics of high-quality research, in support of diverse and inclusive research cultures.” -RoRI Working Paper No.3

As well as progress in the reform of research assessment, a further fundamental change in the research ecosystem over the past decade has been the emergence of open scholarship (also known as open science or open research)¹. The UNESCO 2021 Recommendation on Open Science outlined a consensus definition of open science that comprises open scientific knowledge (including open access to research publications), open dialogues with other knowledge systems, open engagement of societal actors, and open science infrastructures. It is an inclusive movement to “make multilingual scientific knowledge openly available, accessible and reusable for everyone, to increase scientific collaborations and sharing of information for the benefits of science and society, and to open the processes of scientific knowledge creation, evaluation and communication to societal actors beyond the traditional scientific community.” This definition captures the broad nature of open scholarship: it is both a movement to change how scholarly knowledge is shared, and to address global inequities in scholarly culture itself. 

DORA (Declaration On Research Assessment) is a global non-profit initiative that actively works with the scholarly community to support responsible research assessment for hiring, promotion, tenure, and funding decisions. DORA is part of a global movement that aims to reform research assessment equitably, including by expanding the definition of what gets assessed and changing the way assessment takes place. Reducing emphasis on flawed proxy measures of quality such as the Impact Factor or h-index, broadening the type of work that is rewarded, and challenging assumptions about quality and excellence are critical facets of the movement towards responsible research assessment. However, these core concepts do not exist in a vacuum (see Venn diagram by Hatch, Barbour and Curry).

The concepts of (i) research assessment reform, (ii) open scholarship, and (iii) equality and inclusion cannot be treated separately. They interact strongly and in many complex ways – presented only in broad outline here – and are converging to create a research culture that is centred on people (practitioners and beneficiaries) and on values that embody the highest aspirations of a diverse world.

Responsible research assessment is intricately linked with open scholarship, and also with equity and inclusion initiatives. Biases and assumptions about research quality can determine who is assessed and how they are assessed, and decoupling research products from journal prestige is an important step to address these biases. 

Greater transparency and accessibility enables the recognition of a broader range of scholarly outputs (including datasets, protocols, and software). In alignment with the aims of open scholarship, DORA seeks to address the “publish or perish” culture by recognizing and rewarding transparency, rigor, and reproducibility. Ultimately, enabling and rewarding rigor and transparency serve to foster trust both within academia and with the broader public.

The intersection between these movements is apparent in many policies and practices being adopted at academic institutions around the world. Reformscape, a database of research assessment reform at academic institutions, contains over twenty examples of how institutions are incorporating aspects of open scholarship into their hiring, promotion, and tenure practices. Many of DORA’s in-depth case studies of institutional research assessment reform include mention of the institution codifying open scholarship into their practices.

¹The term “open scholarship” will be used throughout this piece for consistency. Different organizations also use the terms “open research” and “open science” to describe broad policies that encourage openness, though all three terms generally share a holistic focus on fostering a culture of openness, transparency, and accessibility. DORA uses “open scholarship” to better encapsulate all scholarly disciplines.

Fostering public trust through open scholarship requires a systems approach

A critical part of research culture is trust, and there are many ways to build it. Traditional peer-reviewed journals emphasize building trust by invoking expert opinion. Open scholarship emphasizes building trust through transparency: making all stages of the research process, from conception through data collection, analysis, and review, visible to all.

The two approaches are not mutually exclusive. Peer-reviewed journals have created policies to promote openness, and several forms of open scholarship have sought ways to solicit expert opinions. However, peer review has served as the main signal of trustworthiness for so long that it can be difficult to convince researchers that passing peer review is not necessarily a definitive signal that an article is trustworthy. Consequently, many remain unconvinced about the value of sharing research ahead of peer review, concerned that removing that vetting would open the gates to flawed research and increase the risk of misinformation.

In practice, a couple of studies (here and here) have suggested that the differences between peer-reviewed journal articles and preprints can be minimal. Preprints have gained increased acceptance from researchers, who are posting more preprints, and from reporters, who are writing more stories based on preprints (although more work needs to be done to ensure that disclaimers about the lack of peer review are always added).

Understanding which scholarly communications can be viewed as trustworthy was, is, and always will be a complex task for experts and non-experts alike. Experts are expected to develop this judgment through advanced training and experience. Non-experts might benefit from increased media literacy, a subject that is taught to less than half of US high school students.

Call to Action

Reforming research assessment requires new policies and practices that embrace diverse scholarly outputs, reduce the emphasis on journal prestige as an implicit indicator of research quality, and evaluate research based on its intrinsic value. As we expand the type of work that we recognize to include preprints and other “non-traditional” outputs, we can foster trust in these new outputs by 1) recognizing and rewarding transparency, rigor, and high-quality review, and 2) developing resources to foster and support responsible preprint review. For example, a growing number of bibliographic indexers are starting to index preprints (e.g., Europe PMC, PubMed Central), and there are several efforts to index and link reviews to preprints (e.g., Sciety, COAR Notify). There are also a number of efforts underway to develop a range of consistent trust signals and markers. Alongside these efforts lies the crucial task of consistently educating and communicating about innovations in publishing and open scholarship practices to cultivate public trust and literacy.

Change on this scale is not immediate, nor should it be. DORA has long advocated for the strategy of iteratively fine-tuning policies over time using data and input from their target communities. As more and more research funders and institutions test new ways of rewarding researchers for their open scholarship practices, it is important to seize the opportunity for careful review, refinement, and open dialogue on what works and what doesn’t.

Acknowledgements: The authors would like to thank DORA co-Chairs, Ginny Barbour and Kelly Cobey, and DORA Vice-Chair, Rebecca Lawrence for their editorial input on the piece.

