False flags of trust

We recently published our first external article in which we proposed a range of trust signals that could replace outdated and poor proxies. You can read that article here. But why do we need to change the current proxies?

False flags of trust

The current markers of trust in a given scientific article are based on proxies that have no bearing on the individual article, on name recognition, or on a deeply flawed peer review system.

These “false flags” of trust ultimately lead to a highly unhealthy research culture, where researchers are assessed on things that they have little control over. These false flags also cause distrust in research when they fail to meet the (incorrect) expectations placed upon them.

What are these false flags?

A range of poor proxies is currently used to signal “trust” in any given article.

Journal name/brand

Many researchers and journalists still assume that if an article is published in a prestigious or well-known journal, its quality and trustworthiness are guaranteed. However, this overlooks the actual content and rigour of any individual study. Brand recognition can be easily manipulated or may reflect historical prestige rather than current quality. There have also been many cases of “reputable” brands ignoring their own policies to publish headline-grabbing or sensationalist research.

Editor name

The presence or reputation of editors is often presented as a mark of credibility, but few readers have insight into how editors are selected, their expertise, or their potential biases. Often, editor names are not even provided transparently, which undermines their usefulness as a trust signal.

Impact factor

This metric has long been criticised as a flawed proxy. The impact factor reflects the average number of citations received in a given year by articles a journal published in the previous two years, not the quality or reliability of any single paper. Journals can also influence impact factors through editorial policies, for example by publishing more highly cited review articles, leading to distortion and unfair emphasis on this number.

Worse still, a single article can have a dramatic impact on a journal’s impact factor, as the sketch below illustrates.
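To make the distortion concrete, here is a minimal sketch of the standard two-year calculation, using entirely hypothetical citation counts. Adding a single heavily cited article to an otherwise typical journal multiplies the headline number many times over:

```python
# A minimal sketch of the two-year impact factor calculation.
# All citation counts are hypothetical, chosen only to illustrate
# how one outlier article can dominate the average.

def impact_factor(citation_counts):
    """Citations received this year by articles published in the
    previous two years, divided by the number of citable items."""
    return sum(citation_counts) / len(citation_counts)

typical_articles = [2, 0, 3, 1, 2, 0, 1, 4, 2, 1]
print(round(impact_factor(typical_articles), 2))   # 1.6

# The same journal after one heavily cited paper joins the window.
with_outlier = typical_articles + [500]
print(round(impact_factor(with_outlier), 2))       # 46.91
```

The average says nothing about the other ten articles, which is exactly why it fails as a per-article trust signal.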

H-index

Although not a metric for an individual output, the H-index is often used to measure an individual researcher’s impact: the largest number h such that the researcher has h publications cited at least h times each. However, this metric can be misleading when applied as a proxy for trust in individual articles. It favours quantity over quality and can be inflated by self-citations or citation circles. It only counts academic citations, ignoring uptake in textbooks, policy documents, and other impact beyond academia. Moreover, it does not account for the context or significance of the citations, making it a poor indicator of the true trustworthiness of specific research outputs.
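For reference, the calculation itself is simple, which is part of the problem. Here is a minimal sketch, again with hypothetical citation counts, showing how a researcher with many modestly cited papers outscores one whose few papers were far more influential:

```python
# A minimal sketch of the H-index: the largest h such that the
# researcher has at least h papers cited at least h times each.
# All citation counts are hypothetical.

def h_index(citation_counts):
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

prolific = [12, 11, 10, 10, 9, 9, 8, 8, 7, 7]  # many modest papers
landmark = [900, 450, 3, 2, 1]                  # a few landmark papers

print(h_index(prolific))   # 8
print(h_index(landmark))   # 3
```

Nothing in the calculation distinguishes a foundational result from routine self-citation, which is why the number transfers so poorly to judging any single article.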

Peer review

Traditionally seen as the gold standard for establishing trust in research, the peer review process is neither standardised nor immune to bias, conflicts of interest, or superficial assessment.

Articles can pass peer review and still contain serious errors or misinterpretations. With the increasing use of AI, this is becoming ever more apparent (the now-infamous AI-generated rat-testicles figure being one example). The problem is not the egregiously poor use of AI itself but the assumption that peer review will protect the literature from it: peer review was never designed to detect fraud, or even gross defects.

Emerging peer review reforms seek to improve transparency, but current peer review remains an imperfect proxy for trust. Even among the reform efforts, most focus on making the process more efficient; none substantially improve peer review as a trust signal.

Why change matters

Clinging to these false flags perpetuates a system where researchers are rewarded or penalised based on factors disconnected from the actual quality and reproducibility of their work. This can lead to exaggerated claims, publication bias, and even misconduct, undermining public trust in science.

Moving beyond these proxies opens the door for a research culture that values openness, rigour, and accountability. This would better serve the scientific community and society by highlighting the true merits of research and fostering innovation grounded in trustworthiness.

What could replace these proxies?

In our article, we proposed several alternative trust signals designed to focus on the article itself rather than on external, often irrelevant indicators. You can read more in the article and on our website, as this is one of our dedicated projects.

Adopting such signals can help shift the focus from where research is published to what the research actually achieves, increasing reproducibility and ultimately trust in scientific outputs.

We’re currently putting together a funding proposal, “Beyond peer review”, for an in-person meeting to further the discussion of trust indicators and a vision of a more robust system for determining an article’s reliability.