The use of proxies to assess research quality and academics' track records is highly problematic. These proxies foster a poor, unfair culture that promotes bad behaviour and erodes trust.
The problem
The use of proxies for research quality and trustworthiness has eroded trust in research. By embracing them in researcher assessment, academia has encouraged some questionable research practices. Proxies such as Journal Impact Factors, H-indexes and journal names provide little to no useful information about individual research outputs or researchers. It is vital that we move towards a focus on content and individual outputs rather than proxies for quality or trustworthiness.
The solution
Thanks to the efforts of organisations such as DORA and CoARA, awareness of poor proxies is greater, and their use slightly lower, than it once was. However, these organisations have not done enough to directly help institutions adopt their principles and declarations. That’s where we fit in.
We design and encourage alternatives to poor proxies, work with institutions to introduce new workflows and systems of assessment, and raise awareness of the problems that poor proxies cause.

Our current efforts

Raising awareness of the problems with current poor proxies

Designing alternatives to current poor proxies

Creating new workflows and policies for assessing research and academics
Resources
Frequently Asked Questions
Why is the impact factor (IF) bad?
The impact factor (IF) is often criticised because it measures the average citations of a journal, not the quality or impact of individual articles or researchers. It is biased toward certain fields, can be easily manipulated, and uses a short citation window that favours quickly cited work. Moreover, it only accounts for citations in academic publications. Using IF to assess individual research distorts scientific priorities, encourages unethical practices, and overlooks important but less cited work. For these reasons, IF should not be used for evaluating research quality or academic performance.
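To see why a journal-level average says little about any single paper, it helps to write the metric out. Under the standard definition, a journal’s two-year impact factor for year $y$ is:

\[
\mathrm{IF}_y = \frac{C_y(y-1) + C_y(y-2)}{N_{y-1} + N_{y-2}}
\]

where $C_y(t)$ is the number of citations received in year $y$ by items the journal published in year $t$, and $N_t$ is the number of citable items published in year $t$. Because citation distributions are heavily skewed, this mean is typically driven by a small minority of highly cited papers, so it reveals almost nothing about the citations, let alone the quality, of a typical article in the journal.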
Why is the H-index bad?
The H-index is criticised because it favours senior researchers, penalises early-career scientists, and doesn’t account for differences in citation practices across fields. It ignores the number of co-authors and the significance of an individual’s contribution, can be inflated by self-citation, and treats all publication types and citations equally, regardless of quality or originality. It also doesn’t account for citations or impact beyond academic articles. As a result, it is a flawed and incomplete measure of individual research impact, and reliance on it leads to unfair or misleading evaluations.
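As a concrete illustration, here is a minimal sketch (in Python, with made-up citation counts) of how the H-index is computed. It shows how the metric compresses very different records into the same number, since citations beyond the h-th paper’s threshold simply do not count:

```python
def h_index(citations):
    """Largest h such that at least h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the bar
        else:
            break  # every later paper has even fewer citations
    return h

# Two very different records, identical h-index:
print(h_index([10, 9, 8, 7, 6]))        # 5
print(h_index([900, 5, 5, 5, 5, 0]))    # 5 -- the landmark paper adds nothing
```

Note that nothing in the computation weights field, author contribution, self-citation or citation quality, which is exactly the criticism above.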
Why can’t we rely on journal names and brands?
Relying on journal names and brands is problematic because a journal’s reputation or prestige does not guarantee the quality or significance of any specific article it publishes. This focus encourages valuing research by where it appears rather than by its content and merit, which distorts research assessment and can overlook valuable work in lesser-known journals. The emphasis on brand can also concentrate attention and resources on a small group of exclusive, expensive journals, reinforcing hierarchies and limiting diversity and innovation in scholarly communication. Moreover, even the most highly regarded journals have bent their own editorial policies to publish headline-grabbing papers.
How do we assess researchers without these proxies?
To assess research and researchers without relying on proxies such as impact factor or journal name, assessors should combine qualitative and quantitative methods. These include evaluating the content and significance of research outputs directly, using peer review, and considering broader impacts such as influence on policy, practice or public understanding. Alternative metrics (altmetrics) can capture online engagement, media coverage and citations in non-academic sources (see the sketch below), while case studies and evidence-based narratives allow a more nuanced assessment of real-world impact across contexts. This holistic approach supports fairer and more meaningful evaluation of research quality and contribution. Researcher assessment should also consider activities beyond publishing, such as teaching, leadership and other “service” activities.
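For example, article-level attention data can be pulled programmatically and fed into an evidence portfolio. The sketch below assumes Altmetric’s free v1 details endpoint (https://api.altmetric.com/v1/doi/{doi}); the URL, the response fields and the example DOI are assumptions made for illustration, not a vetted integration:

```python
import json
import urllib.error
import urllib.request

def fetch_altmetrics(doi):
    # Assumed public endpoint; returns 404 if the DOI has no recorded attention.
    url = "https://api.altmetric.com/v1/doi/" + doi
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

try:
    data = fetch_altmetrics("10.1000/xyz123")  # placeholder DOI from the DOI Handbook
    # Field names are assumptions; inspect the returned JSON before relying on them.
    print(data.get("score"), data.get("cited_by_posts_count"))
except urllib.error.HTTPError as err:
    print("No attention data recorded:", err.code)
```

Signals like these are inputs to a narrative, not scores to rank people by; the point is to ground claims of broader impact in evidence rather than in a journal’s brand.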

Join Us!
Join our community, sign up to our newsletter or collaborate with us

