Tag: science

  • Researcher to Reader; thoughts on the conference

    Below are my thoughts and takeaways from the recent Researcher to Reader conference (that’d be the one in 2026, for future readers of this post).


    This was my first time attending the Researcher to Reader Conference and, overall, it was definitely a good conference and one I’d happily return to. The workshops in particular were a welcome break from traditional conference formats. One of my biggest issues with most conferences is that there are too many of the same passive talks, too little discussion, and generally the same people and talking points. The workshops brought much more interactivity and got people talking. It was refreshing to be in sessions that felt participatory rather than performative – although of course it remains to be seen what the outcomes will be.

    The first day was permeated with discussions on peer review and AI (though I was in an AI workshop for much of the day). But, thanks to the opening keynote, a third element – trust – ran through everything as well. However, one of my biggest takeaways was the tacit acknowledgment of just how bad things are and how current efforts are not well placed to realistically solve these issues.

    On day two I stepped in for (the fantastic) Tony Alves to moderate a panel on peer review innovations. A panel on the first day had already laid out some of the key issues with peer review. However, much of that conversation seemed stuck in well-worn grooves, rehearsing familiar problems without offering genuinely new ways forward. Given how much peer review has been debated, critiqued, and “reimagined” over the past decade, it was disappointing to see how cautious and even outdated the framing still was. This was not helped by one of the panellists failing to fully address many of the questions they were asked. That panel focussed on the issues in peer review – largely that reviewers, particularly good reviewers, are increasingly difficult to find. Yet despite the acknowledgement of these issues, there seemed to be a reluctance to try potential solutions.

    Our own panel, by comparison, sparked a more candid discussion of potential solutions and surfaced several important tensions. One recurring theme was just how limited awareness remains among researchers of the various initiatives and efforts currently underway around preprint peer review. We also explored why uptake has been so low, with confusion among authors and a lack of meaningful buy-in or collaboration from publishers emerging as key barriers. There are too many “innovations” and new attempts to “improve” publishing – often without the necessary efforts to raise awareness or secure buy-in. More broadly, it became clear that stakeholders are still not working together in any coherent way; instead, responsibility is often deflected, with blame passed between groups rather than owned collectively. That said, there was a tentative but notable appetite among a small number of participants to try to build a “coalition of the willing” – although more on that a little later.

    Speaking of researchers: fatigue was a recurring undercurrent throughout the conference. Researchers are tired (for very good reasons) not just of reviewing, but of navigating an ever-expanding landscape of initiatives, frameworks, and acronyms. For example, PRC (Publish, Review, Curate) may make sense internally, but to many researchers it feels like yet more noise. This proliferation is actively damaging open science efforts, not supporting them. Did eLife’s switch to an exclusive focus on preprints really need a whole new effort and movement? The answer is a firm no. The PRC coalition diverts resources, attention and vital funds away from the very thing it depends on – preprints. This also highlights a much bigger problem, one especially prominent in the open science movement: the ever-growing disconnect from both the average researcher and genuine change. This is partly what destroyed the open access movement, and preprinting currently sits on a precipice of its own¹.

    Far too much of the discussion about what researchers want comes from people who are not researchers.

    As with all conferences, the real value came in the discussions and one-to-one conversations. A recurring theme was the widely acknowledged need for a stronger coalition across stakeholders: publishers, funders, infrastructure providers, and researchers themselves. Everyone seems to agree that fragmentation is a problem. And yet, there remains enormous resistance to truly coming together in meaningful ways.

    Part of the issue, from my view, is leadership, or rather, the lack of it. Reform initiatives (and “coalitions”) in this space have a habit of failing because they are managed like short-term projects, led by programme or project managers, when what’s actually needed is long-term, values-driven leadership².

    Coordination without vision, authenticity and trust doesn’t get us very far.

    Ultimately, a lot of these challenges come back to incentives and agendas. Too many individuals and organisations are still primarily focused on pushing their own priorities, even when they nominally support shared goals like openness and transparency. And that is not just traditional publishers; it also includes many open science efforts. Until we get better at aligning those agendas, or at least acknowledging them honestly, progress will continue to be slower and messier than it needs to be.


    On a more personal note, after the past few months, the conference strongly reinforced much of what I’ve been saying and trying to raise awareness of – in spite of the difficulties this has caused me within the preprint space. It also reminded me that I’m very much at the forefront of the key issues – strategically and in thought leadership.

    This also showed me that maybe I should start looking for roles with publishers – they have some fantastic people working for them already but they’d definitely benefit from my expertise in open science and community/relationship building – just in case any are reading this!


    ¹ I’m in the process of writing a few op-eds on the topic of open science efforts and how they’re losing trust and potentially undermining the wider movement.

    ² I’m also writing more on this topic.

  • How to get more involved with preprints and open science

    How to get more involved with preprints and open science

    Preprints are revolutionising the way we share and communicate scientific findings. They have numerous benefits for all stakeholders, but particularly for early career researchers (ECRs). If you are an ECR, you need to be posting preprints. If you train or are responsible for ECRs, then you need to make sure you facilitate the preprinting of their work.

    But how can you get more involved in this fast-moving world?

    Use preprints

    OK, I’ve just mentioned this one, but not only should you be posting preprints yourself, you should also be reading and citing preprints in your field. This will keep you 1–2 years ahead of those who only read published papers. When you do publish, choose open access journals and those that are friendlier to changing the broken system.

    Host/take part in preprint journal clubs

    Journal clubs can be useful and are often a staple of “training” for ECRs within a bioscience department. Stop picking CNS (Cell/Nature/Science) papers because they’re flashy and start using preprints to be at the true cutting edge of your field. To make journal clubs even more useful, spend a little extra effort writing up the discussion as a comment for the authors. This way your journal club helps to advance preprint use and also advances science by helping authors refine and improve their work.

    Share data and methods openly

    This ties into using preprints: if you have a dataset or useful methods, upload these to repositories when you post the preprint. Sharing code openly can even lead to new collaborations and significantly improve your own work — we found this with our COVID papers, where sharing openly led to a collaboration on the first paper, and posting a preprint led to our second paper being co-published, again making the conclusions much stronger.

    Educate yourself (and others)

    It’s surprising how many academics (including “esteemed” professors) just don’t understand the history of our publishing system or where peer review comes from. That history is vital to understanding the problems within the system and why it needs to change. There’s a lot of survivorship bias in academia, and looking back can help us move forwards.

    Follow open science leaders

    Some of the brilliant people leading the change towards open science and preprint use are very active on social media. On Bluesky, you can follow the Preprints and Metascience feeds.

    Get involved with communities

    This is perhaps one of the best ways of getting more involved in preprints and open science.

    PREreview —  community and training focussed on increasing equity in preprint peer review. Recommended platform for uploading community or journal club reviews of preprints.

    preLights — preprint highlighting that allows you to write about interesting preprints and collaborate with others in the community. An excellent initial (active) step into the world of preprints.

    Preprints in Motion — podcast focussed specifically on highlighting preprints and the ECRs behind them in addition to discussing the wider issues in academia. Contact preprintsinmotion@gmail.com

    Talk to co-workers about preprints

    Now you’re using preprints, writing about them, or getting involved with the communities above. Get out there and tell everyone why they should be preprinting and making science a better place for all! Spread the gospel!

    If you’ve posted an interesting preprint or read one recently, you can also highlight it to Preprints in Motion for a full podcast episode focussed on the preprint and the ECRs behind it.

    Attend open science events

    There are many open science events you could attend such as conferences and workshops from FORCE11 and various universities (e.g. Sheffield University’s OpenFest).

    Write about preprints

    This may be through preLights, but it can also take the form of more casual or opinion pieces in science magazines and journals. I’d strongly recommend preLights because not only is it a great community, it also helps establish your own name in the preprint sphere.

    Start your own initiative

    We’re always happy to discuss ideas and provide support for an exciting new initiative led by you!

  • In defence of preprints

    In defence of preprints

    You may have come across this remarkably ill-informed blog post. Given that this podcast advocates for preprints in the life sciences, we are compelled to issue a direct response to the blog post. Leadership does not come from remaining silent.

    You can listen to the episode discussing the blog post here; below is a clearer, evidence-backed rebuttal.

    Preprints serve scientists and the pro-science movements

    The clickbait title of the blog post is designed to be provocative. However, it also immediately highlights the author’s lack of understanding of preprints. Preprints are most beneficial to early career researchers, who can document and evidence their work when they are ready for it to be shared rather than waiting 6–12 months on an opaque peer review process. During public health emergencies, preprints quite literally save lives, with around 40% of early research shared first as a preprint during both the COVID-19 pandemic and the recent mpox outbreak. This is far from serving an “anti-science agenda” as claimed by the blog post author.

    The benefits of preprints are well known by now, so I won’t retread that ground here. Instead, I wish to focus on the blog post and its multitude of problems. Before I do, however, it is worth highlighting that the author left academia to create a new journal (the Stacks Journal) and has a huge bias in favour of traditional peer review. Surprisingly for someone involved in the science communication space, the author comes across as highly uninformed on open science and the history of publishing and peer review. This is, perhaps, due to the manner in which the blog post was written rather than a genuine lack of understanding, but it is concerning regardless. The blog post fails to provide any argument that preprints are “serv[ing] the anti-science agenda” and instead offers a few misguided and incorrect opinions. Accusations and inferences are made without any evidence or support.

    The blog post presents a highly outdated view that looks backwards rather than at the current moment, never mind the future.

    Anyone can post anything as a preprint

    The author claims that “nearly anyone can post an official-looking preprint” and that “preprints are like blog posts”. In reality, most reputable preprint servers have basic screening processes in place. These reduce the sharing of pseudoscience, poor quality studies and other non-science outputs, and flag controversial work that may require closer examination through a process such as peer review. Preprints are most often (although not always, depending on the server) full research articles with materials, methods and results. One point worth highlighting is that authors are often still overly cautious and regularly fail to include datasets or supplementary files, something I hope will change sooner rather than later. Nonetheless, these screening processes do prevent “anyone posting anything” on preprint servers.

    This may be a good point to define “preprint”, as the blog post author does not seem to understand what preprints are. A preprint is a manuscript shared online by the authors prior to journal-organised peer review. Preprints are citable, have a DOI, are permanent records, may be peer reviewed, and are shared when authors believe the work is ready for public consumption. They are very, very rarely preliminary work; the majority of preprints are complete scientific articles.

    The blog post itself fails even the most basic standards for ethics and integrity in scientific writing. If this is the standard for a blog post, then the author makes the point himself: preprints really are nothing like blog posts (they’re vastly superior).

    Preprints are comparable to the published literature

    A central tenet of the author’s argument is that preprints are poor quality. This is a persistent myth and a common argument from people who do not understand preprints, and there is a growing body of evidence refuting it. Indeed, I have authored such work myself – work that some Scholarly Kitchen (SK) associates found acceptable to publicly attack, even going so far as to email the editor and journalists telling them it was fatally flawed rubbish (it actually stood up to intense scrutiny, but this explains some personal bias I have against SK and reveals how some of those involved behave in relation to evidence they don’t like)!

    This growing body of evidence, from multiple independent groups using a variety of different methods and approaches, consistently draws the same conclusion: preprints are comparable to the peer-reviewed literature. When preprints do undergo changes, these are most often limited. In simple terms, peer review does indeed help to make an article better, but only in a very limited manner – the improvement amounts to around 5%. For a system that can cost $4 billion globally and delay scientific progress by an average of 6 months, 5% may not be worthwhile.

    preprints are comparable to the peer-reviewed literature

    Given this, it would be egregiously anti-scientific to suggest that all, or even most, preprints are low quality or nothing more than glorified “blog posts…[on] social media platforms”. One valid question does remain: what about the 30% of preprints that are never published? I have an ongoing project to address this question, but any preprint expert will likely answer it in the same way for now. There are many reasons why a preprint may not be published. It may report negative data or small, one-off datasets that are difficult to publish and often not worthwhile for authors to pursue, despite their scientific value. The preprint may be the final intended destination of the work (we interviewed one such author previously), or the authors may not be able to afford APCs at journals. None of these reasons mean that the preprints are low quality or unreliable anti-science. I will agree that there are low-quality, unreliable preprints, such as this one (which was withdrawn within 48 hours – quicker than any journal would act). However, the same applies equally to the peer-reviewed literature, which is in many ways considerably more dangerous.

    The author also suggests that reporting on preprints is “alarming”. However, data shows that journalists have adapted well to preprints and generally report on them in an ethical, appropriate manner – and with greater standards than the Scholarly Kitchen possesses.

    Peer review does not protect the scientific literature nor is it designed to

    “For decades, scientists have relied on peer review to ensure scientific knowledge is built on a foundation of rigor and credibility. However, preprints are adding to the crumbling of that foundation”. The very first line of the blog post asserts that peer review, and peer review alone, provides the foundation of rigour and credibility in science. Yet peer review has only been commonplace since the 1970s. I’m not too sure what the author makes of all the scientific breakthroughs that occurred before then, but one can infer from the blog post that we should perhaps not trust papers such as Watson & Crick’s description of the DNA double helix, or much of Einstein’s work (not peer-reviewed, much to the surprise of many who haven’t studied the history of peer review). More fundamentally, this highlights a common misunderstanding of peer review: that it is meant to protect the literature in some way by acting as a quality control (QC) step.

    One common problem fuelling the anti-science agenda is the veneer of trust that peer review (falsely) offers.

    Peer review is not designed to detect fraud, a fundamental element of protecting the literature as part of QC. Indeed, there have been some very high-profile examples of fraud in some of the biggest and most “impactful” journals, including the Surgisphere scandal. That work was published not because peer review failed, but because peer review was never designed to detect fraud in the first place. This is further illustrated by the work of expert sleuths (forensic scientometrics), who regularly detect manipulated data, spliced gels and other abnormalities that indicate a particular result is less than trustworthy.

    Peer review is also consistently poor at identifying gross defects in papers, as evidenced by numerous studies. This is probably not surprising given the almost complete lack of training in assessing other people’s work or in identifying defects. Add the fact that reviewers are overburdened, with less time to dedicate to review activities, and the likelihood of even a good, detailed reviewer missing gross defects increases. Peer reviewers are not forensically examining manuscripts, paid or not. Some evidence suggests that paying peer reviewers slightly improves turnaround times, but there is no evidence that paid reviewers spend longer assessing the manuscript or put more effort into the process. Indeed, the evidence that does exist shows that paid-for reviews are not significantly different from unpaid ones.

    Together, these points effectively negate the protective quality control function that many people falsely believe peer review is designed to perform. This does not render peer review pointless or something to avoid entirely. Peer review does help authors improve their work (as discussed), but collectively we need to better explain its limitations and rely less on it as the lone bastion of rigour in science.

    One common problem fuelling the anti-science agenda is the veneer of trust that peer review (falsely) offers, and the author makes no real effort to illuminate this point. It is because of this misuse and misunderstanding of peer review that a greatly improved system of trust for research is needed.

    A constellation of trust signals

    The author’s suggestion that trust should come from a deeply flawed peer review process is short-sighted, failing to account for an already changing landscape. What is needed is a range of trust signals that, when taken together, can provide a sense of trust in an individual research output. Peer review may very well be one of those signals, but it should not be the only one. Ideally, these trust signals would be both static (reflecting fundamental qualities of the work) and dynamic (updating as new evaluations emerge). They would be applicable to a broad range of outputs and extendable to researcher assessment. A system of diverse and dynamic trust signals accompanying preprints would offer a multifaceted evaluation framework capable of evolving over time. In the 21st century, science publishing can no longer communicate findings solely to other scientists. Accordingly, the metrics associated with publishing must reflect reality, whilst encouraging best practices and rigorous science.
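
    To make this concrete, here is a minimal sketch of how such a constellation might be modelled (in Python; the record structure, signal names and DOI are my own hypothetical illustration, not an existing schema or platform API):

        from dataclasses import dataclass, field
        from datetime import date

        @dataclass(frozen=True)
        class StaticSignal:
            """A fixed property of the work, set when the preprint is posted."""
            name: str    # e.g. "server_screened", "data_available" (hypothetical names)
            value: bool

        @dataclass(frozen=True)
        class DynamicSignal:
            """An evaluation that accumulates after posting."""
            name: str    # e.g. "community_review"
            source: str  # who issued the signal, e.g. "PREreview"
            issued: date

        @dataclass
        class PreprintTrustRecord:
            doi: str
            static: list[StaticSignal] = field(default_factory=list)
            dynamic: list[DynamicSignal] = field(default_factory=list)

            def summary(self) -> dict:
                """Collapse the constellation into a simple reader-facing view."""
                view = {s.name: s.value for s in self.static}
                view["independent_evaluations"] = len(self.dynamic)
                return view

        # A screened preprint with open data that later receives a community review.
        record = PreprintTrustRecord(doi="10.1101/example")  # hypothetical DOI
        record.static.extend([StaticSignal("server_screened", True),
                              StaticSignal("data_available", True)])
        record.dynamic.append(DynamicSignal("community_review", "PREreview", date(2024, 6, 1)))
        print(record.summary())
        # {'server_screened': True, 'data_available': True, 'independent_evaluations': 1}

    The point is not this particular structure, but that static and dynamic signals can coexist on one record and be summarised for readers – much as the bioRxiv/medRxiv panel discussed below already does.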

    Preprint feedback occurs privately

    One piece of evidence the blog post does cite is a study suggesting that only 7% of bioRxiv and medRxiv preprints receive comments. As the author is cherry-picking their citations and evidence, what they fail to communicate is that most feedback occurs privately. More recent data suggests that 60% of surveyed authors received feedback on their preprint, rising to 70% among authors who actively sought feedback. I would much prefer public feedback, but that is not the issue raised in the blog post. The blog also fails to account for any form of non-traditional feedback, such as that which occurs through social media channels or preprint review services such as PREreview or preLights. It is a confusing decision to focus on preprint commenting, a feature well known to be underutilised. On bioRxiv/medRxiv, there is a panel that brings together reviews and context for each preprint (shown below). This panel is a great example of how trust signals could be surfaced and presented to readers, and of how wider, non-traditional mechanisms of feedback can be brought together.

    The bioRxiv/medRxiv Reviews and Context panel, presented alongside all posted preprints

    Scientific publishing is not sustainable in its current form. The system requires significant changes that must occur alongside changes to research(er) assessment, hiring and promotion practices, and broader science communication. Academia itself also needs to change, moving away from narrowly training new scientists whilst offering them no academic positions. The entire system is breaking, and what is very much not needed are more poorly informed efforts that distract from the immense amount of work required to create a better future for academia and for science.

    Disclaimer

    I have chosen to match the tone of the original blog post to provide the most appropriate response I can, rather than a dry, evidence-only article. I am not attacking the original author, but when somebody creates something they claim is built on ethics and integrity, it is reasonable to highlight the hypocrisy on display. Otherwise, I have simply used the author’s own words to frame my rebuttal. It is difficult to combat a blog post with such low scientific standards without my response positively dripping in sarcasm. Whilst I would hope the original author would reflect, I have not seen any evidence to this effect and it is most likely a lost cause. Indeed, the blog post has raised significant concerns around the author’s Stacks Journal and the potential lack of editorial and scientific standards employed there. Hopefully, the blog post has not caused any real damage and, based on some of the social media responses, it seems clear that people see the author as biased and poorly informed. A similar article that attempts to initiate a comparable discussion around peer review can be found here – it is a much more informed piece for those interested.

    Photo by J A Coates, “Sintra”