Tag: ai

  • Researcher to Reader; thoughts on the conference

    Below are my thoughts and takeaways from the recent Researcher to Reader conference (that’d be the one in 2026, for future readers of this post).


    This was my first time attending the Researcher to Reader Conference, and overall it was definitely a good conference and one I’d happily return to. The workshops in particular were a welcome break from the traditional conference format. One of my biggest issues with most conferences is that there are too many of the same passive talks, too little discussion, and generally the same people making the same points. The workshops brought much more interactivity and got people talking. It was refreshing to be in sessions that felt participatory rather than performative – although of course it remains to be seen what the outcomes will be.

    The first day was permeated with discussions on peer review and AI (though I was in an AI workshop for much of the day). Thanks to the opening keynote, a third element – trust – permeated too. However, one of the biggest takeaways for me was the tacit acknowledgment of just how bad things are, and how current efforts are not well placed to realistically solve these issues.

    I stepped in for (the fantastic) Tony Alves to moderate a panel on peer review innovations on day 2. A separate panel on the first day laid out some of the key issues with peer review – largely that reviewers are increasingly difficult to find, particularly good ones. However, much of that conversation seemed stuck in well-worn grooves, rehearsing familiar problems without offering genuinely new ways forward. Given how much peer review has been debated, critiqued, and “reimagined” over the past decade, it was disappointing to see how cautious and even outdated the framing still was. This was not helped by one of the panellists not fully addressing many of the questions that were asked. And despite the acknowledgement of the issues, there seemed a reluctance to try potential solutions.

    Our own panel, by comparison, sparked a more candid discussion of potential solutions and surfaced several important tensions. One recurring theme was just how limited awareness remains among researchers of the various initiatives and efforts currently underway around preprint peer review. We also explored why uptake has been so low, with confusion among authors and a lack of meaningful buy-in or collaboration from publishers emerging as key barriers. There are too many “innovations” and new attempts to “improve” publishing – often without the necessary efforts to raise awareness or buy-in. More broadly, it became clear that stakeholders are still not working together in any coherent way; instead, responsibility is often deflected, with blame passed between groups rather than owned collectively. That said, there was a tentative but notable sense of appetite among a small number of participants to try to build a “coalition of the willing” – although more on that a little later.

    Speaking of researchers: fatigue was a recurring undercurrent throughout the conference. Researchers are tired (for very good reasons) not just of reviewing, but of navigating an ever-expanding landscape of initiatives, frameworks, and acronyms. For example, PRC (“publish, review, curate”) may make sense internally, but to many researchers it feels like even more noise. This proliferation is actively damaging open science efforts, not supporting them. Did eLife switching to an exclusive focus on preprints really need a whole new effort and movement? The answer is certainly no. The PRC coalition diverts resources, attention and vital funds away from the very thing it depends on – preprints. This also highlights a much bigger problem, one especially prominent in the open science movement: the ever-growing disconnect between those driving change and the average researcher. This is partly what destroyed the open access movement, and preprinting currently sits on a precipice of its own1.

    There’s far too much discussion on what researchers want that is coming from people who are not researchers.

    As with all conferences, the real value came in the discussions and one-to-one conversations. A recurring theme was the widely acknowledged need for a stronger coalition across stakeholders: publishers, funders, infrastructure providers, and researchers themselves. Everyone seems to agree that fragmentation is a problem. And yet, there remains enormous resistance to truly coming together in meaningful ways.

    Part of the issue, from my view, is leadership, or rather, the lack of it. Reform initiatives (and “coalitions”) in this space have a habit of failing because they are managed like short-term projects, led by programme or project managers, when what’s actually needed is long-term, values-driven leadership2.

    Coordination without vision, authenticity and trust doesn’t get us very far.

    Ultimately, a lot of these challenges come back to incentives and agendas. Too many individuals and organisations are still primarily focused on pushing their own priorities, even when they nominally support shared goals like openness and transparency. And that is not just traditional publishers; it also includes many open science efforts. Until we get better at aligning those agendas, or at least acknowledging them honestly, progress will continue to be slower and messier than it needs to be.


    On a more personal note, after the past few months, the conference strongly reinforced much of what I’ve been saying and trying to raise awareness of – in spite of the difficulties this has caused me within the preprint space. This also reminded me that I’m very much at the forefront of the key issues – strategically and in thought-leadership.

    This also showed me that maybe I should start looking for roles with publishers – they have some fantastic people working for them already but they’d definitely benefit from my expertise in open science and community/relationship building – just in case any are reading this!


    1 I’m in the process of writing a few op-eds on the topic of open science efforts and how they’re losing trust and potentially undermining the wider movement.

    2 I’m also writing more on this topic.

  • AI & its role in peer review


    The rise of AI, or LLMs to be specific, has had a significant impact on scientific publishing – from hallucinated references to nonsensical images (that rat penis) to peer reviews devoid of human oversight. More recently, a small number of preprints hosted on arXiv were discovered to be concealing AI prompts in white text, designed to cause chatbots to produce positive reviews. Whilst this undermines the review process1, it also highlights that peer reviewers are using AI tools, whether the research community wants them to or not.
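    As a rough illustration of how extracted manuscript text might be screened for this kind of injection, here is a minimal sketch. The phrase list and function name are my own illustrative assumptions, not any server’s actual tooling, and a real screen would need a far richer pattern set:

```python
import re

# A few phrases typical of prompt-injection attempts aimed at AI reviewers.
# This list is illustrative only, not exhaustive.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"give a positive review",
    r"recommend accept(ance)?",
    r"do not highlight any (negatives|weaknesses)",
]

def flag_hidden_prompts(text: str) -> list[str]:
    """Return the suspicious patterns found in extracted manuscript text."""
    hits = []
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, text, flags=re.IGNORECASE):
            hits.append(pattern)
    return hits
```

    Note that this only works on the extracted text layer of a PDF; the “white text” trick is invisible to human readers but fully visible to any text extractor, which is how these cases were caught.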

    How can AI be used responsibly in peer review?

    If peer reviewers are going to use AI, then rather than attempting the futile task of preventing this, we should instead promote responsible and appropriate use. So what is appropriate use of AI in peer review?

    Finding reviewers

    AI can aid in finding reviewers and matching them to specific manuscripts. This could improve the efficiency of this aspect of the reviewing process, ease editors’ workloads and reduce the number of researchers rejecting review invitations. Some preprint review services are already utilising AI in this way, and this is likely to expand.
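    Production matching systems typically use embeddings over candidates’ publication records; as a toy, dependency-free sketch of the underlying idea (the names, profiles and scoring scheme here are entirely illustrative assumptions):

```python
from collections import Counter
import math

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def rank_reviewers(abstract: str,
                   reviewer_profiles: dict[str, str]) -> list[tuple[str, float]]:
    """Rank candidate reviewers by textual overlap with a manuscript abstract."""
    target = Counter(abstract.lower().split())
    scores = {
        name: cosine_similarity(target, Counter(profile.lower().split()))
        for name, profile in reviewer_profiles.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

    A real service would also filter for conflicts of interest and recent review load before inviting anyone from the ranked list.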

    Improving grammar in peer review reports

    One of the best uses of AI is to help non-native English speakers polish their reports, reducing the discrimination that such researchers face. This should only be done after the review has been written, by a human, and for grammar only. The review would still need to be checked for clarity afterwards too.

    Automated checks

    A great use for AI is to automate some tasks to reduce the burden on reviewers and editors; for example, detecting plagiarism. Another is surfacing any previous (public) reviews of a manuscript. This could help inform the current reviewers, or even be used by the editor to inform their decision.
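    As a minimal sketch of the kind of automated overlap check that underlies plagiarism detection – a toy Jaccard comparison over word shingles, not any production detector:

```python
def shingles(text: str, k: int = 5) -> set[tuple[str, ...]]:
    """All k-word shingles (overlapping word windows) of a text."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def overlap_score(doc_a: str, doc_b: str, k: int = 5) -> float:
    """Jaccard similarity of word shingles; high values suggest copied text."""
    a, b = shingles(doc_a, k), shingles(doc_b, k)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)
```

    Real tools compare against large corpora and handle paraphrasing, but the core signal – shared multi-word runs – is the same.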

    Summarising human-authored reviews

    Transparent peer review reports, posted alongside articles, are an important step in improving trust in the scientific process and in individual articles. However, the majority of these reports go unread and are not being utilised as well as they could be. AI could be used to provide summaries of human-authored peer review reports, thereby providing important context to readers.
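    In practice this summarisation would be done with an LLM; as a dependency-free stand-in that shows the shape of the task, here is a toy frequency-based extractive summariser (entirely illustrative, and no substitute for human-checked output):

```python
from collections import Counter
import re

def extractive_summary(report: str, n_sentences: int = 2) -> str:
    """Pick the n highest-scoring sentences, scored by total word frequency."""
    sentences = [s.strip()
                 for s in re.split(r"(?<=[.!?])\s+", report.strip())
                 if s.strip()]
    freq = Counter(re.findall(r"[a-z']+", report.lower()))

    def score(sentence: str) -> int:
        return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower()))

    top = sorted(sentences, key=score, reverse=True)[:n_sentences]
    # Re-emit the chosen sentences in their original order.
    return " ".join(s for s in sentences if s in top)
```

    Whatever tool is used, the summary should be clearly labelled as AI-generated and link back to the full human-authored reports.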

    Flagging preprints that may need peer review?

    A potential new use for such tools would be to flag preprints that may need human scrutiny. This could relieve stress in the system by moving away from reviewing every output – an approach that is already unsustainable and failing.

    How else could we responsibly use AI in peer review?

    Important considerations when using AI

    Whenever AI is used, and in whatever capacity, transparency is vital. Use should be declared, including which LLM was used and exactly how it was used. This is important as it provides some degree of confidence that the reviewer or service has used the AI responsibly and checked the results. Indeed, this human responsibility and oversight is important for all uses of AI; unchecked and unverified content is a large part of what makes AI use damaging. Another important consideration when dealing with manuscripts that are not public is confidentiality: these manuscripts should not be shared with any third party, which includes uploading them to any LLM.

    We’re currently creating best practices for AI use by preprint servers, preprint review services and authors of both. Want to collaborate with us on this? Please get in touch!


    1. AI is revealing just how flawed traditional peer review is, supporting the many studies that have investigated issues with peer review.


  • Preprints in training; An AI approach


    A recent preprint by Rong Ni & Ludo Waltman (1) surveyed authors on their experiences of and attitudes to preprinting. A number of interesting points come out of this data, but I wanted to focus on one in particular here: training. Figure 3 in the article looks at the sources of learning about preprints, and “in a training event” comes very low (less than 10% of respondents). Now, I’ve touched upon my thoughts on the current system of “training” grad students and PhDs before; it’s outdated and, frankly, not up to scratch. I’ve also tried for a few years, in the places I’ve worked, to replace journal clubs with a more useful and structured training program that would include various elements – one of which is the publication system: how it works, its history, where preprints fit in – followed by a useful journal club where a preprint is reviewed publicly.

    To that end, I have been part of a team in the past where we developed a training course focussed on preprints and publishing in the life and biomedical sciences (2). In hindsight, there are a few things I think that course is missing, but it’s a good set of materials that could be used, by anyone, immediately.

    However, I’ve also recently been playing around with ChatGPT and thought it might be interesting to see what kind of preprint-focussed course AI would design. I began by simply asking ChatGPT to “design a training course on the use of preprints in bioscience”. The initial “course” was rather simple, but not a bad starting point. I refined things with subsequent, more specific prompts and then compiled everything into a slightly more sensible order. Note that I’ve not edited any of the outputs, simply rearranged the order of the sections.

    What was eventually produced could be refined further and is by no means perfect, but it’s incredibly interesting to see just what AI can do, given the right inputs – and by right inputs, I mean I was on a train when doing this, so it’s somewhat rushed & messy on the human end (I know, if only we could remove those useless sacks of water)!

    So here we go, a (mostly) ChatGPT designed training course for preprints in the biosciences:

    Course Title: “Using Preprints in Bioscience: A Practical Guide”

    Course Overview: Preprints are an increasingly popular way for researchers to share their work quickly and get feedback from the scientific community. However, many bioscience researchers are not yet familiar with preprints and may be hesitant to use them. This course aims to provide an introduction to preprints, their benefits and drawbacks, and how to use them in bioscience research.

    Course Objectives:

    • Understand what preprints are and how they differ from peer-reviewed publications
    • Learn the benefits and drawbacks of using preprints in bioscience research
    • Understand the impact of preprints on scientific communication and publishing
    • Develop the skills necessary to upload and share preprints effectively
    • Learn how to navigate preprint servers and find relevant preprints
    • Understand the ethical considerations of preprints in bioscience research

    Course Format: The course will be delivered through a combination of lectures, case studies, and hands-on exercises. Participants will have the opportunity to upload and share their own preprints, as well as evaluate preprints from other researchers. The course will be offered in-person or online, with interactive elements such as group discussions and Q&A sessions.

    Target Audience: This course is designed for bioscience researchers at all career stages who want to learn about preprints and how to use them effectively. It is also suitable for librarians, journal editors, and other professionals involved in scientific communication and publishing.

    1. History of Peer Review and Publishing in the Life and Biosciences

    • Overview of the history of scientific publishing and peer review in the life and biosciences
    • Key milestones and turning points in the development of scientific publishing and peer review
    • Understanding how the history of peer review and publishing has shaped the current scientific publishing landscape
    • Introduction to the different models of scientific publishing in the life and biosciences, including traditional peer review, open access, and preprints

    This section will provide a historical perspective on scientific publishing and peer review in the life and biosciences, starting with the early scientific societies and the advent of the scientific journal in the 17th century. Participants will learn about key milestones and turning points in the development of scientific publishing and peer review, including the establishment of the first peer-reviewed scientific journal and the introduction of the double-blind peer review process. The section will also explore how the history of peer review and publishing has shaped the current scientific publishing landscape, and how the traditional model of peer review is being challenged by new models of publishing, such as open access and preprints. Participants will gain a deeper understanding of the different models of scientific publishing in the life and biosciences and their advantages and disadvantages.

    2. Introduction to Preprints, their Benefits and Limitations, Finding and Using Preprints, and Considerations when Reading or Citing Preprints

    • Definition of preprints and how they differ from peer-reviewed publications
    • Benefits of preprints including rapid dissemination, increased visibility, and feedback from the scientific community
    • Limitations of preprints including lack of peer review and potential for misinformation
    • Overview of preprint servers and how to find relevant preprints
    • Best practices for reading and citing preprints including evaluating the quality of the preprint and identifying any limitations or uncertainties
    • Importance of acknowledging preprints in publications and understanding their impact on future publication

    This section will provide a comprehensive introduction to preprints in bioscience. Participants will learn what preprints are and how they differ from peer-reviewed publications. The section will also discuss the benefits of preprints, including their rapid dissemination, increased visibility, and feedback from the scientific community, as well as the limitations of preprints, such as their lack of peer review and potential for misinformation. Participants will be introduced to preprint servers, such as bioRxiv and medRxiv, and learn how to find relevant preprints. Additionally, the section will cover best practices for reading and citing preprints, including evaluating the quality of the preprint and identifying any limitations or uncertainties. Participants will also learn about the importance of acknowledging preprints in publications and understanding their impact on future publication. This section will emphasize the importance of critical evaluation when reading or citing preprints, and provide participants with the tools to make informed decisions about using preprints in their research.

    3. Preprint Peer Review and Post-Publication Peer Review

    • Overview of preprint peer review and the different models used by preprint servers
    • Advantages and disadvantages of preprint peer review
    • Introduction to post-publication peer review and its role in scientific publishing
    • Comparison between preprint peer review and post-publication peer review
    • Discussion of the strengths and weaknesses of the post-publication peer review movement
    • Examples of successful post-publication peer review initiatives in bioscience

    This section will provide an in-depth analysis of preprint peer review and the different models used by preprint servers. Participants will learn about the advantages and disadvantages of preprint peer review, as well as the various ways that preprint servers are experimenting with new models of peer review. The section will also introduce the concept of post-publication peer review, which has gained popularity in recent years as a way to address some of the limitations of preprint peer review. Participants will learn about the different models of post-publication peer review and the role it can play in scientific publishing. The section will conclude with a discussion of the strengths and weaknesses of the post-publication peer review movement, as well as examples of successful initiatives in bioscience.

    4. Preprint Review Task and Public Sharing on a Preprint Platform

    • Introduction to preprint review and the concept of a preprint journal club
    • Overview of the task to review a preprint and share that review publicly on a preprint platform, such as bioRxiv
    • Best practices for preprint review, including evaluating the scientific soundness of the preprint and identifying any limitations or uncertainties
    • Discussion of the importance of open science and sharing research findings with the wider scientific community
    • Overview of the preprint sharing process on bioRxiv, including guidelines and recommendations for posting a preprint review
    • Tips for participating in and leading a preprint journal club

    This section will provide participants with an opportunity to engage in the preprint review process and share their review publicly on a preprint platform, such as bioRxiv, in the style of a journal club. Participants will learn about the concept of a preprint journal club and its role in promoting open science and sharing research findings with the wider scientific community. The section will provide an overview of the task to review a preprint and share that review publicly on a preprint platform, and provide best practices for preprint review, including evaluating the scientific soundness of the preprint and identifying any limitations or uncertainties. Participants will also gain an understanding of the importance of open science and sharing research findings with the wider scientific community.

    The section will cover the preprint sharing process on bioRxiv, including guidelines and recommendations for posting a preprint review, and provide tips for participating in and leading a preprint journal club. Participants will be encouraged to engage in constructive discussions and provide thoughtful feedback to their peers, fostering a sense of community and collaboration in the scientific publishing process. This section will provide participants with practical experience in the preprint review process and the opportunity to engage in open science practices that can benefit the wider scientific community.

    There you have it, a 4-week course on preprints that could be easily delivered as part of a structured and common training program to all 1st year grad/PhD students. What do you think, did the AI do a good job or are there glaring omissions?

    I might play around more with this when I have some real time to spend on it properly, expanding it to match my idea of what we should have for all student scientists.

    (1) https://dapp.orvium.io/deposits/6442f782b2b5580ba561406b/view

    (2) https://fischertad.github.io/Preprints_and_Publishing_in_Life_Biomedical_Sciences/
