Blog

  • A reflection on the CZI Open Science meeting 


    It may have been the final CZI Open Science meeting, but it also featured the launch of openRxiv! And that is a reason for great excitement around the future of preprints.

To start though I must say a huge thank you and congratulations to the CZI team. These are always excellent meetings, but this one really stood out for me. Between the fantastic mix of people and the openness of discussions, this year felt particularly productive. The resort in Carlsbad was beautiful and the team did such a great job of providing lots of space for discussions and networking. There were a few takeaways from the meeting and below I’ll cover what I found particularly interesting.

    AI

Definitely not a surprise. Just about every meeting or conference I’ve been to over the past two years has had a big dose of AI-related topics and discussions. CZI is pivoting towards a focus on 4 grand challenges, something that is very exciting for the wider space and has a big AI component. The virtual cell has enormous potential, and the focus on immunology and imaging (my background) is very exciting. To save those of you reading this from AI overload, I won’t say much more. But there were some interesting things on show. Perhaps the biggest takeaway for me was the new platform qed, designed to help manuscript authors identify any gaps in their claims; it is definitely one to watch. I’ve had a little play around and it does seem to work really quite well, and it is one of my follow-up discussions.

    OA concerns

This was the final CZI OS meeting, and that shaped a large number of the discussions that occurred. There is now some uncertainty around the future of open science and concern over who (if anyone) might step in to fill this funding void. There was also concern that some efforts in open science may be pushing too far and may have the effect of leading to more closed access.

One very interesting conversation was around the goal of “open”. Another attendee and I both felt that pursuing open as a goal has been a mistake and has led to “open for the sake of open”. This is also seen in some of the recent conversations around how preprints should be used – see the recent journalology newsletter for a poor take. The goal should be rigorous, good science. Openness and transparency play a key part in this, but they are not the only components. As such, when the goal becomes simply to make research open, an awful lot is lost. It has also, in part, led to an increasing disillusionment with open science. This really highlights the importance of ensuring that the end goal is the right one and of taking care with the words used. I believe that “open” became the goal, at least in part, because it is easy to measure. Rigorous science is harder to measure and assess, but we should not pursue things just because they are easy.

    Culture

    You will know by now that at Rippling Ideas, culture is a foundational focus. I’ve long said that trying to only change publishing behaviour or academic assessment independently isn’t the way forward. It was very good to see just how much this was acknowledged in the many discussions I had with people. There were even breakout sessions specifically focussed on culture change. 

One discussion I would like to elaborate on was in relation to the role of different stakeholders. There were suggestions that openRxiv should mandate CC-BY licences for bioRxiv and medRxiv (watch here to learn more about licences). This discussion included comments that funders have mandated preprints and that it was therefore “right” for openRxiv to mandate certain things too. I disagreed with this, as I believe that different stakeholders very much have different roles to play. Funders can and should mandate, whereas preprint servers should provide authors with choice and training/awareness. Indeed, a coalition of the various stakeholders working in unison to coordinate their different roles could have a huge impact compared to the current approaches.

    Mandates result in compliance, training and outreach result in lasting cultural change

    There are still many barriers to the wide adoption of the practices that the participants of this meeting would like to see. Given the significant overlap in interests, it does feel like more progress could be made in this space.   

    The future is openRxiv

    Following the CZI Open Science meeting, there was a day event specifically for openRxiv. Perhaps unsurprisingly, this was my favourite part of the week. Throughout, there was a lot of excitement about the potential of openRxiv and getting to hear about the approach and plans only furthered this. From my perspective, the team is taking an exceptionally well thought out and intentional approach to building this new organisation. The talk of embedding values – not just words – was particularly welcome and a very positive hint at the kind of organisation that openRxiv will be. For us, values (and adhering to them) are the key to ensuring that you make the right decisions. 

I won’t share what was discussed so as not to give anything away, but one of the comments made by an attendee is definitely worth highlighting. When discussing bioRxiv and medRxiv, a number of people highlighted that these servers were very well timed and that the pacing of the efforts from the (now) openRxiv team has been key to making progress whilst not moving too quickly and losing people. Indeed, this pacing is an often underdiscussed component of the success of preprinting in the life sciences. It definitely feels like now is the ideal time for an entity like openRxiv, which can best fill the gap in preprint advocacy that currently exists since the loss of other efforts (us aside, but we’re still unfunded).

A highlight was Richard Sever’s talk on the article of the future, in which he described preprints as a node within a constellation of linked objects: narrative, data deposited in repositories, archived code, verification badges, and any necessary protocols or preregistered research plans, all amid a cloud of review and evaluation elements.

    It’s also worth highlighting just how good an investment bioRxiv has been – particularly for CZI who are one of the biggest donors. This is very possibly the best investment in science this century. Preprints led to the saving of millions of lives during the COVID pandemic, have sped up the process of discovery and disrupted academia in a highly positive manner. And these changes are still only just getting underway. To me, preprints represent a turning point for academia; a chance to move towards a healthier culture, more trusted work and greater accessibility. 

    This is very possibly the best investment in science this century

This is not to say that the path forward is going to be easy. We’ve witnessed the withdrawal of significant funding from this space, the removal of key voices and the hijacking of once great efforts by people who place themselves above the goals we should be striving towards. There’s also a growing crossroads in preprinting, with one group seemingly wanting to simply replicate the system we have, with minor improvements, and another wanting real change. This is all occurring against the backdrop of traditional publishers, who have launched their own preprint servers in an attempt to control and limit this movement. However, if any team can achieve real change, it is the openRxiv folks. The fact that they can now expand, undertake more efforts in advocacy and play a central role in coalition building provides a strong advantage to the “right” direction forwards.

    Trust signals

Trust signals permeated both meetings. It was clear that there is now a desire to move away from peer review as the lone quality indicator and from metrics such as impact factor or h-index. This is something that both openRxiv and we at Rippling Ideas strongly support – indeed, we’ve both separately written about this.

What I particularly liked about this component was that openRxiv is perfectly positioned to encourage and widen this approach, which might just be sufficient to combat some of the (many) problems I see with the PRC efforts. It also highlights that the PRC movement is outdated and does not represent meaningful change.

There are an ever growing number of efforts to widen these signals of trust, and the next 5+ years are likely to see a whole range come and go. There is a definite need for some standardisation across these efforts and for tying them in with culture change and advocacy – this is how we give the best indicators the best chance of sticking. Indeed, this is something that we hope to focus on in the coming years.

    China?

One key topic that I felt was generally missing from the discussions was the role of China. China is now the largest producer of research in the world, and it’s long overdue that the West began to reach out and advocate for preprints and open science across that region. China is poised to cause significant disruption to the OA movement and science communication more broadly. As such, it was a somewhat strange omission.

    Me at the openRxiv celebration event, photo courtesy of CZI


    Overall, the meeting was filled with excellent discussions and left me feeling very excited about the direction of openRxiv and the future of preprints.


    See an overview from Curvenote for their experience of the openRxiv celebration event.

    Personal note

    I really needed this specific in-person gathering and the incredibly warm reception I received is one of the biggest reasons I care so deeply about this community – they’re very welcoming to new people (not that I’m new anymore) and passion/effort is seen and recognised. It’s also a space in which I feel like I can have a big impact. Given how this year has panned out for me, it was hugely heartwarming to be surrounded by supportive people and to see the reach I’ve had with my work so far. I also got the opportunity to make some new connections and meet some people I really should have already met.

    There was a lot of enthusiasm for Rippling Ideas with many people commenting on how needed such an effort is – sentiments that were also shared earlier this year at the metascience conference.

  • Open Science and Mental Health


    It’s world mental health day (10th October) and it feels like the perfect moment to discuss mental health in academia.

We all know that academia can be challenging, to say the least. Between endless deadlines, tough competition, and the constant pressure of “publish or perish”, it’s no wonder so many researchers struggle with stress and burnout – at rates of up to 3x the general population. The system often feels designed to test endurance rather than curiosity.

    But what if (some of) that pressure wasn’t inevitable? What if the way we share and talk about science could actually make the research experience kinder and more human?

That’s where preprints and open science come in.

    Sharing early, with you in charge

    One of the hardest parts of academic publishing is the waiting. You can spend months or years sending a paper from journal to journal, collecting rejections, and wondering if anyone will ever see your work. It’s demoralising and, in the worst cases, can destroy careers before they’ve even begun.

    Preprints change that. They let you share your research right away, before it’s formally published. You get to say when your work is ready for public dissemination, not editors or peer reviewers or any other gatekeeper – You. That simple act of putting your work out there can be a huge mental relief. You’re no longer stuck waiting, and you can finally point to something concrete that proves you have been productive.

    Feedback that feels supportive

Another perk of sharing early is the feedback. Instead of waiting for anonymous reviewers behind closed doors, you can get comments from the wider community, which are often faster and friendlier.

    The open science world has a growing culture of constructive feedback and collaboration. Knowing that people are engaging with your work, building on it, or even just appreciating it can counteract that sense of isolation so many researchers feel.

    Collaboration not competition

Open science also nudges the culture toward teamwork instead of rivalry. When people share their data, code, and preprints openly, it sends a clear message: science works better when we help each other.

    That’s a subtle but powerful shift. Instead of guarding results until publication, open practices make it easier to connect, learn, and collaborate. And that sense of community can be a serious mood booster. It also just so happens to be better for your career and science too!

    A lifeline for early-career researchers

    For early-career scientists, preprints can be a game changer. You don’t have to wait years to have something to show for your work; your preprint is your publication record. You can use it to demonstrate progress, attract collaborators, or even strengthen job and grant applications.

    It’s empowering to take control of your research story instead of waiting for the system to validate it. That confidence and autonomy can go a long way in protecting mental well-being.

    Normalising the messy

Finally, open science helps normalise the messy, imperfect reality of doing research. Sharing data, methods, and even null results makes it clear that science isn’t a string of flawless breakthroughs but rather trial, error, and (a lot of) persistence.

    When failure becomes part of the story, not something to hide, it takes away a lot of the shame and self-doubt that can eat away at researchers over time.

    Toward a kinder science

    Preprints and open science aren’t just about faster sharing or better reproducibility. They’re about changing the culture, making research more transparent, collaborative, and humane.

They remind us that science is a human endeavour, done by real people with real struggles. And when we make space for openness, we make space for well-being, too.

    And that is the kind of revolution academia needs most.

    Image credit: Photo by Total Shape on Unsplash

  • False flags of trust


    We recently published our first external article in which we proposed a range of trust signals that could replace outdated and poor proxies. You can read that article here. But why do we need to change the current proxies?

    False flags of trust

    The current markers of trust in a given scientific article are based on proxies that have no bearing on individual articles, on name recognition or on a deeply flawed peer review system.

    These “false flags” of trust ultimately lead to a highly unhealthy research culture, where researchers are assessed on things that they have little control over. These false flags also cause distrust in research when they fail to meet the (incorrect) expectations placed upon them.

    What are these false flags?

    There are a range of poor proxies currently used as signals of “trust” for any given article.

    Journal name/brand

    Many researchers and journalists still assume that if an article is published in a prestigious or well-known journal, its quality and trustworthiness are guaranteed. However, this overlooks the actual content and rigour of any individual study. Brand recognition can be easily manipulated or may reflect historical prestige rather than current quality. There have also been many cases of “reputable” brands ignoring their own policies to publish headline-grabbing or sensationalist research.

    Editor name

    The presence or reputation of editors is often implied as a mark of credibility, but few readers have insight into their selection process, expertise, or potential biases. Often, editor names are not even transparently provided, which undermines their usefulness as a trust signal.

    Impact factor

    This metric has long been criticised as a flawed proxy. The impact factor reflects the average number of citations a journal’s articles receive, not the quality or reliability of any single paper. Journals can also influence impact factors through editorial policies, leading to distortion and unfair emphasis on this number.

Worse still, single articles can have dramatic impacts on a journal’s impact factor

    H-index

    Although not a metric for an individual output, the H-index is often used to measure an individual researcher’s impact based on their number of publications and citations. However, this metric can be misleading when applied as a proxy for trust in individual articles. It favours quantity over quality and can be influenced by self-citations or citation circles. It only counts academic citations, ignoring impact from textbooks, policy documents or impact beyond academia. Moreover, it does not account for the context or significance of the citations, making it a poor indicator of the true trustworthiness of specific research outputs.

    Peer review

    Traditionally seen as the gold standard of providing trust in research, the peer review process is neither standardised nor immune to biases, conflicts of interest, or superficial assessments.

    Articles can pass peer review but still contain serious errors or misinterpretations. With the increasing use of AI, this is becoming ever more apparent (rat testicles as a now infamous example). The problem is not with egregiously poor use of AI but with the assumption that peer review should protect the literature. Peer review was never designed to detect fraud or even gross defects.

Emerging peer review reforms seek to improve transparency, but current peer review remains an imperfect proxy for trust. Even among the reform efforts, none substantially improve the trust aspect of peer review, with most focussing on improving efficiency in the process.

    Why change matters

Clinging to these false flags perpetuates a system where researchers are rewarded or penalised based on factors disconnected from the actual quality and reproducibility of their work. This can lead to exaggerated claims, publication bias, and even misconduct, undermining public trust in science.

    Moving beyond these proxies opens the door for a research culture that values openness, rigour, and accountability. This would better serve the scientific community and society by highlighting the true merits of research and fostering innovation grounded in trustworthiness.

    What could replace these proxies?

    In our article, we proposed several alternative trust signals designed to focus on the article itself rather than external, often irrelevant indicators. You can read more in the article and on our website as this is one of our dedicated projects.

    Adopting such signals can help shift the focus from where research is published to what the research actually achieves — increasing reproducibility and ultimately trust in scientific outputs.

We’re currently putting together a funding proposal, “Beyond peer review”, for an in-person meeting to further the discussion of trust indicators and a vision of a more robust system for determining an article’s reliability.

  • Join our board!


    We’re seeking expressions of interest to join our inaugural board.

    If you are passionate about our focus areas and values then come and join us! Help us establish ourselves and grow over the next 3 years.

    Expressions of interest are open until 31st October 2025.

    We are looking to appoint up to 12 board members, for 3 year terms (renewable, and the appointments may be staggered). The board will meet 3-4 times a year and there are opportunities to form committees with more regular meetings.

    Why join our board?

We are an organisation firmly rooted in an evidence- and values-driven approach. We know that people are the key to real change, and so this underpins everything we do.

    In joining our board you will:

    • Help to be at the forefront of improving academia, publishing and trust in research
    • Contribute to real and meaningful change – we don’t do empty words here
    • Have a global impact – we want to ensure that we actively include global voices within our community and actions
    • Shape the future of academia and publishing. Think beyond the current conversations
    • Establish a young and vibrant organisation with advocacy at its core, with significant opportunities to be actively involved

We are run by one of the world’s leading experts on preprints and academic culture, with a proven track record of leading organisations to success.

    What have we achieved so far?

    Over the past two months alone, we have produced best practices & guidelines for AI & preprints, a guide for preprints & new PIs, 4 explainer videos, 2 external articles, 3 blog posts, 2 talks and much more; including becoming a registered non-profit company in the UK.

Our next steps are to form a board, become an official charity and secure funding. We’re developing a draft strategy centred on our inextricably linked focus areas: preprints/open science, trust in research and academic culture. We believe that these areas must be changed together, not in isolation.

    What are we looking for?

We’re looking for people who have expertise in, and are passionate about, any of the following:

    • Preprints and open science (in the Life sciences)
    • Academic culture
    • Recognition and rewards in academia
    • Advocacy and change making
    • Trust in research (particularly moving beyond peer review)
    • Increasing equity and accessibility in academia and publishing
    • Science communication
    • Leadership
    • Community building and collaborative approaches

We’re also specifically looking for somebody with experience suited to the treasurer role who knows UK non-profit/charity law.

    Expression of interest form

    The expression of interest form should take 5-10 minutes to complete and includes the following questions (all 250 words max):

    How would you help us to achieve our vision?

How does your experience fit with our focus areas?

    What are the ways in which we can drive culture change towards greater adoption of preprints, expanded recognition, a healthier academic culture and improving trust in research? 

  • Best practices for preprint review services using AI

It’s Peer Review Week and this year’s theme is “rethinking peer review in the AI era”.

To contribute to this theme, we have designed a series of best practices for preprint review services using AI/LLMs. These best practices support the responsible and transparent use of such tools.

    Who is this for?

    The best practices are designed for preprint review services, preprint evaluation services and platforms hosting preprint reviews.

The guidelines do not object to the use of AI but help to ensure its responsible and transparent use. A separate set of best practices is focussed on the authors of preprint reviews/evaluations.

    Guiding Principles

    • Transparency: AI involvement in reviews must be openly disclosed to all parties, including details of tool capabilities, limitations, and approved uses.
    • Human Oversight: Qualified human reviewers must make all final decisions; AI tools should merely support, not replace, expert assessment.
    • Security and Confidentiality: All AI tools must protect data privacy, and third-party tools need to meet platform standards.
• Quality and Integrity: AI may assist with tasks such as pre-screening and summarising, but findings must be verified by humans and reviewer accountability is key.
    • Ethical and Unbiased Review: Review services must monitor and mitigate any bias introduced by AI and ensure equitable treatment of all authors.

    Implementation Guidelines

    • Establish public AI policies detailing acceptable tools and uses, as well as disclosure and auditing procedures.
    • Include metadata fields and confirmation checkboxes in review forms to ensure reviewers declare and own their use of AI tools.
    • Provide targeted training and guidance for reviewers and staff on responsible AI use and policy compliance.
    • Audit AI tools and review outputs regularly for compliance, bias, and misuse.

Check out our actionable steps for implementing these best practices.

  • Moving away from poor proxies; small steps you can take right now


The reliance on proxies such as Impact Factor, H-index and journal names damages academic culture and decreases trust in research. When you are assessed by such poor mechanisms, it can feel almost impossible to break free from the system. However, even small, gradual steps can make a big difference. Here are 5 smaller steps you can take to begin moving away from poor proxies.

    Don’t list IF or H-index on your CV or online profiles

    There’s a good chance that you’re already doing this as not everyone lists their H-index or journal impact factors on their CVs or online profiles. However, if you are currently listing those (or your institution does) then removing them is a simple step. In doing so, you help signal that these are not useful metrics and further discourage their use.

    To go a step further, you may want to replace these poor proxies with altmetric data, context-dependent citations or even a brief narrative of each output.

    Remove journal names from your CV and online profiles

A step beyond removing poor proxies is removing journal names from your CV and online profiles. I suspect almost all of us list these on our CVs and/or online profiles – I certainly have! The benefit of removing journal names is that you encourage people to focus on the actual content of your work rather than the venue in which it is published.

You can hyperlink directly to the articles or use DOIs in place of journal names.

    Don’t ask where a colleague is publishing

    Very much linked to the above, when a colleague has a manuscript ready for submission or has recently published, instead of asking where they published, ask about the work; what they published.

This again shifts the focus from poor proxies and is also a much better (and nicer) question to ask. Asking where someone will publish or has published feeds into the pressure and, at times, embarrassment of journal choice. Asking what someone has published opens up a conversation about their hard work and may help colleagues feel like their work is more recognised.

    Remove journal names from slides

Just as with listing journal names on CVs and online profiles, using them in presentations can send the message that where you publish is more important than what you did.

    For example, HHMI Janelia have a policy where speakers use identifiers like PMIDs instead of journal names in their presentations. Although a small change, this sends important signals to ECRs that the content is more important than the journal name.

    Encourage institutional change

Beyond individual change, you can also strongly encourage your institution to adopt various changes. A great starting point is to get your institution to sign up to declarations such as DORA and CoARA. Whilst simply signing up isn’t much of a step, it can be a great starting point for initiating discussions about research culture and poor proxies. In fact, your institution may already be signed up to these initiatives.

    If you sit on hiring committees or faculty committees, you can also encourage a move towards more appropriate assessments, such as narrative CVs or having applications remove journal names and other poor proxies.

    If you aren’t in a position to encourage such larger changes at your institution/department, then a small step you can take is to simply talk to students and postdocs about why these metrics are poor and the damage that they do to science. There are some slides you can use here.


  • 15 tips on making your lab a better place for ECRs


    Download as an infographic

    Academia can be an exceptional career path but it can also be tough, especially for Early Career Researchers (ECRs). Having navigated the world of academia myself, I know first-hand what a difference it makes when departments and labs consciously create a supportive environment for ECRs and the damage done when they don’t.

    Here are 15 actionable ways you can make your team a better, supportive, and more inspiring place for the next generation of researchers. Some are quick wins, others are going to require some work. All of them matter.

    1. Invite ECRs to Give Seminar Talks (And Cover the Costs, Upfront)

    ECRs need platforms to share their work and build their networks. Regularly invite them to present at seminar series. But don’t just offer them a slot; cover their travel (ideally up front!), accommodation, and, if possible, honoraria. Here’s a great example of how it can be done. Starting or changing your seminar format to one specifically for ECRs would be a huge boost and move us away from those same old boring conference talks.

    2. Prioritise Dedicated Training Over Journal Clubs

    Replace some traditional journal clubs with real training: data analysis workshops, grant writing bootcamps, public peer review sessions, presentation skill seminars, and meta-science discussions. ECRs are hungry for skills beyond technical lab work. Give them that and not only will your lab benefit from more rounded individuals but the ECRs will be better placed for alternative careers and academia.

    3. Actively Encourage Collaboration

    Encourage and facilitate collaborations within your lab, your department, and outside your institution. Networks are critical for scientific growth and career development.

    Insular labs stagnate; collaborative ones thrive

    4. Foster Public Speaking & Writing Opportunities

    Push your ECRs to give talks (both academic and public-facing) and to write (opinion pieces, reviews, blogs, preLights, etc.). This builds their CVs and confidence, and opens up alternative career avenues. Make introductions, offer edits, and celebrate their successes.

    5. Host Annual Career Development Days

    Dedicate one full day each year to reviewing CVs, prepping for job or fellowship interviews, and working on grant applications together. Share your insider knowledge of funding opportunities, especially those small pots of money for independent ECR projects.

    6. Relieve Unnecessary Pressure and Stand Up for ECRs

    You may not be able to pay everyone more, but you absolutely can use your position to shield ECRs from unreasonable demands, departmental politics, or unnecessary admin. Don’t forget what it felt like at their career stage. Challenge institutional inertia on their behalf.

    7. Treat ECRs as the Experts They Are

    Most ECRs are postdocs or just a lucky break away from a permanent post themselves. Treat them as equals and experts, not just as helping hands. If you need reminding of what the postdoc experience can be, read more.

    8. Listen—and Act—on What ECRs Say They Need

    When ECRs say they need equipment, software, or support, believe them. Prioritise these needs and advocate for necessary resources. Their ability to thrive depends on it and so does your lab’s output.

    9. Cover Upfront Costs Instead of Passing the Burden Down

    Departments (not individuals) should absorb upfront costs, or you risk excluding those who can’t afford a financial hit. As a PI, advocate fiercely for this at department level. This isn’t just about bringing academia in line with normal businesses, it’s about equity and inclusion.

    10. Make Your Space Safe and Inclusive, Always

Speak up against any abusive behaviours or toxic politics, even if (especially if) it means calling out another PI. Silence protects abusers. ECRs should never have to navigate harassment, discrimination, or power games just to do their job.

    11. Avoid Internal Competition and Value Individuality

Don’t pit ECRs against each other on the same projects. Take time to know each person’s goals and working style. Support those who want to stay in academia and those considering other paths, equally and openly. Avoid favouritism at all costs, as it will slow progress, damage your lab’s culture, and result in people leaving.

    12. Design and Update Individual Training Plans

Every ECR deserves a clear, personalised training plan, reviewed annually. Training is more than just learning a technique; it’s about building independence, transferable skills, and professional confidence.

    13. Take Leadership Courses Yourself

If you run a lab, commit to learning good management and leadership! Consider formal courses in leadership and people management (this should really be mandatory for all PIs and provided by institutions). You can’t know what you were never taught, and investing in yourself helps your team too. We’re developing such courses, so keep an eye out.

    14. Preprint Your Lab’s Papers

    Timely outputs are critical for ECRs applying for jobs, fellowships, and grants. Preprints can make all the difference, saving 6+ months. Make preprinting your default and encourage your team to get their work out there when it’s ready.

    15. Model Integrity and Open Science

    Lead by example: be transparent in your research, foster a culture of sharing, and have regular discussions about research integrity and communication. How you behave sets the standards for those you train and the next generation of PIs.

    Bonus: Small Changes Really Do Matter

It’s easy to think that improving life for ECRs in your lab or department can’t change the system. It absolutely can. For those ECRs in a genuinely good environment, it’s transformative, and your example sets new expectations for others. Change is slow, but it always starts somewhere.

If you’re a PI, lab head, or department leader, you might already be doing some of these things. If not, now’s the time to start. It matters more than you might realise: for your ECRs, your science, and the future of academia.

    If you found these suggestions useful, please share, and let’s build a better, kinder academic world together.

  • The Open Access Rainbow


    Here is a quick and simple guide to the “rainbow” of open access terms.

    Bronze

    Freely accessible journal articles on publishers’ servers, but without clear details on reuse

    Examples

    • Archives of subscription journals

    Limitations

    Without clear licences, article reuse is highly limited

    Gold

Articles published for the first time in open access journals. Publication costs are covered by authors, usually in the form of article processing charges (APCs).

    Examples

    • PLOS ONE
    • Frontiers Journals

    Limitations

    Inequitable APC costs and the propagation of a highly damaging business model

    Hybrid

Subscription-based journals that offer authors the option of paying a fee (an APC) to make their individual articles freely available online under an open access licence.

    Examples

    • There’s a long list here

    Limitations

    Institutions are often charged for APCs and subscriptions, resulting in them paying twice

    Diamond

    A publication is free of charge both for readers and for authors.

    Examples

    • There’s a long list here

    Limitations

    Publication costs still exist and sustainability is currently an unsolved issue

    Green

Secondary posting of articles from access-restricted journals or books in institutional or subject-specific repositories.

    Examples

    • Preprint servers
    • Author websites

    Limitations

    Publications may only be freely available after an embargo period

    Photo by Gabriela on Unsplash

  • AI & its role in peer review


The rise of AI, or LLMs to be specific, has had a significant impact on scientific publishing: hallucinated references, nonsensical images (that rat penis), and peer reviews devoid of human oversight. More recently, a small number of preprints hosted on arXiv were discovered to be concealing AI prompts in white text, designed to cause chatbots to produce positive reviews. Whilst this undermines the review process[1], it also highlights that peer reviewers are using AI tools, whether the research community wants them to or not.

    How can AI be used responsibly in peer review?

    If peer reviewers are going to use AI, then rather than attempting the futile task of preventing this, we should instead promote responsible and appropriate use. So what is appropriate use of AI in peer review?

    Finding reviewers

AI can aid in finding reviewers and matching them to specific manuscripts. This could improve the efficiency of this part of the review process, ease editors’ workloads, and reduce the number of researchers declining review invitations. Some preprint review services are already using AI in this way, and this is likely to expand.

    Improving grammar in peer review reports

One of the best uses of AI is by non-native English speakers, helping to reduce the discrimination these researchers face. It should only be used after the review has been written, by a human, and for grammar alone. The review still needs to be checked for clarity afterwards.

    Automated checks

AI is well suited to automating tasks that reduce the burden on reviewers and editors, for example detecting plagiarism. It could also surface any previous (public) reviews of a manuscript, informing the current reviewers or even the editor’s decision.

    Summarising human-authored reviews

Transparent peer review reports, posted alongside articles, are an important step towards improving trust in the scientific process and in individual articles. However, the majority of these reports go unread and are not being used as well as they could be. AI could provide summaries of human-authored peer review reports, giving readers that important context.

    Flagging preprints that may need peer review?

A potential new use for such tools would be to flag preprints that may need human scrutiny. This could relieve stress on the system by avoiding the need to review every output, a model that is already unsustainable and failing.

    How else could we responsibly use AI in peer review?

    Important considerations when using AI

Whenever AI is used, and in whatever capacity, transparency is vital. Its use should be declared, including which LLM was used and exactly how. This provides some degree of confidence that the reviewer or service has used the AI responsibly and checked the results. Indeed, this human responsibility and oversight is important for all uses of AI; unchecked and unverified content is one of the most damaging aspects of AI use. Another important consideration when dealing with manuscripts that are not public is confidentiality: these manuscripts should not be shared with any third party, which includes uploading them to any LLM.

    We’re currently creating best practices for AI use by preprint servers, preprint review services and authors of both. Want to collaborate with us on this? Please get in touch!


    1. AI is revealing just how flawed traditional peer review is, supporting the many studies that have investigated issues with peer review.

    Photo by Emiliano Vittoriosi on Unsplash

  • How to get more involved with preprints and open science


Preprints are revolutionising the way we share and communicate scientific findings. They have numerous benefits and advantages for all stakeholders, but particularly for ECRs. If you are an ECR, you need to be posting preprints. If you train or are responsible for ECRs, then you need to make sure you facilitate the preprinting of their work.

    But how can you get more involved in this fast moving world?

    Use preprints

OK, I’ve just mentioned this one, but as well as posting preprints yourself you should be reading and citing other preprints in your field. This will keep you 1–2 years ahead of those who only read published papers. When you do publish, choose open access journals and those that are friendlier to changing the broken system.

    Host/take part in preprint journal clubs

Journal clubs can be useful and are often a staple of “training” for ECRs within a bioscience department. Stop picking CNS papers because they’re flashy and start using preprints to be at the true cutting edge of your field. To make them even more useful, spend a little extra effort writing up the discussion as a comment for the authors. This way your journal club helps to advance preprint use, and advances science by helping authors refine and improve their work.

    Share data and methods openly

This ties into using preprints: if you have a dataset or useful methods, upload them to repositories when you post the preprint. Sharing code openly can even lead to new collaborations and significantly improve your own work. We found this with our COVID papers, where sharing openly led to a collaboration on the first paper, and posting a preprint led to our second paper being co-published, again making the conclusions much stronger.

    Educate yourself (and others)

It’s surprising how many academics (including “esteemed” professors) just don’t understand the history of our publishing system or where peer review comes from. That history is vital to understanding the problems within the system and why it needs to change. There’s a lot of survivorship bias in academia, and looking back can help us move forwards.

    Follow open science leaders

    Some of the brilliant people who are leading the change towards open science and preprint use are very active on social media. On BlueSky, you can follow the Preprints and Metascience feeds.

    Get involved with communities

    This is perhaps one of the best ways of getting more involved in preprints and open science.

    PREreview —  community and training focussed on increasing equity in preprint peer review. Recommended platform for uploading community or journal club reviews of preprints.

    preLights — preprint highlighting that allows you to write about interesting preprints and collaborate with others in the community. An excellent initial (active) step into the world of preprints.

    Preprints in Motion — podcast focussed specifically on highlighting preprints and the ECRs behind them in addition to discussing the wider issues in academia. Contact preprintsinmotion@gmail.com

    Talk to co-workers about preprints

Now you’re using preprints, writing about them, or getting involved in the programs above. Get out there and tell everyone why they should be preprinting and making science a better place for all! Spread the gospel!

    If you’ve posted an interesting preprint or read one recently you can also highlight it to Preprints in Motion for a full podcast episode focussed on the preprint and ECR.

    Attend open science events

    There are many open science events you could attend such as conferences and workshops from FORCE11 and various universities (e.g. Sheffield University’s OpenFest).

    Write about preprints

This may be through preLights, but can also take the form of more casual or opinion pieces in science magazines and journals. I’d strongly recommend preLights: not only is it a great community, it also helps establish your own name in the preprint sphere.

    Start your own initiative

We’re always happy to discuss ideas and provide support for an exciting new initiative led by you!