Tag: publishing

  • The Open Access Rainbow

    Here is a quick and simple guide to the “rainbow” of open access terms.

    Bronze

    Freely accessible journal articles on publishers’ servers, but without clear details on reuse

    Examples

    • Archives of subscription journals

    Limitations

    Without clear licences, article reuse is highly limited

    Gold

Articles are published directly in open access journals. Publication costs are covered by authors, usually in the form of article processing charges (APCs).

    Examples

    • PLOS ONE
    • Frontiers Journals

    Limitations

    Inequitable APC costs and the propagation of a highly damaging business model

    Hybrid

    Subscription-based journals that offer authors the option of paying a fee (APC) to make their individual articles freely available online under an open access license.

    Examples

    • There’s a long list here

    Limitations

Institutions are often charged both APCs and subscription fees, effectively paying twice for the same content

    Diamond

    A publication is free of charge both for readers and for authors.

    Examples

    • There’s a long list here

    Limitations

    Publication costs still exist and sustainability is currently an unsolved issue

    Green

Secondary publication, in institutional or subject-specific repositories, of work originally published in access-restricted journals or books.

    Examples

    • Preprint servers
    • Author websites

    Limitations

    Publications may only be freely available after an embargo period

    Photo by Gabriela on Unsplash

  • In defence of preprints

    You may have come across this remarkably ill-informed blog post. Given that this podcast advocates for preprints in the life sciences, we are compelled to issue a direct response to the blog post. Leadership does not come from remaining silent.

You can listen to the episode discussing the blog post here; below is a clearer, evidence-backed rebuttal.

    Preprints serve scientists and the pro-science movements

The click-bait title of the blog post is designed to be provocative. However, it also immediately highlights the author’s lack of understanding of preprints. Preprints are most beneficial to early career researchers, who benefit from being able to document and evidence their work when they are ready for it to be shared, rather than waiting 6–12 months for an opaque peer review process to conclude. During public health emergencies, preprints quite literally save lives: around 40% of early research was shared first as a preprint during the COVID-19 pandemic and the recent mpox outbreak. This is far from serving an “anti-science agenda” as claimed by the blog post author.

The benefits of preprints are well known by now, so I won’t retread that ground here. Instead, I wish to focus on the blog post and its multitude of problems. Before I do, however, it is worth highlighting that the author left academia to create a new journal (Stacks Journal) and has a huge bias in favour of traditional peer review. Surprisingly for someone involved in the science communication space, the author comes across as highly uninformed about open science and the history of publishing and peer review. This may be due to the manner in which the blog post was written rather than a genuine lack of understanding, but it is concerning regardless. The blog post fails to provide any argument that preprints are “serv[ing] the anti-science agenda” and instead offers a few misguided and incorrect opinions. Accusations and inferences are made without any evidence or support.

The blog post presents a highly outdated view that looks backwards rather than to the current moment, never mind the future.

    Anyone can post anything as a preprint

The author claims that “nearly anyone can post an official-looking preprint” and that “preprints are like blog posts”. In reality, most reputable preprint servers have basic screening processes in place. These reduce the sharing of pseudoscience, poor-quality studies and other non-science outputs, and also identify controversial work that may require closer examination through a process such as peer review. Preprints are most often (although not always, depending on the server) full research articles with materials, methods and results. One point worth highlighting is that authors are often still overly cautious and regularly fail to include datasets or supplementary files, something I hope will change sooner rather than later. Nevertheless, these screening processes prevent “anyone posting anything” onto preprint servers.

This may be a good point to define “preprint”, as the blog post author does not seem to understand what preprints are. A preprint is a manuscript shared online by the authors prior to journal-organised peer review. Preprints are citable, have a DOI, are permanent records, may be peer reviewed and are shared when the authors believe the work is ready for public consumption. They are very, very rarely preliminary work; the majority of preprints are complete scientific articles.

The blog post itself fails even the most basic standards for ethics and integrity in scientific writing. If this is the standard for a blog post, then the author makes his own point: preprints really are nothing like blog posts (they’re vastly superior).

    Preprints are comparable to the published literature

A central tenet of the author’s argument is that preprints are poor quality. This is a persistent myth and a common argument from people who do not understand preprints, and there is a growing body of evidence addressing this point. Indeed, I have authored such work myself – work that other Scholarly Kitchen (SK) associates found acceptable to publicly attack, even going so far as to email the editor and journalists telling them it was fatally flawed rubbish. (It actually stood up to intense scrutiny, but this explains some personal bias I have against SK and reveals how some of those involved behave in relation to evidence they don’t like!)

This growing body of evidence, from multiple independent groups using a variety of different methods and approaches, consistently draws the same conclusion: preprints are comparable to the peer-reviewed literature. When preprints do undergo changes, these are most often limited. In simple terms, peer review does help to make an article better, but only in a very limited manner – the improvement amounts to around 5%. For a system that can cost $4 billion globally and delay scientific progress by an average of 6 months, 5% may not be worthwhile.

    preprints are comparable to the peer-reviewed literature

Given this, it would be egregiously anti-scientific to suggest that all, or even most, preprints are low quality or nothing more than glorified “blog posts…[on] social media platforms”. One valid question does remain: what about the 30% of preprints that are never published in a journal? I have an active, ongoing project to address this question, but any preprint expert will likely answer it in the same manner for now. There are many reasons why a preprint may not be published. It may report negative data or small, one-off datasets that are difficult to publish and often not worthwhile for authors to pursue, despite their scientific value. A preprint may be the final intended destination of the work (we interviewed one such author previously), or the authors may not be able to afford APCs at journals. None of these reasons mean that the preprints are low quality or unreliable anti-science. I will agree that there are low-quality, unreliable preprints, such as this one (which was withdrawn within 48 hours – quicker than any journal would act). However, this equally applies to the peer-reviewed literature, which is in many ways considerably more dangerous.

The author also suggests that reporting on preprints is “alarming”. However, data show that journalists have adapted well to preprints and generally report on them in an ethical, appropriate manner – and with greater standards than the Scholarly Kitchen possesses.

    Peer review does not protect the scientific literature nor is it designed to

“For decades, scientists have relied on peer review to ensure scientific knowledge is built on a foundation of rigor and credibility. However, preprints are adding to the crumbling of that foundation”. The very first line of the blog post asserts that peer review, and peer review alone, provides the foundation of rigour and credibility in science. I’m not sure what the author makes of all the scientific breakthroughs that occurred before the 1970s, when peer review became standard, but one can infer from the blog post that we should perhaps not trust papers such as Watson and Crick’s description of the DNA double helix, or much of Einstein’s work (not peer-reviewed, much to the surprise of many who haven’t studied the history of peer review). Peer review has only been commonplace since the 1970s. More fundamentally, however, this highlights a common misunderstanding of peer review: that it is meant to protect the literature by acting as a quality control (QC) step.

    One common problem that is fuelling the anti-science agenda is the veneer of trust that peer-review (falsely) offers.

Peer review is not designed to detect fraud, a fundamental element of protecting the literature as part of QC. Indeed, there have been some very high-profile examples of fraud in some of the biggest and most “impactful” journals, including the Surgisphere scandal. This work was published not because peer review failed, but because peer review was never designed to detect fraud in the first place. This is further illustrated by the work of expert sleuths (forensic scientometrics), who regularly detect manipulated data, spliced gels and other abnormalities indicating that a particular result is less than trustworthy.

Peer review is also consistently poor at identifying gross defects in papers, as evidenced by numerous studies. This is probably not surprising given the almost complete lack of training in assessing other people’s work or in identifying defects. Add the fact that reviewers are overburdened, with less time to dedicate to review activities, and the likelihood of even a good, detailed reviewer missing gross defects increases. Peer reviewers are not forensically examining manuscripts, paid or not. Some evidence suggests that paying peer reviewers slightly improves turnaround times, but there is no evidence that paid reviewers spend longer assessing a manuscript or put more effort into the process. Indeed, the evidence that does exist shows that paid-for reviews are not significantly different from unpaid reviews.

Together, these points effectively negate the protective quality-control role that many people falsely believe peer review is designed to perform. This does not render peer review pointless or something to avoid entirely. Peer review does help authors improve their work (as discussed), but collectively we need to better explain its limitations and rely less on it as the lone bastion of rigour in science.

One common problem that is fuelling the anti-science agenda is the veneer of trust that peer review (falsely) offers. The author makes no real effort to illuminate this point. It is because of this misuse and misunderstanding of peer review that we need a greatly improved system of trust for research.

    A constellation of trust signals

The author’s suggestion that trust should come from a deeply flawed peer review process is short-sighted, failing to account for an already changing landscape. What is needed is a range of trust signals that, when taken together, can provide a sense of trust in an individual research output. Peer review may very well be one of those signals, but it should not be the only one. Ideally, these trust signals would be both static (reflecting fundamental qualities of the work) and dynamic (updating as new evaluations emerge). They would be applicable to a broad range of outputs and extendable to researcher assessment. A system of diverse and dynamic trust signals accompanying preprints would offer a multifaceted evaluation framework capable of evolving over time. In the 21st century, science publishing can no longer communicate findings solely to other scientists. Accordingly, the metrics associated with publishing must reflect reality whilst encouraging best practices and rigorous science.

    Preprint feedback occurs privately

One piece of evidence the blog post does cite is a study suggesting that only 7% of bioRxiv and medRxiv preprints receive comments. As the author cherry-picks their citations and evidence, what they fail to communicate is that most feedback occurs privately. More recent data suggest that 60% of surveyed authors received feedback on their preprint, rising to 70% among authors who actively sought feedback. I would much prefer public feedback, but that is not the issue in the blog post. The blog also fails to account for any form of non-traditional feedback, such as that occurring through social media channels or preprint review services such as PREreview or preLights. It is a confusing decision to focus on preprint commenting, a feature well known to be underutilised. On bioRxiv/medRxiv, a panel brings together reviews and context for each preprint (shown below). This panel is a great example of how trust signals could be surfaced and presented to readers, and it also brings together wider, non-traditional mechanisms of feedback.

    The bioRxiv/medRxiv Reviews and Context panel present alongside all posted preprints

    Scientific publishing is not sustainable in its current form. The system requires significant changes that must occur alongside changes to research(er) assessment, hiring and promotion practices and broader science communication. Academia itself also needs to change, moving away from narrow training of new scientists whilst offering no academic positions. The entire system is breaking and what is very much not needed are more poorly informed efforts that distract from the immense amount of work needed to create a better future for academia and for science.

    Disclaimer

I have chosen to match the tone of the original blog post to provide the most appropriate response I can, rather than a dry, evidence-only article. I am not attacking the original author, but when somebody creates something they claim is built on ethics and integrity, it is reasonable to highlight the hypocrisy on display. Otherwise, I have simply used the author’s own words to frame my rebuttal. It is difficult to combat a blog post with such low scientific standards without my response positively dripping in sarcasm. Whilst I would hope the original author would reflect, I have not seen any evidence to this effect and it is most likely a lost cause. Indeed, the blog post has raised significant concerns about the author’s Stacks Journal and a potential lack of editorial and scientific standards. Hopefully, the blog post has not caused any real damage and, based on some of the social media responses, it seems clear that people see the author as biased and poorly informed. A similar article that attempts to initiate a comparable discussion around peer review can be found here – a much more informed piece for those interested.

    Photo by J A Coates, “Sintra”