Frank Houghton
doi: http://dx.doi.org/10.5195/jmla.2022.1441
Volume 110, Number 2: 233–239
Received November 2021; Accepted January 2022
ABSTRACT
The moral panic over the impact of so-called predatory publishers continues unabated. It is important, however, to resist the urge to simply join this crusade without pausing to examine the assumptions on which such concerns are based. Established journals are often treated as almost sacrosanct, their quality assumed to be secured by peer review, and they are routinely presumed to be immune to the lure of easy money in return for publication. Rather than cataloguing the deficits apparent in the practices and products of predatory publishers, this commentary invites readers to explore the weaknesses that have been exposed in traditional academic journals but are seldom discussed in the context of predatory publishing. The message for health and medical services staff, researchers, academics, and students is, as always, to critically evaluate all sources of information, whatever their provenance.
Keywords: predatory publishing; peer review; academic quality; scientific misconduct; Elsevier; academic journals.
A review of recent publications on predatory publishing and predatory journals reveals an ongoing focus on this topic [1–7]. However, it is important that assessments of the potential impact of such journals are not overplayed; there has arguably been a degree of hysteria over the supposed threat they pose [8].
First, it must be made clear that the world of commercial academic publishing is a highly profitable oligopoly [8–10]. It must immediately be asked, therefore: Whose interests are served by the moral panic over predatory publishers? Oxford Reference describes a moral panic as
a mass movement based on the false or exaggerated perception that some cultural behaviour or group of people is dangerously deviant and poses a threat to society's values and interests. Moral panics are generally fuelled by media coverage [11].
Predatory publishing is often portrayed as threatening the very basis of science and, by extension, our way of life. It must be acknowledged that there is an intense focus on predatory publishing within the academic literature; this may now almost be described as an industry in its own right. We appear to have a symbiotic relationship in which predatory publishers exist because of mainstream journals, while mainstream journals in turn use the existence of predatory journals as a focus for further publishing. This relationship is illustrated in Figure 1.
Figure 1. The symbiotic nature of the established & predatory publishing nexus
Despite the attention paid to the threat of predatory journals, examinations of citations to articles published in them indicate that their actual impact appears limited [12–14]. Bell suggests that predatory journals are not really a threat and should instead be treated as parody [8], and that solicitations from such journals should perhaps be met “with amusement (and annoyance) rather than alarm” [8]. An ongoing concern is the potential conflation of concerns about, and attacks on, predatory publishing with open access (OA) publishing [15].
It must also be acknowledged that some of the venom that has been targeted at predatory journals and some of the general language used in the field to discuss the issue may have racist overtones [16–18].
For example, as well as the routine use of the term “blacklist” to denote a negative list [18], Jeffrey Beall, the founding and leading proponent of a virtual crusade against predatory publishing, has written such posts as:
Hyderabad, India is one of the most corrupt cities on earth, I think. It is home to countless predatory open-access publishers … and new, open-access publishing companies and brands are being created there every day … The tacit rule of thumb of Hyderabad-based businesses is: Use the internet to generate revenue any way you can. There are numerous internet-based businesses in this over-crowded city [18].
It is hard to forget or forgive this blanket labeling of an entire city and its population as corrupt, and his use of the term “over-crowded” appears only to stoke racist and xenophobic imagery and ideology.
Much of the high-profile criticism leveled at the predatory publishing sector is based on a series of sting operations in which authors submitted patently low-quality articles for publication [19–26]. However, it is essential to note that it is not only so-called predatory journals that are routinely fooled. In the infamous Bohannon sting [21], in which a series of fake and fatally flawed scientific papers were sent to hundreds of pay-to-publish journals, the papers were accepted not only by widely acknowledged predatory publishers but also by journals from “reputable” publishers such as Sage, Elsevier, and Wolters Kluwer. Equally, the well-known Sokal hoax, in which a physicist published a nonsensical paper (“Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity”) in a cultural studies journal, involved what was widely considered a leading journal in the field [25, 26].
It is therefore crucial to critically evaluate the mainstream academic literature that is generally held in such high regard vis-à-vis predatory publishers. Perhaps the most infamous case requiring sober reappraisal, and a leveling of esteem, involves Elsevier, a global leader in academic publishing. Elsevier Australia was publicly exposed for receiving payments from a pharmaceutical company in return for publishing six fake journals (Australasian Journal of General Practice, Australasian Journal of Neurology, Australasian Journal of Cardiology, Australasian Journal of Clinical Pharmacy, Australasian Journal of Cardiovascular Medicine, Australasian Journal of Bone & Joint Medicine), all of which advocated the efficacy of the sponsoring company's products [27, 28]. As Goldacre notes:
Elsevier Australia went the whole hog, giving Merck an entire publication which resembled an academic journal, although in fact it only contained reprinted articles, or summaries, of other articles. In issue 2, for example, nine of the 29 articles concerned Vioxx, and a dozen of the remainder were about another Merck drug, Fosamax. All of these articles presented positive conclusions. Some were bizarre: such as a review article containing just two references [27].
This is perhaps an extreme example, and most would agree that the cornerstone of quality control in academic publishing is peer review, long established as the gold standard in academic circles. However, even a cursory examination of the literature on this topic reveals the process to be a mirage akin to the emperor's new clothes.
Drummond Rennie, deputy editor of the Journal of the American Medical Association, has stated that “if peer review was a drug it would never be allowed onto the market” [29]. In a damning exposé of peer review, Rennie is quoted:
There seems to be no study too fragmented, no hypothesis too trivial, no literature citation too biased or too egotistical, no design too warped, no methodology too bungled, no presentation of results too inaccurate, no conclusion too trifling or too unjustified, and no grammar and syntax too offensive for a paper to end up in print [29].
There is significant evidence that peer review routinely fails to detect significant errors, even in reputable journals [29–37]. One of the most robust examinations of the inadequacies of academic peer review is an alarming study published in the Journal of the Royal Society of Medicine [38]. This study used an “insider” research approach in which three articles were deliberately weakened after having been accepted for publication. The articles in question were amended to include nine major errors and five minor errors. These critically weakened articles were then sent out for peer review to over 600 BMJ peer reviewers, with between 418 and 522 reviewers examining each of the three papers. Disconcertingly, of the nine major errors introduced, the average number spotted by the peer reviewers ranged from just 2.58 (SD=1.9) to 3.05 (SD=1.8). Of the five minor errors introduced, the average number noted ranged from just 0.85 (SD=0.8) to 1.09 (SD=0.8). It must be acknowledged that some reviewers recommended rejection before identifying all of the errors, and that some reviewers may not have been familiar with the methodology of randomized controlled trials (RCTs) but might have performed better in assessing other methodologies. Some caution in interpretation may also be required, as the study was restricted to UK reviewers and hence may not be generalizable. Perhaps more disturbingly, however, the study also sought to determine whether training in peer review would improve quality and found no significant improvement [38]. Despite such damning evidence, as Smith correctly observes, “when something is peer reviewed it is in some sense blessed” [39]. There is also strong evidence that when reviewers are asked to assess a paper, levels of agreement on whether it should be published are little better than would be anticipated by chance alone [29, 40, 41].
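Purely as a back-of-the-envelope illustration (not part of the study itself), the reported means can be converted into approximate detection rates. The short Python sketch below uses only the figures quoted above from Schroter et al. [38]:

# Illustrative arithmetic only, using the mean values reported by Schroter et al. [38].
major_introduced, minor_introduced = 9, 5
major_mean_low, major_mean_high = 2.58, 3.05   # mean major errors spotted per reviewer
minor_mean_low, minor_mean_high = 0.85, 1.09   # mean minor errors spotted per reviewer
print(f"Major errors detected on average: {major_mean_low/major_introduced:.0%} to {major_mean_high/major_introduced:.0%}")
print(f"Minor errors detected on average: {minor_mean_low/minor_introduced:.0%} to {minor_mean_high/minor_introduced:.0%}")
# -> Major errors detected on average: 29% to 34%
# -> Minor errors detected on average: 17% to 22%

In other words, reviewers on average identified roughly a quarter to a third of the major errors, and about a fifth of the minor errors, deliberately introduced.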
The so-called predatory publishing field is routinely attacked on the basis of poor quality, the implication being that quality is far higher among established journals operating traditional publishing models. Having raised significant questions about the quality and impact of peer review in traditional journals, it is opportune to explore a number of other weaknesses in such journals that are also seldom, if ever, linked to the debate around predatory publishers. A host of other critical deficits could be explored within mainstream academic journal articles, ranging from publication bias [42–47] to academic plagiarism [48, 49] and routinely poor statistical methods [50–53]. For reasons of brevity, however, this commentary will focus on just two such issues as exemplars of the weaknesses that appear inherent in the world of academic publishing but are seldom discussed vis-à-vis predatory publishing: errata, and scientific misconduct and retractions.
Starting with errata, it should be noted that even highly prestigious “traditional” journals routinely include errors in their publications [54, 55]. An examination by Hauptman et al. [56] revealed that almost a quarter (24%) of the articles examined in such journals included at least one significant error that “materially altered data interpretation” [57]. A subsequent examination of five top-ranking medical journals (New England Journal of Medicine, Annals of Internal Medicine, British Medical Journal, Journal of the American Medical Association, and The Lancet) over a twelve-month period identified 314 articles with one or more published errata, an average of 1.3 per issue [57]. Even when errata are published, they can take a significant length of time to appear [57]. Perhaps more alarming still is the work of Molckovsky et al., who found that:
33% of oncologists do not read errata, and 45% have read only the abstract when referencing an article. Although 59% of oncologists have noticed errors in cancer publications, only 13% reported the error [58].
Having established the inadequate treatment of errata by publishers of mainstream academic publications, the next section explores another aspect of publishing that is also seldom mentioned in the predatory publishing debate: scientific misconduct and subsequent retractions. This helps to demonstrate that the moral panic over predatory publishers is overstated and ignores the weaknesses of traditional academic publishing.
It must be acknowledged that although some editors may, rather naively, underplay the impact of scientific misconduct in the academic literature, it is a growing issue [59]. It is hard to know whether this reflects rising levels of fraud or improved awareness and policing of the problem. Either way, there has clearly been a significant focus on this issue in recent years [60, 61]. A British Medical Journal survey revealed that 13% of respondents reported being aware of fraudulent data manipulation [62], with a similar study putting this figure at 14% [63]. Although fraud-based retractions are not a new phenomenon [64], there is widespread agreement that retractions based on fraud have increased dramatically in recent years [65, 66]. Notably, a review of significant examples of such retractions is published annually in The Scientist [67–70].
Some of the long-term issues noted above in relation to errata, which cover what may be termed honest mistakes, are also cause for concern in relation to retractions of fraudulent articles [71–73]. For example, Elia et al. discuss how few articles are fully and properly retracted following exposure [74]. Interestingly, one of the reasons these authors report for this deficiency is publishers failing to print retractions as requested. They suggest that “retractions appear to be unpopular with both editors and institutions since they may shed doubt on the integrity of science, and on the expertise of the editorial team” [74]. Once fraudulent or erroneous findings have been published, evidence suggests that, despite exposure as false, such work may continue to be cited positively, sometimes for decades [71, 75–79]. Retraction Watch maintains what it terms a leaderboard of the most highly cited retracted papers [80]. At present, the leader on this list is a 2013 article in the New England Journal of Medicine that was retracted in 2018 [81]. The article has a total of 2,623 Web of Science citations, 712 of which have occurred after the retraction. Interestingly, the infamous MMR Lancet article, which takes second place, was published in 1998 and has been cited in Web of Science more since its retraction in 2010 than before it (820 versus 643 citations) [82]. It must be acknowledged that some citations may be made to refute an article or to discuss aspects other than its results. However, retracted articles continue to be routinely cited, with the vast majority of citations simply building on the retracted work, and this remains a highly problematic issue [75–78]. It should also be noted that although such errors and misconduct are often seen to irretrievably damn new online publishers and journals, no such sector-wide judgment is applied when these issues appear in more mainstream traditional journals.
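As a purely illustrative back-of-the-envelope calculation (not drawn from any additional source), the Web of Science counts quoted above can be expressed as the share of citations accrued after retraction:

# Illustrative arithmetic only, using the Web of Science counts quoted above.
nejm_total, nejm_after = 2623, 712      # 2013 NEJM article, retracted 2018 [81]
mmr_before, mmr_after = 643, 820        # 1998 Lancet MMR article, retracted 2010 [82]
print(f"NEJM article: {nejm_after/nejm_total:.0%} of citations came after retraction")
print(f"MMR article: {mmr_after/(mmr_before + mmr_after):.0%} of citations came after retraction")
# -> NEJM article: 27% of citations came after retraction
# -> MMR article: 56% of citations came after retraction

That is, over a quarter of the citations to the leading retracted article, and more than half of those to the MMR article, postdate the retractions.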
This paper is not designed to be an argument in support of a race to the bottom in terms of quality. It is, however, designed to reduce the “blessedness” referred to by Smith in relation to established academic journals [39] and to acknowledge that a more nuanced response may be appropriate [8]. The moral panic evident in the academic literature, heralding the doom of academic publishing because of the rise of predatory journals, is misplaced. Traditional journals routinely demonstrate the same poor practices that are held up as damning in predatory publishers. The sensationalism and crusading zeal directed against suspected predatory publishers have had unintended casualties. Some indication of the collateral damage to OA journals can be seen in Emery and Levine-Clark's article provocatively titled “Our lives as predatory publishers” [15]. Loose attacks on predatory publishers may indirectly threaten OA journals generally, and it is important to remember the vested interests of the established academic publishing oligopoly in fostering such concerns [9]. Considerable literature has emerged to help authors identify fraudulent predatory publishers [83, 84], and this will be useful to would-be authors. However, although the provenance of a journal may be important, teaching critical evaluation skills is more crucial. An excellent article can easily be published in a predatory journal; similarly, a weak and dangerous article may be published in a reputable journal [82]. In the tradition of Feyerabend, students, academics, and health service staff should question everything, critically interrogate all information they are presented with, and be prepared to speak out and challenge it [85].
1. Harvey E, Ball CG. Predatory journal publishing: is this an alternate universe? Can J Surg. 2021;64(3):E358. DOI: https://doi.org/10.1503/cjs.009821.
2. Yeo-Teh NSL, Tang BL. Wilfully submitting to and publishing in predatory journals - a covert form of research misconduct? Biochem Med (Zagreb). 2021 Oct 15;31(3):030201. DOI: https://doi.org/10.11613/BM.2021.030201.
3. Begum S, Abdulla R. Predatory science: unravelling a secret journey of fake journals and conferences. J Oral Maxillofac Pathol. 2021 Jan-Apr;25(1):193–94. DOI: https://doi.org/10.4103/jomfp.jomfp_493_20.
4. Nieminen P, Uribe SE. The quality of statistical reporting and data presentation in predatory dental journals was lower than in non-predatory journals. Entropy (Basel). 2021 Apr 16;23(4):468. DOI: https://doi.org/10.3390/e23040468.
5. Macháček V, Srholec M. Predatory publishing in Scopus: evidence on cross-country differences. Scientometrics. 2021 Feb 7:1–25. DOI: https://doi.org/10.1007/s11192-020-03852-4.
6. Azam Rathore F, Farooq F. Letter to the editor: Citations from predatory journals must be discouraged and how to identify predatory journals and publishers. Ir J Med Sci. 2021;190:1645–46. DOI: https://doi.org/10.1007/s11845-020-02463-5.
7. Richtig G, Berger M, Lange-Asschenfeldt B, Aberer W, Richtig E. Problems and challenges of predatory journals. J Eur Acad Dermatol Venereol. 2018;32(9):1441–49. DOI: https://doi.org/10.1111/jdv.15039.
8. Bell K. Predatory open access journals as parody: exposing the limitations of ‘legitimate’ academic publishing. TripleC. 2017;15(2):651–62. DOI: https://doi.org/10.31269/triplec.v15i2.870.
9. Larivière V, Haustein S, Mongeon P. The oligopoly of academic publishers in the digital era. PLoS One. 2015. DOI: https://doi.org/10.1371/journal.pone.0127502.
10. Hall R. You say you want a publishing revolution. Progressive Librarian. 43:37–48. Available from: http://www.progressivelibrariansguild.org/PL/PL43/035.pdf.
11. Oxford Reference. Moral panic [cited 22 Mar 2022]. https://www.oxfordreference.com/view/10.1093/oi/authority.20110803100208829.
12. Oermann MH, Nicoll LH, Carter-Templeton H, Woodward A, Kidayi PL, Browning Neal L, Eddie AH, Ashton KS, Chinn PL, Amarasekara S. Citations of articles in predatory nursing journals. Nurs Outlook. 2019;67(6):664–70. DOI: https://doi.org/10.1016/j.outlook.2019.05.001.
13. Schira HR, Hurst C. Hype or real threat: the extent of predatory journals in student bibliographies. Partnership: The Canadian Journal of Library and Information Practice and Research. 2019;14(1). DOI: https://doi.org/10.21083/partnership.v14i1.4764.
14. Frandsen TF. Are predatory journals undermining the credibility of science? A bibliometric analysis of citers. Scientometrics. 2017;113:1513–28. DOI: https://doi.org/10.1007/s11192-017-2520-x.
15. Emery J, Levine-Clark M. Our lives as predatory publishers. Collaborative Librarianship. 2017;9(4):1. Available from: https://digitalcommons.du.edu/collaborativelibrarianship/vol9/iss4/1.
16. Houghton F. Ethics in academic publishing: a timely reminder. J Med Libr Assoc. 2017;105(3):282–84. DOI: https://doi.org/10.5195/jmla.2017.122.
17. Houghton F, Houghton S. Predatory publishing: how to safely navigate the waters of open access. Can J Nurs Res. 2018;50(4):167–8. DOI: https://doi.org/10.1177/0844562118777328.
18. Houghton F, Houghton S. “Blacklists” and “whitelists”: a salutary warning concerning the prevalence of racist language in discussions of predatory publishing. J Med Libr Assoc. 2018;106(4):527–30. DOI: https://doi.org/10.5195/jmla.2018.490.
19. Aldhous P. CRAP paper accepted by journal [Internet]. New Sci. 11 Jun 2009. <http://www.newscientist.com/article/dn17288-crap-paper-accepted-by-journal.html#.Up89d-JDdrw>.
20. Eldredge N. Mathgen paper accepted! [Internet]. 14 Sept 2012. <http://thatsmathematics.com/blog/archives/102>.
21. Bohannon J. Who's afraid of peer review? Science. 2013;342(6154):60–65. DOI: https://doi.org/10.1126/science.342.6154.60.
22. Stromberg J. A reporter published a fake study to expose how terrible some scientific journals are [Internet]. Vox. 24 Apr 2014. <https://www.vox.com/2014/4/24/5647106/a-reporter-published-a-fake-study-to-expose-how-terrible-some>.
23. Flaherty C. Journal accepts profanity-laden joke paper [Internet]. Inside Higher Ed. 21 Nov 2014. <https://www.insidehighered.com/quicktakes/2014/11/21/journal-accepts-profanity-laden-joke-paper>.
24. Neuroskeptic. Predatory journals hit by 'Star Wars' sting [Internet]. Discover Magazine. 22 Jul 2017. <https://www.discovermagazine.com/mind/predatory-journals-hit-by-star-wars-sting>.
25. Sokal AD. Transgressing the boundaries: toward a transformative hermeneutics of quantum gravity. Social Text. 1996;46–47:217–52. DOI: https://doi.org/10.2307/466856.
26. The Sokal hoax: a forum [Internet]. Lingua Franca. July/August 1996. <http://linguafranca.mirror.theinfo.org/9607/tsh.html>.
27. Goldacre B. The danger of drugs … and data [Internet]. The Guardian. 9 May 2009. <http://www.theguardian.com/commentisfree/2009/may/09/bad-science-medical-journals-companies>.
28. Elsevier. Statement from Michael Hansen, CEO of Elsevier's Health Sciences Division, regarding Australia based sponsored journal practices between 2000 and 2005 [Internet]. 7 May 2009. <https://www.elsevier.com/about/press-releases/clinical-solutions/statement-from-michael-hansen,-ceo-of-elseviers-health-sciences-division,-regarding-australia-based-sponsored-journal-practices-between-2000-and-2005>.
29. Smith R. Classical peer review: an empty gun. Breast Cancer Res. 2010;12(Suppl 4):S13. DOI: https://doi.org/10.1186/bcr2742.
30. Godlee F, Gale CR, Martyn CN. Effect on the quality of peer review of blinding reviewers and asking them to sign their reports: a randomized controlled trial. JAMA. 1998 Jul 15;280(3):237–40. DOI: https://doi.org/10.1001/jama.280.3.237. PMID: 9676667.
31. Stossel TP. Reviewer status and review quality. Experience of the Journal of Clinical Investigation. N Engl J Med. 1985;312:658–59.
32. Kravitz RL, Franks P, Feldman MD, Gerrity M, Byrne C, Tierney WM. Editorial peer reviewers' recommendations at a general medical journal: are they reliable and do editors care? PLoS One. 2010;5(4):e10072. DOI: https://doi.org/10.1371/journal.pone.0010072.
33. Mahoney MJ. Publication prejudices: an experimental study of confirmatory bias in the peer review system. Cogn Ther Res. 1977;1(2):161–75. DOI: https://doi.org/10.1007/BF01173636.
34. Evans AT, McNutt RA, Fletcher SW, Fletcher RH. The characteristics of peer reviewers who produce good-quality reviews. J Gen Intern Med. 1993;8:422–8.
35. Herron DM. Is expert peer review obsolete? A model suggests that post-publication reader review may exceed the accuracy of traditional peer review. Surg Endosc. 2012;26(8):2275–80. DOI: https://doi.org/10.1007/s00464-012-2171-1.
36. Peters DP, Ceci SJ. Peer-review practices of psychological journals: the fate of published articles submitted again. Behav Brain Sci. 1982;5(02):187–95. DOI: https://doi.org/10.1017/S0140525X00011183.
37. Ross-Hellauer T. What is open peer review? A systematic review. F1000Res. 2017;6:588. DOI: https://doi.org/10.12688/f1000research.11369.2.
38. Schroter S, Black N, Evans S, Godlee F, Osorio L, Smith R. What errors do peer reviewers detect, and does training improve their ability to detect them? J R Soc Med. 2008;101:507–14. DOI: https://doi.org/10.1258/jrsm.2008.080062.
39. Smith R. Peer review: a flawed process at the heart of science and journals. J R Soc Med. 2006;99:178–82. DOI: https://doi.org/10.1258/jrsm.99.4.178.
40. Lock S. A difficult balance: editorial peer review in medicine. London: Nuffield Provincial Hospitals Trust; 1985.
41. Rothwell PM, Martyn C. Reproducibility of peer review in clinical neuroscience – is agreement between reviewers any greater than would be expected by chance alone? Brain. 2000;123:1964–69. DOI: https://doi.org/10.1093/brain/123.9.1964.
42. Easterbrook PJ, Gopalan R, Berlin JA, Matthews DR. Publication bias in clinical research. Lancet. 1991;337(8746):867–72. DOI: https://doi.org/10.1016/0140-6736(91)90201-y.
43. Schroter S, Black N, Evans S, Carpenter J, Godlee F, Smith R. Effects of training on quality of peer review: randomised controlled trial. BMJ. 2004;328(7441):673. DOI: https://doi.org/10.1136/bmj.38023.700775.AE.
44. Sutton AJ, Duval SJ, Tweedie RL, Abrams KR, Jones DR. Empirical assessment of effect of publication bias on meta-analyses. BMJ. 2000;320(7249):1574–77. DOI: https://doi.org/10.1136/bmj.320.7249.1574.
45. Ayorinde AA, Williams I, Mannion R, Song F, Skrybant M, Lilford RJ, Chen YF. Assessment of publication bias and outcome reporting bias in systematic reviews of health services and delivery research: a meta-epidemiological study. PLoS One. 2020;15(1):1–17. DOI: https://doi.org/10.1371/journal.pone.0227580.
46. Ayorinde AA, Williams I, Mannion R, Song F, Skrybant M, Lilford RJ, Chen YF. Publication and related biases in health services research: a systematic review of empirical evidence. BMC Med Res Methodol. 2020;20(1):1–12. DOI: https://doi.org/10.1186/s12874-020-01010-1.
47. Marks-Anglin A, Chen Y. A historical review of publication bias. Research Synthesis Methods. 2020;11(6):725–42. DOI: https://doi.org/10.1002/jrsm.1452.
48. Ghose T. Top science scandals of 2011 [Internet]. The Scientist; 2011. <http://www.the-scientist.com/?articles.view/articleNo/31519/title/Top-Science-Scandals-of-2011/>.
49. Zielinska E. Plagiarism is almost always a symptom of other educational problems [Internet]. 2012. <http://plagiarism-main.blogspot.com/2012/?m=0>.
50. Ioannidis JPA. Why most published research findings are false. PLoS Med. 2005;2(8):e124. DOI: https://doi.org/10.1371/journal.pmed.0020124.
51. Harris A, Reeder R, Hyun J. Survey of editors and reviewers of high-impact psychology journals: statistical and research design problems in submitted manuscripts. J Psychol. 2011;145(3):195–209. DOI: https://doi.org/10.1080/00223980.2011.555431.
52. Reinhart A. Statistics done wrong: the woefully complete guide. San Francisco, California: No Starch Press; 2015.
53. Al-Hoorie AH, Vitta JP. The seven sins of L2 research: a review of 30 journals' statistical quality and their CiteScore, SJR, SNIP, JCR Impact Factors. Language Teaching Research. 2019;23(6):727–44. DOI: https://doi.org/10.1177/1362168818767191.
54. Farrah K, Rabb D. Errata for trial publications are not uncommon, are frequently not trivial, and can be challenging to access: a retrospective review. J Med Libr Assoc. 2019;107(2):187–93. DOI: https://doi.org/10.5195/jmla.2019.629.
55. Erfanmanesh M, Morovati M. Published errors and errata in library and information science journals. Collection & Curation. 2019;38(3):61–67. DOI: https://doi.org/10.1108/CC-12-2018-0024.
56. Hauptman P, Armbrecht ES, Chibnall JT, Guild C, Timm JP, Rich MW. Errata in medical publications. Am J Med. 2014;127(8):779–85. DOI: https://doi.org/10.1016/j.amjmed.2014.03.012.
57. Bhatt VR, Aryal MR, Panta S, Mosalpuria K, Armitage JO. A retrospective analysis of reported errata in five leading medical journals in 2012. J Community Hosp Intern Med Perspect. 2014;4(5):25738. DOI: https://doi.org/10.3402/jchimp.v4.25738.
58. Molckovsky A, Vickers MM, Tang PA. Characterization of published errors in high-impact oncology journals. Curr Oncol. 2011;18(1):26–32. DOI: https://doi.org/10.3747/co.v18i1.707.
59. DeMaria AN. Scientific misconduct, retractions, and errata. J Am Coll Cardiol. 2012;59(16):1488–89. DOI: https://doi.org/10.1016/j.jacc.2012.03.005.
60. Hesselmann F, Graf V, Schmidt M, Reinhart M. The visibility of scientific misconduct: a review of the literature on retracted journal articles. Current Sociology Review. 2017;65(6):814–45. DOI: https://doi.org/10.1177/0011392116663807.
61. Zhang Y. Chinese journal finds 31% of submissions plagiarized. Nature. 2010;467:153. DOI: https://doi.org/10.1038/467153d.
62. Tavare A. Scientific misconduct is worryingly prevalent in the UK, shows BMJ survey. BMJ. 2012;344:e377. DOI: https://doi.org/10.1136/bmj.e377.
63. Fanelli D. How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS One. 2009;4(5):e5738. DOI: https://doi.org/10.1371/journal.pone.0005738.
64. Budd JM, Sievert M, Schultz TR. Phenomena of retraction: reasons for retraction and citations to the publications. JAMA. 1998;280(3):296–97. DOI: https://doi.org/10.1001/jama.280.3.296.
65. Fang FC, Steen RG, Casadevall A. Misconduct accounts for the majority of retracted scientific publications. Proc Natl Acad Sci U S A. 2012;109(42):17028–33. DOI: https://doi.org/10.1073/pnas.1212247109.
66. He T. Retraction of global scientific publications from 2001 to 2010. Scientometrics. 2013;96(2):555–61. DOI: https://doi.org/10.1007/s11192-012-0906-3.
67. Evans K, Houghton F. Retractions: an annual update on ethical misconduct in research & publishing. MLA News. March/April 2017.
68. Marcus A, Oransky I. The top 10 retractions of 2014 [Internet]. The Scientist. 23 Dec 2014. <http://www.the-scientist.com/?articles.view/articleNo/41777/title/The-Top-10-Retractions-of-2014/>.
69. Retraction Watch. Top 10 retractions of 2016 [Internet]. The Scientist. 21 Dec 2016. <http://www.the-scientist.com/?articles.view/articleNo/47813/title/Top-10-Retractions-of-2016/>.
70. Retraction Watch. Top 10 retractions of 2015 [Internet]. The Scientist. 23 Dec 2015. <http://www.the-scientist.com/?articles.view/articleNo/44895/title/The-Top-10-Retractions-of-2015>.
71. Davis PM. The persistence of error: a study of retracted articles on the internet and in personal libraries. J Med Libr Assoc. 2012;100(3):184–89. DOI: https://doi.org/10.3163/1536-5050.100.3.008.
72. Decullier E, Huot L, Samson G. Visibility of retractions: a cross-sectional one-year study. BMC Research Notes. 2013;6:238. DOI: https://doi.org/10.1186/1756-0500-6-238.
73. Decullier E, Huot L, Maisonneuve H. What time-lag for a retraction search on PubMed? BMC Research Notes. 2014;7:395. DOI: https://doi.org/10.1186/1756-0500-7-395.
74. Elia N, Wager E, Tramèr MR. Fate of articles that warranted retraction due to ethical concerns: a descriptive cross-sectional study. PLoS One. 2014;9(1):e85846. DOI: https://doi.org/10.1371/journal.pone.0085846.
75. Korpela K. How long does it take for the scientific literature to purge itself of fraudulent material? The Breuning case revisited. Current Medical Research and Opinion. 2010;26(4):843–47. DOI: https://doi.org/10.1185/03007991003603804.
76. Bolboacă SD, Buhai DV, Aluaş M, Bulboacă AE. Post retraction citations among manuscripts reporting a radiology-imaging diagnostic method. PLoS One. 2019;14(6):e0217918. DOI: https://doi.org/10.1371/journal.pone.0217918.
77. Peterson GM. The effectiveness of the practice of correction and republication in the biomedical literature. J Med Libr Assoc. 2010;98(2):135–39. DOI: https://doi.org/10.3163/1536-5050.98.
78. Budd JM, Sievert M, Schultz TR, Scoville C. Effects of article retraction on citation and practice in medicine. Bull Med Libr Assoc. 1999;87(4):437–43.
79. Teixeira da Silva JA. It may be easier to publish than correct or retract faulty biomedical literature. Croat Med J. 2017;58(1):75–79. DOI: https://doi.org/10.3325/cmj.2017.58.75.
80. Retraction Watch. Top 10 most highly cited retracted papers [Internet]. <https://retractionwatch.com/the-retraction-watch-leaderboard/top-10-most-highly-cited-retracted-papers/>.
81. Estruch R, Ros E, Salas-Salvadó J, Covas MI, Corella D, Arós F, Gómez-Gracia E, Ruiz-Gutiérrez V, Fiol M, Lapetra J, Lamuela-Raventos RM, Serra-Majem L. Primary prevention of cardiovascular disease with a Mediterranean diet. N Engl J Med. 2013;368:1279–90. DOI: https://doi.org/10.1056/NEJMoa1200303.
82. Wakefield AJ, Murch SH, Anthony A, Linnell J, Casson DM, Malik M, Berelowitz M, Dhillon AP, Thomson MA, Harvey P, Valentine A, Davies SE, Walker-Smith JA. Ileal-lymphoid-nodular hyperplasia, non-specific colitis, and pervasive developmental disorder in children. Lancet. 1998;351:637–41. DOI: https://doi.org/10.1016/s0140-6736(97)11096-0.
83. Hansoti B, Langdorf MI, Murphy LS. Discriminating between legitimate and predatory open access journals: report from the International Federation for Emergency Medicine Research Committee. West J Emerg Med. 2016;17(5):497–507. DOI: https://doi.org/10.5811/westjem.2016.7.30328.
84. Shamseer L, Moher D, Maduekwe O, Turner L, Barbour V, Burch R, Clark J, Galipeau J, Roberts J, Shea BJ. Potential predatory and legitimate biomedical journals: can you tell the difference? A cross-sectional comparison. BMC Med. 2017;15(1):28. DOI: https://doi.org/10.1186/s12916-017-0785-9.
85. Feyerabend P. Against method. New York: Verso Books; 2010.
Frank Houghton, frank.houghton@lit.ie, Director of Social Sciences ConneXions, Technological University of the Shannon, Limerick, Ireland
Copyright © 2022 Frank Houghton
This work is licensed under a Creative Commons Attribution 4.0 International License.
Journal of the Medical Library Association, VOLUME 110, NUMBER 2, April 2022