Compliance of systematic reviews in veterinary journals with Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) literature search reporting guidelines

Lorraine C. Toews, MLIS
DOI: http://dx.doi.org/10.5195/jmla.2017.246
Received 01 July 2016; Accepted 01 December 2016
ABSTRACT
Objective
Complete, accurate reporting of systematic reviews facilitates assessment of how well reviews have been conducted. The primary objective of this study was to examine compliance of systematic reviews in veterinary journals with Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines for literature search reporting and to examine, from what was reported, the completeness, bias, and reproducibility of the searches in these reviews. The second objective was to examine reporting of the credentials and contributions of those involved in the search process.
Methods
A sample of systematic reviews or meta-analyses published in veterinary journals between 2011 and 2015 was obtained by searching PubMed. Reporting in the full text of each review was checked against certain PRISMA checklist items.
Results
Over one-third of reviews (37%) did not search the CAB Abstracts database, and 9% of reviews searched only 1 database. Almost two-thirds of reviews (65%) did not report any search for grey literature or stated that they excluded grey literature. The majority of reviews (95%) did not report a reproducible search strategy.
Conclusions
Most reviews had significant deficiencies in reporting the search process that raise questions about how these searches were conducted and ultimately cast serious doubts on the validity and reliability of reviews based on a potentially biased and incomplete body of literature. These deficiencies also highlight the need for veterinary journal editors and publishers to be more rigorous in requiring adherence to PRISMA guidelines and to encourage veterinary researchers to include librarians or information specialists on systematic review teams to improve the quality and reporting of searches.
INTRODUCTION

Systematic reviews are a type of research synthesis that uses a clearly formulated question and systematic, explicit, and reproducible methods to identify, select, and critically appraise all relevant published and unpublished studies and to collect and analyze data from the studies included in the review. Since an individual biomedical study cannot provide definitive evidence, well-conducted systematic reviews are a powerful and reliable form of evidence because they contextualize and integrate individual studies within the full body of available research on a topic [1].
Because the literature search is the data collection method in a systematic review and the search results form the evidence base of the review, it is critical that the search be reported in sufficient detail that it can be replicated. Unfortunately, incomplete and inaccurate reporting of systematic review literature searches [2, 3] is part of a larger, widespread problem of poor reporting of all types of biomedical research [4–6]. Poor reporting of biomedical research has serious consequences, making it difficult or impossible to assess the quality of a study, replicate it, or use it in clinical decision making or in subsequent systematic reviews [7]. To address this problem and improve the reproducibility of published research, biomedical journal editors and research funding bodies have supported the development of research reporting guidelines that specify a baseline of information required for a complete and transparent account of the conduct and findings of research studies [8]. The reporting guideline for systematic reviews and meta-analyses is Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), published in 2009 [4].
While reporting a systematic review and actually conducting the review are distinct processes, they are closely interrelated. Complete, accurate reporting helps determine how well a systematic review was conducted, whereas incomplete, inaccurate reporting raises questions about the conduct, quality, and reliability of a systematic review [4, 7, 9, 10]. A recent study found that librarian participation in internal medicine systematic reviews as coauthors who developed, conducted, and reported the search methodology was associated with higher quality reporting of searches and better search reproducibility [11]. By contrast, the quality of reporting of literature searches has not been evaluated for veterinary medicine systematic reviews, and the participation of librarians as coauthors of veterinary systematic reviews appears to be much less common.
The first objective of this study was to address this gap in the literature by examining the compliance of literature searches in veterinary systematic reviews with PRISMA reporting guidelines and by examining what could be determined about the completeness, bias, and reproducibility of the reported searches. The second objective was to examine reporting of the credentials and contributions of those involved in the reviews. The findings have implications for the role that librarians can play in supporting the production of veterinary systematic reviews. A previous study found that librarians have played an important role in improving the quality of systematic reviews in the human medicine literature [11]; librarians have a significant opportunity to play a comparable role in veterinary systematic reviews by using their expertise to ensure high-quality, reproducible, and well-reported search methodologies.
METHODS

A sample of systematic reviews in veterinary journals was obtained by searching PubMed for titles from Ugaz’s “Basic List of Veterinary Medical Serials, Third Edition” [12] and limiting search results to studies published between 2011 and 2015. Because PubMed does not have a publication type that indexes all types of systematic reviews, search results were limited to records tagged with the PubMed publication type meta-analysis or containing the phrase “systematic review” in the title or abstract of the PubMed record. The abstracts of these records were reviewed, and only records whose authors explicitly identified their studies as systematic reviews were included in this study. Authors’ claims that a study was a systematic review were accepted at face value, and no further analysis of these records was done. From these results, reviews from journals in 12 veterinary medicine specialty areas were selected using the subject categories listed in the appendix of the Ugaz study [12], yielding a final sample of 75 systematic reviews (detailed in the supplemental appendix). The sample size was not predetermined; rather, the goal was to include a variety of veterinary specialty areas. Reporting in the full text of each review in the final sample was checked against PRISMA checklist search methods items 7 and 8 [4].
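For illustration only, a PubMed query of the following form would implement these limits; the journal titles shown are placeholders standing in for titles from the Ugaz list, not the exact title set used in this study:

    ("Journal of Veterinary Internal Medicine"[ta]
        OR "Preventive Veterinary Medicine"[ta]
        OR "Equine Veterinary Journal"[ta])
    AND ("2011/01/01"[dp] : "2015/12/31"[dp])
    AND (meta-analysis[pt] OR "systematic review"[tiab])

Here [ta] restricts terms to the journal title field, [dp] to the date of publication, [pt] to the publication type, and [tiab] to the title and abstract fields.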
RESULTS

All of the reviews reported a list of the databases that reviewers searched (detailed in the supplemental appendix). As shown in Table 1, the average number of databases searched per review was 4, but 9% of reviews searched only 1 database. Although PubMed and CAB Abstracts were the most common databases searched, over one-third (37%) of reviews did not search CAB Abstracts. One-quarter (24%) of reviews used the Google Scholar search engine, and 5 of these reviews searched only 1 other database in addition to Google Scholar. Twelve (16%) reviews reported that they searched both PubMed and MEDLINE, but only 5 of these reviews noted which vendor search platform they used in searching MEDLINE.
Table 1: Literature search reporting characteristics of the systematic review sample (n=75)

                                                            n    (%)
Databases searched
    PubMed                                                 60   (80%)
    CAB Abstracts                                          47   (63%)
    MEDLINE                                                20   (27%)
    Web of Science                                         19   (25%)
    Google Scholar                                         18   (24%)
    Agricola                                               17   (23%)
    Scopus                                                 16   (21%)
Number of databases searched (average=4)
    1 database                                              7    (9%)
    2 databases                                            16   (21%)
    3 databases                                            19   (25%)
    4 databases                                            11   (15%)
    5 or more databases                                    22   (29%)
Grey literature
    No grey literature search reported                     45   (60%)
    Unspecified grey literature searched                    9   (12%)
    Conference proceedings, named                           7    (9%)
    Conference proceedings, not named                       5    (7%)
    Theses, dissertations                                   5    (7%)
    Government agency reports                               4    (5%)
    Excluded grey literature                                4    (5%)
Search strategy
    No search strategy reported                             8   (11%)
    Search terms listed                                    63   (84%)
    Full electronic database line-by-line strategy          4    (5%)
Credentials of searcher
    Not reported                                           51   (68%)
    Affiliated with a library, but details not specified    4    (5%)
    Author, not identified as librarian or
      information specialist                               14   (19%)
    Librarian or information specialist                     6    (8%)
Contributions to search process
    Not reported                                           45   (60%)
    Assisted, but role not specified                       13   (17%)
Conference papers and unpublished clinical trials are important types of grey literature in veterinary systematic reviews. Only 16% of reviews in this study reported searching for conference proceedings. Over half of the reviews (60%) did not report any search for grey literature, and a further 5% of reviews reported that they chose to exclude grey literature.
Because 11% of reviews in this study did not report any search strategy, and a further 84% reported search terms without the additional detail necessary to replicate the search, fully 95% of reviews did not report a reproducible search strategy.
The majority of reviews did not report either the credentials of the persons who planned and conducted the literature search for the review (73%) or the specific contributions made by persons who were involved in the search process (77%).
DISCUSSION

PRISMA checklist item 7 states that systematic reviews should “describe all information sources (such as databases…) in the search” [4]. Systematic reviews require a comprehensive search for published and unpublished studies in order to minimize bias; for most review topics, it is therefore necessary to search multiple research databases to ensure that all of the relevant literature is retrieved [2, 13–15]. The appropriate databases to search will vary with the topic of the review, but failure to search multiple databases increases the risk that relevant studies will be missed, which could bias the outcome of the review [16, 17]. As 9% of the reviews in this sample searched only 1 database, readers of those studies cannot have confidence that the reviews retrieved all the relevant literature. Since CAB Abstracts indexes the veterinary medicine journal literature more comprehensively than any other research database [18], the 37% of reviews that did not search CAB Abstracts might have missed relevant research that could affect the outcome of the review.

One-quarter of the reviews searched Google Scholar, and 5 of these reviews searched only 1 other database in addition to Google Scholar. Several studies have identified problems with using Google Scholar for systematic review searching. Unlike curated databases such as PubMed and CAB Abstracts, Google Scholar does not publish lists of its journal and grey literature source content, so it is impossible to determine what proportion of the biomedical literature it covers. Google Scholar also does not publish its search algorithms [19], its algorithms change without notice, and it displays only the first 1,000 hits of a search, so search strategies cannot be reproduced with consistent results [20]. As a result, the completeness and bias of Google Scholar searches cannot be evaluated in the way that they can be for traditional curated databases such as PubMed and CAB Abstracts.
The main reason for systematic reviews to conduct a search of the grey literature is to counteract publication bias [21]. Publication bias is a longstanding pattern in the biomedical literature whereby studies with positive, statistically significant results are more likely to be published than studies with negative or statistically nonsignificant results. Because of this bias, systematic reviews based only on published studies can overestimate the effectiveness of an intervention [22]. In the medical literature, McAuley et al. found that excluding grey literature from systematic reviews resulted in an overestimate of the effect of the intervention by an average of 12% [23], and Hopewell et al. found a 9% exaggeration of intervention effects [24]. It is not clear whether comparable patterns of overestimation of intervention effects are present in veterinary medicine systematic reviews, but this possibility exists, particularly since a much lower percentage of veterinary conference abstracts are later published in the journal literature than is the case in human medicine [25]. A study of swine and bovine vaccine trial conference papers found that only 5.6% of swine trial conference abstracts and 9.2% of bovine trial conference abstracts were ultimately published in the journal literature [26]. By contrast, a systematic review of conference abstract publication rates in human medicine found that 63% of abstracts of clinical trials presented at conferences were later published as journal articles, and subsequent publication was associated with positive results [25]. It is not clear whether subsequent publication of veterinary trial conference abstracts is associated with positive results [26]. What is clear, however, is that a large body of unpublished veterinary studies with the potential to affect review outcomes was excluded from 65% of the systematic reviews in this study, raising serious concerns about bias and potential overestimation of intervention effects in these reviews.
PRISMA checklist item 8 states that systematic reviews should “present [the] full electronic search strategy for at least one database, including any limits used, such that it could be repeated” [4]. Reproducibility is an essential characteristic of all reliable primary biomedical research as well as research synthesis reports such as systematic reviews. Because the studies retrieved by the literature search form the evidence base for a systematic review, it is essential that the search strategy, which is the data collection method for reviews, be reported in sufficient detail to enable the search to be replicated and evaluated [2, 27]. A list of search terms is not sufficient to enable accurate replication of a literature search. Rather, only the exact line-by-line electronic database search strategy contains the level of detail required to replicate the search to evaluate its completeness and detect search errors. The full line-by-line electronic database search history reveals strategies that have a significant impact on both the completeness of the search and the relevance of those records to the review topic, including whether both text words and subject headings were used [2, 13, 16], whether subject headings were exploded [2, 13], and which database record fields were searched. It also reveals search errors such as incorrect use of Boolean and proximity operators [2], incorrect search set combinations, and spelling and truncation errors [28]. Because the majority of reviews (95%) did not report a reproducible search strategy, readers of these reviews would not be able to evaluate the searches for completeness, bias, or search errors.
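As a hypothetical illustration of the level of detail PRISMA item 8 requires (the topic, terms, and line numbers below are invented for this example, not drawn from any review in the sample), a reproducible line-by-line Ovid MEDLINE strategy might look like this:

    Database: Ovid MEDLINE <1946 to current>
    1. exp Dogs/
    2. (dog or dogs or canine*).tw.
    3. 1 or 2
    4. exp Osteoarthritis/
    5. (osteoarthriti* or "degenerative joint disease").tw.
    6. 4 or 5
    7. exp Anti-Inflammatory Agents, Non-Steroidal/
    8. (nsaid* or carprofen or meloxicam).tw.
    9. 7 or 8
    10. 3 and 6 and 9
    11. limit 10 to yr="2011 - 2015"

A record like this shows whether subject headings were exploded (lines 1, 4, and 7), which fields were searched (.tw. restricts terms to title and abstract text words), and how sets were combined, allowing a reader to rerun the search and audit it for errors.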
Although reporting the credentials of the searchers and their contributions to the search is not required by the PRISMA guidelines, the International Committee of Medical Journal Editors strongly encourages biomedical journal editors to require reporting of the contributions made by each person involved in conducting a study [29]. The majority of reviews did not report either the credentials of the persons who planned and conducted the literature search for the review (73%) or the specific contributions made by persons involved in the search process (77%). Because the literature search is the data acquisition method in a systematic review, it would be beneficial for readers of a review to know both the credentials and specific contributions of the persons who were involved in developing and implementing the search methodology.
The Cochrane Handbook, which is widely regarded as the gold standard for systematic review search methodology, recommends that systematic review authors seek guidance from a health care librarian or information specialist in planning and conducting the search [13]. Health sciences librarians bring to the systematic review process training as expert searchers and skills in forming review questions, selecting search terms and databases, crafting high-recall searches, managing references, and reporting search methodology [30–32]. Studies show that systematic reviews in which a librarian or information specialist was a coauthor had literature searches that were more comprehensive [15], had fewer substantive errors [33], were better reported [34], and were more likely to be reproducible [15, 34]. Studies also show that having a second, independent librarian or information specialist peer review the search strategy using a validated tool such as the Peer Review of Electronic Search Strategies (PRESS) guideline “can improve the quality and comprehensiveness of the search and reduce errors” [35] and ultimately improve the overall quality of the review [36].
CONCLUSION

This study reviewed a relatively small number of systematic reviews from 24 veterinary journals, and the reviews in the sample were not randomly selected, so the results might not be generalizable; more definitive results would require further research with a larger, randomly selected sample. Nevertheless, a majority of reviews in this study had significant deficiencies in reporting the literature search in accordance with PRISMA guidelines. These deficiencies raise questions about how the literature searches were conducted and ultimately cast serious doubt on the validity and reliability of reviews that are based on a potentially biased and incomplete body of literature. They also highlight the need for veterinary journal editors and publishers to be more rigorous in requiring adherence to PRISMA guidelines and to encourage veterinary researchers to include librarians or information specialists on systematic review teams to improve both the quality of review searches and the reporting of those searches [26, 34, 37].
Appendix: Final sample of seventy-five systematic reviews
REFERENCES

1 Cook DJ, Mulrow CD, Haynes RB. Systematic reviews: synthesis of best evidence for clinical decisions. Ann Intern Med. 1997 Mar 1;126(5):376–80.
2 Maggio LA, Tannery NH, Kanter SL. Reproducibility of literature search reporting in medical education reviews. Acad Med. 2011 Aug;86(8):1049–54. DOI: http://dx.doi.org/10.1097/ACM.0b013e31822221e7.
3 Yoshii A, Plaut DA, McGraw KA, Anderson MJ, Wellik KE. Analysis of the reporting of search strategies in Cochrane systematic reviews. J Med Libr Assoc. 2009 Jan;97(1):21–9. DOI: http://dx.doi.org/10.3163/1536-5050.97.1.004.
4 Moher D, Liberati A, Tetzlaff J, Altman DG; PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Ann Intern Med. 2009 Aug 18;151(4):264–9.
5 Sargeant JM, Thompson A, Valcour J, Elgie R, Saint-Onge J, Marcynuk P, Snedeker K. Quality of reporting of clinical trials of dogs and cats and associations with treatment effects. J Vet Intern Med. 2010 Jan/Feb;24(1):44–50. DOI: http://dx.doi.org/10.1111/j.1939-1676.2009.0386.x.
6 Sargeant JM, Elgie R, Valcour J, Saint-Onge J, Thompson A, Marcynuk P, Snedeker K. Methodological quality and completeness of reporting in clinical trials conducted in livestock species. Prev Vet Med. 2009 Oct 1;91(2–4):107–15. DOI: http://dx.doi.org/10.1016/j.prevetmed.2009.06.002.
7 Simera I, Moher D, Hirst A, Hoey J, Schulz KF, Altman DG. Transparent and accurate reporting increases reliability, utility, and impact of your research: reporting guidelines and the EQUATOR Network. BMC Med. 2010 Apr 26;8:24. DOI: http://dx.doi.org/10.1186/1741-7015-8-24.
8 Simera I, Altman DG. Writing a research article that is “fit for purpose”: EQUATOR Network and reporting guidelines. Evid Based Med. 2009 Oct;14(5):132–4. DOI: http://dx.doi.org/10.1136/ebm.14.5.132.
9 O’Connor AM. Improving the quality of reviews in veterinary science: the author’s responsibility. Vet J. 2012 May;192(2):133–4. DOI: http://dx.doi.org/10.1016/j.tvjl.2011.10.014.
10 Simera I, Kirtley S, Altman DG. Reporting clinical research: guidance to encourage accurate and transparent research reporting. Maturitas. 2012 May;72(1):84–7. DOI: http://dx.doi.org/10.1016/j.maturitas.2012.02.012.
11 Rethlefsen ML, Farrell AM, Osterhaus Trzasko LC, Brigham TJ. Librarian co-authors correlated with higher quality reported search strategies in general internal medicine systematic reviews. J Clin Epidemiol. 2015 Jun;68(6):617–26. DOI: http://dx.doi.org/10.1016/j.jclinepi.2014.11.025.
12 Ugaz AG, Boyd CT, Croft VF, Carrigan EE, Anderson KM. Basic list of veterinary medical serials, third edition: using a decision matrix to update the core list of veterinary journals. J Med Libr Assoc. 2010 Oct;98(4):282–92. DOI: http://dx.doi.org/10.3163/1536-5050.98.4.004.
13 Higgins JPT, Green S, eds. Cochrane handbook for systematic reviews of interventions [Internet]. Version 5.1.0. Cochrane Collaboration; Mar 2011 [cited 22 Nov 2016]. <http://handbook.cochrane.org>.
14 Egger M, Juni P, Bartlett C, Holenstein F, Sterne J. How important are comprehensive literature searches and the assessment of trial quality in systematic reviews? empirical study. Health Technol Assess. 2003;7(1):1–76.
15 Golder S, Loke Y, McIntosh HM. Poor reporting and inadequate searches were apparent in systematic reviews of adverse effects. J Clin Epidemiol. 2008 May;61(5):440–8. DOI: http://dx.doi.org/10.1016/j.jclinepi.2007.06.005.
16 Relevo R, Balshem H. Finding evidence for comparing medical interventions: AHRQ and the Effective Health Care Program. J Clin Epidemiol. 2011 Nov;64(11):1168–77. DOI: http://dx.doi.org/10.1016/j.jclinepi.2010.11.022.
17 Sampson M, Barrowman NJ, Moher D, Klassen TP, Pham B, Platt R, St John PD, Viola R, Raina P. Should meta-analysts search Embase in addition to MEDLINE? J Clin Epidemiol. 2003 Oct;56(10):943–55.
18 Grindlay DJC, Brennan ML, Dean RS. Searching the veterinary literature: a comparison of the coverage of veterinary journals by nine bibliographic databases. J Vet Med Educ. 2012 Winter;39(4):404–12. DOI: http://dx.doi.org/10.3138/jvme.1111.109R.
19 Bohannon J. Google Scholar wins raves—but can it be trusted? [scientific publishing]. Science. 2014 Jan 3;343(6166):14. DOI: http://dx.doi.org/10.1126/science.343.6166.14.
20 Bramer WM. Variation in number of hits for complex searches in Google Scholar. J Med Libr Assoc. 2016 Apr;104(2):143–5. DOI: http://dx.doi.org/10.3163/1536-5050.104.2.009.
21 Dickersin K, Scherer R, Lefebvre C. Identifying relevant studies for systematic reviews. BMJ. 1994 Nov 12;309(6964):1286–91.
22 Dickersin K, Min YI. Publication bias: the problem that won’t go away. Ann N Y Acad Sci. 1993 Dec 31;703:135–46; discussion 146–8.
23 McAuley L, Pham B, Tugwell P, Moher D. Does the inclusion of grey literature influence estimates of intervention effectiveness reported in meta-analyses? Lancet. 2000 Oct 7;356(9237):1228–31.
24 Hopewell S, McDonald S, Clarke M, Egger M. Grey literature in meta-analyses of randomized trials of health care interventions. Cochrane Database Syst Rev. 2007 Apr 18;(2):MR000010.
25 Scherer RW, Langenberg P, von Elm E. Full publication of results initially presented in abstracts. Cochrane Database Syst Rev. 2007 Apr 18;(2):MR000005.
26 Brace S, Taylor D, O’Connor AM. The quality of reporting and publication status of vaccines trials presented at veterinary conferences from 1988 to 2003. Vaccine. 2010 Jul 19;28(32):5306–14.
27 Sampson M, McGowan J, Tetzlaff J, Cogo E, Moher D. No consensus exists on search reporting methods for systematic reviews. J Clin Epidemiol. 2008 Aug;61(8):748–54. DOI: http://dx.doi.org/10.1016/j.jclinepi.2007.10.009.
28 Sampson M, McGowan J. Errors in search strategies were identified by type and frequency. J Clin Epidemiol. 2006 Oct;59(10):1057–63.
29 International Committee of Medical Journal Editors. Defining the role of authors and contributors [Internet]. The Committee; 2015 [cited 23 Nov 2016]. <http://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html>.
30 Harris MR. The librarian’s roles in the systematic review process: a case study. J Med Libr Assoc. 2005 Jan;93(1):81–7.
31 McGowan J, Sampson M. Systematic reviews need systematic searchers. J Med Libr Assoc. 2005 Jan;93(1):74–80.
32 Rethlefsen ML, Murad MH, Livingston EH. Engaging medical librarians to improve the quality of review articles. JAMA. 2014 Sep 10;312(10):999–1000. DOI: http://dx.doi.org/10.1001/jama.2014.9263.
33 Zhang L, Sampson M, McGowan J. Reporting of the role of the expert searcher in Cochrane reviews. Evid Based Libr Inf Pract. 2006;1(4):3–16.
34 Rethlefsen ML, Farrell AM, Osterhaus Trzasko LC, Brigham TJ. Librarian co-authors correlated with higher quality reported search strategies in general internal medicine systematic reviews. J Clin Epidemiol. 2015 Jun;68(6):617–26. DOI: http://dx.doi.org/10.1016/j.jclinepi.2014.11.025.
35 McGowan J, Sampson M, Salzwedel DM, Cogo E, Foerster V, Lefebvre C. PRESS Peer Review of Electronic Search Strategies: 2015 guideline statement. J Clin Epidemiol. 2016 Jul;75:40–6. DOI: http://dx.doi.org/10.1016/j.jclinepi.2016.01.021.
36 Sampson M, McGowan J, Cogo E, Grimshaw J, Moher D, Lefebvre C. An evidence-based practice guideline for the peer review of electronic search strategies. J Clin Epidemiol. 2009 Sep;62(9):944–52. DOI: http://dx.doi.org/10.1016/j.jclinepi.2008.10.012.
37 Grindlay D. Reporting guidelines: how can they be implemented by veterinary journals? Equine Vet J. 2015 Mar;47(2):133–4. DOI: http://dx.doi.org/10.1111/evj.12395.
Lorraine C. Toews, MLIS, ltoews@ucalgary.ca, Librarian, Veterinary Medicine and Bachelor of Health Sciences, Health Sciences Library, 3330 Hospital Drive Northwest, University of Calgary, Calgary, AB T2N 4N1, Canada, and Adjunct Associate Librarian, Department of Ecosystem and Public Health, Faculty of Veterinary Medicine, 3280 Hospital Drive Northwest, University of Calgary, Calgary, AB T2N 4Z6, Canada
This article has been approved for the Medical Library Association’s Independent Reading Program <http://www.mlanet.org/page/independent-reading-program>.
Articles in this journal are licensed under a Creative Commons Attribution 4.0 International License.
This journal is published by the University Library System of the University of Pittsburgh as part of its D-Scribe Digital Publishing Program and is cosponsored by the University of Pittsburgh Press.
Journal of the Medical Library Association, VOLUME 105, NUMBER 3, July 2017