Advancing PubMed? A comparison of third-party PubMed/Medline tools
Related papers
Research Synthesis Methods, 2019
Rigorous evidence identification is essential for systematic reviews and meta-analyses (evidence syntheses), because the selection of relevant studies determines a review's outcome, validity, and explanatory power. Yet the search systems that provide access to this evidence offer varying levels of precision, recall, and reproducibility, and demand different levels of effort. To date, it remains unclear which search systems are most appropriate for evidence synthesis and why. Advice on which search engines and bibliographic databases to choose for systematic searches is limited and lacks systematic, empirical performance assessments. This study investigates and compares the systematic search qualities of 28 widely used academic search systems, including Google Scholar, PubMed, and Web of Science. A novel, query-based method tests how well users are able to interact with each system and retrieve records from it. The study is the first to show the extent to which search systems can effectively and efficiently perform (Boolean) searches with regard to precision, recall, and reproducibility. We found substantial differences in the performance of search systems, meaning that their usability in systematic searches varies. Indeed, only half of the search systems analysed, and only a few Open Access databases, can be recommended for evidence syntheses without substantial caveats. In particular, our findings demonstrate why Google Scholar is inappropriate as a principal search system. We call on database owners to recognise the requirements of evidence synthesis, and on academic journals to reassess quality requirements for systematic reviews. Our findings aim to support researchers in conducting better searches for better evidence synthesis.
Development of an efficient search filter to retrieve systematic reviews from PubMed
Journal of the Medical Library Association, 2021
Objective: Locating systematic reviews is essential for clinicians and researchers when creating or updating reviews and for decision-making in health care. This study aimed to develop a search filter for retrieving systematic reviews that improves upon the performance of the PubMed systematic review search filter. Methods: Search terms were identified from abstracts of reviews published in Cochrane Database of Systematic Reviews and the titles of articles indexed as systematic reviews in PubMed. Both the precision of the candidate terms and the number of systematic reviews retrieved from PubMed were evaluated after excluding the subset of articles retrieved by the PubMed systematic review filter. Terms that achieved a precision greater than 70% and relevant publication types indexed with MeSH terms were included in the filter search strategy. Results: The search strategy used in our filter added specific terms not included in PubMed's systematic review filter and achieved a 61.3% increase in the number of retrieved articles that are potential systematic reviews. Moreover, it achieved an average precision that is likely greater than 80%. Conclusions: The developed search filter will enable users to identify more systematic reviews from PubMed than the PubMed systematic review filter with high precision.
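To make the structure of such a filter concrete: it is typically a PubMed Boolean string with highly precise title and publication-type terms OR-ed together, and the existing subset excluded during evaluation. The fragment below is an illustrative sketch only — `meta-analysis[pt]` is a real PubMed publication type and `systematic[sb]` is PubMed's systematic-review subset tag, but the title terms here are placeholders, not the published filter:

```
("systematic review"[ti] OR "umbrella review"[ti] OR meta-analysis[pt])
NOT systematic[sb]
```

The trailing `NOT` clause mirrors the evaluation step described above: it isolates candidate records that the existing PubMed filter does not already retrieve.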
Health Information Science and Systems, 2014
Individuals and groups who write systematic reviews and meta-analyses in evidence-based medicine regularly carry out literature searches across multiple search engines linked to different bibliographic databases, and thus have an urgent need for a suitable metasearch engine to save time spent on repeated searches and to remove duplicate publications from initial consideration. Unlike general users who generally carry out searches to find a few highly relevant (or highly recent) articles, systematic reviewers seek to obtain a comprehensive set of articles on a given topic, satisfying specific criteria. This creates special requirements and challenges for metasearch engine design and implementation.
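The deduplication requirement described above can be sketched as follows, assuming each record is a dict with an optional `doi` field and a `title` field (the field names and matching rules are illustrative, not a specification of any particular metasearch engine):

```python
# Sketch of cross-database deduplication for metasearch results.
import re

def dedup_key(record):
    """Prefer the DOI as the identity key; fall back to a normalized title."""
    doi = (record.get("doi") or "").strip().lower()
    if doi:
        return ("doi", doi)
    # Normalize the title: lowercase, collapse punctuation and whitespace runs.
    title = re.sub(r"[^a-z0-9]+", " ", record["title"].lower()).strip()
    return ("title", title)

def merge_results(*result_sets):
    """Merge records from several databases, keeping the first occurrence."""
    seen, merged = set(), []
    for results in result_sets:
        for record in results:
            key = dedup_key(record)
            if key not in seen:
                seen.add(key)
                merged.append(record)
    return merged
```

In practice, real systems layer fuzzier matching (author/year, page ranges) on top of this, since DOIs are missing or inconsistent in many older records.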
Systematic Reviews, 2016
Background: Previously, we reported on the low recall of Google Scholar (GS) for systematic review (SR) searching. Here, we test our conclusions further in a prospective study by comparing the coverage, recall, and precision of SR search strategies previously performed in Embase, MEDLINE, and GS. Methods: The original search results from Embase and MEDLINE and the first 1,000 results of GS for librarian-mediated SR searches were recorded. Once the inclusion-exclusion process for the resulting SR was complete, search results from all three databases were screened for the SR's included references. All three databases were then searched post hoc for included references not found in the original search results. Results: We checked 4,795 included references from 120 SRs against the original search results. Coverage of GS was high (97.2%) but marginally lower than Embase and MEDLINE combined (97.5%). MEDLINE on its own achieved 92.3% coverage. Total recall of Embase/MEDLINE combined was 81.6% for all included references, compared to 72.8% for GS and 72.6% for MEDLINE alone. However, only 46.4% of the included references were among the downloadable first 1,000 references in GS. When examining data for each SR, the traditional databases' recall was better than that of GS, even when taking into account included references listed beyond the first 1,000 search results. Finally, precision of the first 1,000 references of GS is comparable to that of searches in Embase and MEDLINE combined. Conclusions: Although overall coverage and recall of GS are high for many searches, the database does not achieve the full coverage suggested by some previous research. Further, being able to view only the first 1,000 records in GS severely reduces its recall percentages. If GS enabled browsing of records beyond the first 1,000, its recall would increase, but not sufficiently for it to be used alone in SR searching; the time needed to screen results would also increase considerably.
These results support our assertion that neither GS nor any of the other databases investigated is, on its own, an acceptable database to support systematic review searching.
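The three quantities this study separates — coverage (is the reference in the database at all?), recall (did the search return it?), and usable recall under a display cap — can be illustrated with set arithmetic on made-up reference IDs (toy data, not the study's figures):

```python
# Coverage vs. recall vs. capped recall, on invented reference IDs.

def fraction(found, included):
    """Share of the review's included references present in `found`."""
    return len(found & included) / len(included)

included  = {1, 2, 3, 4, 5}     # references cited by the finished SR
indexed   = {1, 2, 3, 4, 5, 6}  # everything the database holds  -> coverage
retrieved = {2, 3, 4, 9}        # what the search strategy found -> recall
first_k   = {2, 3}              # what the interface lets you view -> usable recall

print(fraction(indexed, included))     # coverage: 1.0
print(fraction(retrieved, included))   # recall:   0.6
print(fraction(first_k, included))     # capped:   0.4
```

The GS results above are exactly this pattern at scale: high coverage, lower recall, and a sharp further drop once only the first 1,000 records are viewable.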
Journal of the Medical Library Association (JMLA), 2014
Background: Since 2005, International Committee of Medical Journal Editors (ICMJE) member journals have required that clinical trials be registered in publicly available trials registers before they are considered for publication. Objectives: The research explores whether it is adequate, when searching to inform systematic reviews, to search for relevant clinical trials using only public trials registers and to identify the optimal search approaches in trials registers. Methods: A search was conducted in ClinicalTrials.gov and the International Clinical Trials Registry Platform (ICTRP) for research studies that had been included in eight systematic reviews. Four search approaches (highly sensitive, sensitive, precise, and highly precise) were performed using the basic and advanced interfaces in both resources.
Journal of the Medical Library Association, 2022
Objective: The National Library of Medicine (NLM) inaugurated a “publication type” concept to facilitate searches for systematic reviews (SRs). On the other hand, clinical queries (CQs) are validated search strategies designed to retrieve scientifically sound, clinically relevant original and review articles from biomedical literature databases. We compared the retrieval performance of the SR publication type (SR[pt]) against the most sensitive CQ for systematic review articles (CQrs) in PubMed. Methods: We ran date-limited searches of SR[pt] and CQrs to compare the relative yield of articles and SRs, focusing on the differences in retrieval of SRs by SR[pt] but not CQrs (SR[pt] NOT CQrs) and CQrs NOT SR[pt]. Random samples of articles retrieved in each of these comparisons were examined for SRs until a consistent pattern became evident. Results: For SR[pt] NOT CQrs, the yield was relatively low in quantity but rich in quality, with 79% of the articles being SRs. For CQrs NOT SR[pt]...
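The two difference sets compared in the Methods can be written directly as PubMed Boolean searches. In the sketch below, `systematic review[pt]` is the real publication-type tag; `<CQrs strategy>` stands in for the validated clinical-queries string (not reproduced here), and the date range is illustrative:

```
(systematic review[pt] NOT (<CQrs strategy>)) AND 2019/01/01:2019/12/31[dp]
((<CQrs strategy>) NOT systematic review[pt]) AND 2019/01/01:2019/12/31[dp]
```

Sampling each difference set for true SRs, as the authors did, then estimates what each strategy uniquely contributes.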
2013
Background: The usefulness of Google Scholar (GS) as a bibliographic database for biomedical systematic review (SR) searching is a subject of current interest and debate in research circles. Recent research has suggested that GS might even be used alone in SR searching. This assertion is challenged here by testing whether GS can locate all studies included in 21 previously published SRs. Second, the study examines the recall of GS, taking into account the maximum number of items that can be viewed, and tests whether more complete searches created by an information specialist improve recall compared to the searches used in the 21 published SRs. Methods: The authors identified 21 biomedical SRs that had used GS and PubMed as information sources and reported their use of identical, reproducible search strategies in both databases. These search strategies were rerun in GS and PubMed and analyzed for coverage and recall. Efforts were made to improve searches that underperformed in each database. Results: GS's overall coverage was higher than PubMed's (98% versus 91%), and its overall recall was also higher: 80% of the references included in the 21 SRs were returned by the original searches in GS, versus 68% in PubMed. However, only 72% of the included references could actually be used, as only the first 1,000 hits (the maximum number GS shows) are viewable. Practical precision (the number of included references retrieved in the first 1,000, divided by 1,000) was on average 1.9%, only slightly lower than reported for other published SRs. Improving the searches with the lowest recall raised recall from 48% to 66% in GS and from 60% to 85% in PubMed. Conclusions: Although its coverage and precision are acceptable, GS, because of its incomplete recall, should not be used as a single source in SR searching.
A specialized, curated medical database such as PubMed provides experienced searchers with tools and functionality that help improve recall, as well as numerous options to optimize precision. Searches for SRs should be performed by experienced searchers creating searches that maximize recall for as many databases as the search expert deems necessary.
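The "practical precision" measure defined in the abstract — included references found within the viewable cap, divided by the cap — can be sketched as follows (the ranked list and relevance pattern below are invented for illustration, not the study's data):

```python
# Practical precision: included references among the first `cap` ranked hits,
# divided by `cap` (GS shows at most 1,000 results).

def practical_precision(ranked_results, included, cap=1000):
    viewable = ranked_results[:cap]
    hits = sum(1 for ref in viewable if ref in included)
    return hits / cap

# A search returning 1,500 ranked record IDs, with 19 review-relevant records
# scattered through the viewable first 1,000:
ranked = list(range(1500))
included = set(range(0, 1000, 53))   # 19 relevant IDs, all ranked < 1,000
print(practical_precision(ranked, included))   # 0.019
```

Note the denominator is the cap, not the number of relevant records, which is why the measure sits near 2% even for well-performing searches.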
Journal of the Medical Library Association
Objective: The aim of this study was to investigate whether the references included in a set of completed systematic reviews are indexed in Ovid MEDLINE and Ovid Embase, and how many references would be missed if literature searches were restricted to one of these sources, or to the two databases in combination. Methods: We conducted a cross-sectional study in which we searched for each included reference (n = 4,709) in 274 reviews produced by the Norwegian Institute of Public Health to find out whether the references were indexed in the respective databases. The data were recorded in an Excel spreadsheet, in which we calculated the indexing rate. The reviews were sorted into eight categories to see whether the indexing rate differed from subject to subject. Results: The indexing rate in MEDLINE (86.6%) was slightly lower than in Embase (88.2%). Without the MEDLINE records in Embase, the indexing rate in Embase was 71.8%. The highest indexing rate was achieved by combining both databases (90.2%). Th...
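The indexing-rate calculation described in the Methods reduces to set membership: for each database, the share of included references it indexes, with the combined rate taken over the union. A sketch with toy reference IDs (not the study's 4,709 references):

```python
# Indexing rate per database and for the combined union, on invented IDs.

def indexing_rate(included, indexed):
    """Share of the reviews' included references present in a database."""
    return len(included & indexed) / len(included)

included = set(range(100))        # all references from the completed reviews
medline  = set(range(0, 87))      # 87 of them indexed in MEDLINE
embase   = set(range(10, 98))     # 88 of them indexed in Embase

print(indexing_rate(included, medline))           # 0.87
print(indexing_rate(included, embase))            # 0.88
print(indexing_rate(included, medline | embase))  # 0.98 combined
```

As in the study, the union only modestly exceeds either database alone because the two indexes overlap heavily.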