Design and implementation of Metta, a metasearch engine for biomedical literature retrieval intended for systematic reviewers

Development of an efficient search filter to retrieve systematic reviews from PubMed

Journal of the Medical Library Association, 2021

Objective: Locating systematic reviews is essential for clinicians and researchers when creating or updating reviews and for decision-making in health care. This study aimed to develop a search filter for retrieving systematic reviews that improves upon the performance of the PubMed systematic review search filter. Methods: Search terms were identified from abstracts of reviews published in the Cochrane Database of Systematic Reviews and the titles of articles indexed as systematic reviews in PubMed. Both the precision of the candidate terms and the number of systematic reviews retrieved from PubMed were evaluated after excluding the subset of articles retrieved by the PubMed systematic review filter. Terms that achieved a precision greater than 70% and relevant publication types indexed with MeSH terms were included in the filter search strategy. Results: The search strategy used in our filter added specific terms not included in PubMed's systematic review filter and achieved a 61.3% increase in the number of retrieved articles that are potential systematic reviews. Moreover, it achieved an average precision that is likely greater than 80%. Conclusions: The developed search filter will enable users to identify, with high precision, more systematic reviews from PubMed than the PubMed systematic review filter does.
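
The term-selection step described above can be illustrated with a small sketch: compute each candidate term's precision against a labelled sample of retrieved records and keep only terms above the 70% threshold. This is a hypothetical reconstruction, not the authors' code, and the candidate terms and labelled records are invented for illustration.

```python
# Hypothetical sketch of the term-selection step: keep candidate terms whose
# precision against a labelled sample of retrieved records exceeds 70%.
# Candidate terms and labelled records are invented for illustration.

from collections import defaultdict

# Each record: (pmid, set of terms matched in title/abstract, is_systematic_review)
labelled_records = [
    ("1001", {"systematic review", "meta-analysis"}, True),
    ("1002", {"scoping review"}, False),
    ("1003", {"meta-analysis"}, True),
    ("1004", {"systematic search"}, True),
    ("1005", {"systematic search"}, False),
]

candidate_terms = ["systematic review", "meta-analysis", "scoping review", "systematic search"]

retrieved = defaultdict(int)   # records retrieved by each term
relevant = defaultdict(int)    # of those, records that are true systematic reviews

for _, terms, is_sr in labelled_records:
    for term in candidate_terms:
        if term in terms:
            retrieved[term] += 1
            relevant[term] += int(is_sr)

PRECISION_THRESHOLD = 0.70
selected = []
for term in candidate_terms:
    if retrieved[term] == 0:
        continue
    precision = relevant[term] / retrieved[term]
    print(f"{term!r}: precision = {precision:.0%} ({relevant[term]}/{retrieved[term]})")
    if precision > PRECISION_THRESHOLD:
        selected.append(term)

print("Terms kept for the filter:", selected)
```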

An optimal search filter for retrieving systematic reviews and meta-analyses

BMC Medical Research Methodology, 2012

Background: Health-evidence.ca is an online registry of systematic reviews evaluating the effectiveness of public health interventions. Extensive searching of bibliographic databases is required to keep the registry up to date. Performance was evaluated in terms of sensitivity, specificity, precision, and the number needed to read for each filter.
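
For reference, the four filter-performance measures named above are conventionally defined from the retrieval contingency table; the formulas below restate those standard definitions (TP, FP, TN, FN are true/false positives/negatives against a gold-standard set of systematic reviews).

```latex
\begin{align*}
\text{Sensitivity (recall)} &= \frac{TP}{TP + FN} \\
\text{Specificity} &= \frac{TN}{TN + FP} \\
\text{Precision} &= \frac{TP}{TP + FP} \\
\text{Number needed to read (NNR)} &= \frac{1}{\text{Precision}} = \frac{TP + FP}{TP}
\end{align*}
```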

Which Academic Search Systems are Suitable for Systematic Reviews or Meta-Analyses? Evaluating Retrieval Qualities of Google Scholar, PubMed and 26 other Resources

Research Synthesis Methods, 2019

Rigorous evidence identification is essential for systematic reviews and meta-analyses (evidence syntheses), because the sample selection of relevant studies determines a review's outcome, validity, and explanatory power. Yet the search systems allowing access to this evidence provide varying levels of precision, recall, and reproducibility and also demand different levels of effort. To date, it remains unclear which search systems are most appropriate for evidence synthesis and why. Advice on which search engines and bibliographic databases to choose for systematic searches is limited and lacks systematic, empirical performance assessments. This study investigates and compares the systematic search qualities of 28 widely used academic search systems, including Google Scholar, PubMed, and Web of Science. A novel, query-based method tests how well users are able to interact with each system and retrieve records from it. The study is the first to show the extent to which search systems can effectively and efficiently perform (Boolean) searches with regard to precision, recall, and reproducibility. We found substantial differences in the performance of search systems, meaning that their usability in systematic searches varies. Indeed, only half of the search systems analysed, and only a few Open Access databases, can be recommended for evidence syntheses without adding substantial caveats. In particular, our findings demonstrate why Google Scholar is inappropriate as a principal search system. We call for database owners to recognise the requirements of evidence synthesis, and for academic journals to reassess quality requirements for systematic reviews. Our findings aim to support researchers in conducting better searches for better evidence synthesis.

Comparing the coverage, recall, and precision of searches for 120 systematic reviews in Embase, MEDLINE, and Google Scholar: a prospective study

Systematic Reviews, 2016

Background: Previously, we reported on the low recall of Google Scholar (GS) for systematic review (SR) searching. Here, we test our conclusions further in a prospective study by comparing the coverage, recall, and precision of SR search strategies previously performed in Embase, MEDLINE, and GS. Methods: The original search results from Embase and MEDLINE and the first 1,000 results of GS for librarian-mediated SR searches were recorded. Once the inclusion-exclusion process for the resulting SR was complete, search results from all three databases were screened for the SR's included references. All three databases were then searched post hoc for included references not found in the original search results. Results: We checked 4,795 included references from 120 SRs against the original search results. Coverage of GS was high (97.2%) but marginally lower than Embase and MEDLINE combined (97.5%). MEDLINE on its own achieved 92.3% coverage. Total recall of Embase/MEDLINE combined was 81.6% for all included references, compared to 72.8% for GS and 72.6% for MEDLINE alone. However, only 46.4% of the included references were among the downloadable first 1,000 references in GS. When examining data for each SR, the traditional databases' recall was better than that of GS, even when taking into account included references listed beyond the first 1,000 search results. Finally, precision of the first 1,000 references of GS is comparable to that of searches in Embase and MEDLINE combined. Conclusions: Although overall coverage and recall of GS are high for many searches, the database does not achieve the full coverage that some researchers found in previous research. Further, being able to view only the first 1,000 records in GS severely reduces its recall percentages. If GS enabled browsing of records beyond the first 1,000, its recall would increase, but not sufficiently for it to be used alone in SR searching; the time needed to screen results would also increase considerably. These results support our assertion that neither GS nor any of the other databases investigated is, on its own, an acceptable database to support systematic review searching.
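
The coverage and recall figures above reduce to set arithmetic over three sets per review: the included references, the references indexed in a database, and the references actually retrieved by the search (for GS, truncated to the first 1,000 downloadable records). The sketch below uses invented data and is not the study's analysis code.

```python
# Hypothetical sketch of the coverage/recall calculation described above.
# Coverage: share of a review's included references that are indexed in a database.
# Recall:   share of included references actually retrieved by the search
#           (for Google Scholar, only the first 1,000 results can be downloaded).

included = {"ref1", "ref2", "ref3", "ref4", "ref5"}           # invented included references
indexed_in_gs = {"ref1", "ref2", "ref3", "ref4"}              # invented: indexed anywhere in GS
gs_results_ranked = ["ref9", "ref1", "ref8", "ref2", "ref7"]  # invented ranked GS output

def coverage(included, indexed):
    return len(included & indexed) / len(included)

def recall(included, retrieved):
    return len(included & set(retrieved)) / len(included)

MAX_DOWNLOADABLE = 1000  # GS only exposes the first 1,000 records
first_1000 = gs_results_ranked[:MAX_DOWNLOADABLE]

print(f"GS coverage:            {coverage(included, indexed_in_gs):.1%}")
print(f"GS recall (first 1000): {recall(included, first_1000):.1%}")
```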

Searching ClinicalTrials.gov and the International Clinical Trials Registry Platform to inform systematic reviews: what are the optimal search approaches?

Journal of the Medical Library Association, 2014

Background: Since 2005, International Committee of Medical Journal Editors (ICMJE) member journals have required that clinical trials be registered in publicly available trials registers before they are considered for publication. Objectives: The research explores whether it is adequate, when searching to inform systematic reviews, to search for relevant clinical trials using only public trials registers, and it aims to identify the optimal search approaches in trials registers. Methods: A search was conducted in ClinicalTrials.gov and the International Clinical Trials Registry Platform (ICTRP) for research studies that had been included in eight systematic reviews. Four search approaches (highly sensitive, sensitive, precise, and highly precise) were performed using the basic and advanced interfaces in both resources.
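
The four approaches differ mainly in how broadly the query is cast. The toy sketch below illustrates the sensitivity/precision trade-off by assembling a broad (sensitive) and a narrow (precise) query string from the same concepts; the syntax is generic and illustrative, not the exact query language of ClinicalTrials.gov or ICTRP.

```python
# Illustrative only: a "highly sensitive" versus a "highly precise" query
# assembled from the same review question. The syntax is generic and does not
# reproduce the exact query languages of ClinicalTrials.gov or ICTRP.

condition_synonyms = ["myocardial infarction", "heart attack", "MI"]
intervention_synonyms = ["beta blocker", "beta-blocker", "metoprolol"]

def or_block(terms):
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

# Sensitive: every synonym for every concept, concepts combined with AND.
sensitive_query = f"{or_block(condition_synonyms)} AND {or_block(intervention_synonyms)}"

# Precise: a single exact phrase per concept, which retrieves fewer, more
# specific records at the cost of missing variant terminology.
precise_query = '"myocardial infarction" AND "beta blocker"'

print("Sensitive:", sensitive_query)
print("Precise:  ", precise_query)
```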

Optimal database combinations for literature searches in systematic reviews: a prospective exploratory study

Systematic reviews, 2017

Within systematic reviews, when searching for relevant references, it is advisable to use multiple databases. However, searching databases is laborious and time-consuming, as the syntax of search strategies is database-specific. We aimed to determine the optimal combination of databases needed to conduct efficient searches in systematic reviews and whether current practice in published reviews is appropriate. While previous studies determined the coverage of databases, we analyzed the actual retrieval from the original searches for systematic reviews. Since May 2013, the first author prospectively recorded results from systematic review searches that he performed at his institution. PubMed was used to identify systematic reviews published using our search strategy results. For each published systematic review, we extracted the references of the included studies. Using the prospectively recorded results and the studies included in the publications, we calculated recall, precision, a...
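
One way to make the "optimal combination" question concrete is to compute, for every subset of databases, the recall of the pooled retrieved records against a review's included studies. The sketch below does this on invented retrieval sets and is not the authors' actual analysis.

```python
# Hypothetical sketch: recall of database combinations against a review's
# included references. Retrieval sets below are invented for illustration.

from itertools import combinations

included = {"s1", "s2", "s3", "s4", "s5", "s6"}
retrieved_by = {
    "Embase":         {"s1", "s2", "s3", "s5"},
    "MEDLINE":        {"s1", "s2", "s4"},
    "Web of Science": {"s2", "s6"},
}

def recall(pooled, included):
    return len(pooled & included) / len(included)

for size in range(1, len(retrieved_by) + 1):
    for combo in combinations(retrieved_by, size):
        pooled = set().union(*(retrieved_by[db] for db in combo))
        print(f"{' + '.join(combo):<35} recall = {recall(pooled, included):.1%}")
```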

Leave No Stone Unturned: Introducing a Revolutionary Meta-search Tool for Rigorous and Efficient Systematic Literature Searches

Proceedings of the 23rd European Conference on Information Systems (ECIS 2015), 2015

A rigorous and systematic search that uncovers relevant literature is a crucial part of any research project. However, finding literature in the information systems (IS) field is a complex, time-consuming and error-prone task. Due to the interdisciplinary nature of the IS field, research contributions are published in a wide variety of outlets (i.e., journals and conference proceedings) from a diverse set of disciplines (e.g., computer sciences, economics, management, sociology, medical sciences). These outlets are dispersed over numerous literature databases, each with its own functionalities, peculiarities, and constraints. To address this issue, we developed LitSonar. LitSonar is a revolutionary meta-search engine for academic literature which consolidates search results from several literature databases. LitSonar aims to improve the quality of literature reviews by enhancing rigour and efficiency of literature searches. Following the design science research paradigm, LitSonar is developed with an incremental development approach consisting of multiple design cycles of artefact creation/refinement and qualitative/quantitative evaluation. In this prototype paper, we present the overall design objectives as well as implementation details of the current prototype. In doing so, this paper can help researchers in evaluating approaches towards developing novel solutions to improve efficiency and rigour of literature searches.
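
The consolidation idea described above (one query fanned out to several databases, results merged and de-duplicated) can be sketched as follows. The adapter functions and the DOI-based de-duplication rule are assumptions made for illustration, not LitSonar's actual implementation.

```python
# Minimal sketch of a metasearch consolidation step: fan a query out to several
# source adapters, then merge and de-duplicate the results. The adapters and the
# DOI-based de-duplication key are illustrative assumptions, not LitSonar's code.

from typing import Callable

Record = dict  # e.g. {"title": ..., "doi": ..., "source": ...}

def search_source_a(query: str) -> list[Record]:
    # Stand-in for a real database adapter (a real one would call the database's API).
    return [{"title": "Paper A", "doi": "10.1/a", "source": "SourceA"}]

def search_source_b(query: str) -> list[Record]:
    return [{"title": "Paper A", "doi": "10.1/a", "source": "SourceB"},
            {"title": "Paper B", "doi": "10.1/b", "source": "SourceB"}]

def metasearch(query: str, adapters: list[Callable[[str], list[Record]]]) -> list[Record]:
    seen: set[str] = set()
    merged: list[Record] = []
    for adapter in adapters:
        for record in adapter(query):
            key = record.get("doi") or record["title"].lower()
            if key in seen:
                continue  # drop records already found via another database
            seen.add(key)
            merged.append(record)
    return merged

print(metasearch("systematic review search", [search_source_a, search_source_b]))
```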

Optimizing the literature search: coverage of included references in systematic reviews in Medline and Embase

Journal of the Medical Library Association

Objective: The aim of this study was to investigate whether the included references in a set of completed systematic reviews are indexed in Ovid MEDLINE and Ovid Embase, and how many references would be missed if we were to restrict our literature searches to one of these sources or to the two databases in combination. Methods: We conducted a cross-sectional study in which we searched for each included reference (n = 4,709) in 274 reviews produced by the Norwegian Institute of Public Health to find out whether the references were indexed in the respective databases. The data were recorded in an Excel spreadsheet, where we calculated the indexing rate. The reviews were sorted into eight categories to see whether the indexing rate differed from subject to subject. Results: The indexing rate in MEDLINE (86.6%) was slightly lower than in Embase (88.2%). Without the MEDLINE records in Embase, the indexing rate in Embase was 71.8%. The highest indexing rate was achieved by combining both databases (90.2%). Th...
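
The indexing-rate figures above, including the "Embase without its MEDLINE records" variant, come down to simple set arithmetic over the included references. A toy sketch with invented reference sets (not the study data):

```python
# Hypothetical sketch of the indexing-rate calculation: share of included
# references found in MEDLINE, in Embase, in Embase excluding its MEDLINE
# records, and in the two databases combined. Reference sets are invented.

included   = {f"r{i}" for i in range(1, 11)}  # 10 included references
in_medline = {"r1", "r2", "r3", "r4", "r5", "r6", "r7", "r8", "r9"}
in_embase  = {"r1", "r2", "r3", "r4", "r5", "r6", "r7", "r8", "r10"}

def rate(subset):
    return len(subset & included) / len(included)

print(f"MEDLINE:                {rate(in_medline):.1%}")
print(f"Embase:                 {rate(in_embase):.1%}")
print(f"Embase without MEDLINE: {rate(in_embase - in_medline):.1%}")
print(f"Combined:               {rate(in_medline | in_embase):.1%}")
```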