Current methodologies for designing search strategies rely heavily on the knowledge and expertise of information specialists. Yet the volume and complexity of scientific literature are overwhelming for even the most experienced information specialists, making it difficult to produce robust search strategies for complex systematic reviews. In this case study, we aimed to assess and describe the benefits and limitations of using semi-automated text-mining tools for designing search strategies in a systematic review of diagnostic test accuracy. An experienced information specialist designed a search strategy using traditional methods. This strategy was then amended to include additional terms identified by text-mining tools. We evaluated the usability of and expertise required for each tool, the risk of introducing bias into the search, and the precision of the search strategy, and rated the usefulness of the tools. Thirteen of the 16 investigated tools produced a total of 40 additional terms beyond those in the original search strategy, resulting in the retrieval of 11 previously unidentified relevant articles. Precision was reduced or remained the same in all cases. After considering all aspects of the investigation, we rated each application: two were 'extremely useful', three were 'useful', three had 'no impact' and eight were 'not very useful'. Comparative analysis revealed discrepancies between similar tools. Our findings have implications for how these methodologies are used and applied to search strategies. If semi-automated techniques are to become mainstream in information retrieval for complex systematic reviews, we need tailored tools that fit information specialists' requirements across disciplines.
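The precision measure discussed above is the fraction of retrieved records that are relevant; broadening a strategy with extra text-mined terms can raise recall while lowering precision. A minimal sketch of that trade-off, using purely illustrative counts (not the study's data):

```python
# Sketch: search-strategy precision = relevant retrieved / total retrieved.
# The counts below are hypothetical, chosen only to illustrate how adding
# broader text-mined terms can retrieve more relevant records while
# reducing overall precision.

def precision(relevant_retrieved: int, total_retrieved: int) -> float:
    """Fraction of retrieved records that are relevant."""
    if total_retrieved == 0:
        return 0.0
    return relevant_retrieved / total_retrieved

# Hypothetical original strategy.
original = precision(relevant_retrieved=50, total_retrieved=1000)

# Hypothetical amended strategy: more relevant records found, but many
# additional irrelevant records retrieved, so precision falls.
amended = precision(relevant_retrieved=61, total_retrieved=1600)

print(f"original precision: {original:.4f}")
print(f"amended precision:  {amended:.4f}")
```

The same calculation underlies the observation that precision "was reduced or remained the same in all cases": each added term can only keep the retrieved set constant or enlarge it.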
| Download full-text PDF | Source |
|---|---|
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10088010 | PMC |
| http://dx.doi.org/10.1002/jrsm.1593 | DOI Listing |