The reference library identified by the comprehensive literature search will still contain non-relevant references, which must therefore be screened for eligibility based on titles and abstracts. At least two reviewers should screen these references independently of each other (i.e., blinded to the decision of the other reviewer). Discrepancies between the two reviewers can be resolved either by discussion or by a third reviewer. Eligible references are then retrieved for full-text analysis and, potentially, data extraction.
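The dual-screening workflow described above can be sketched in a few lines. The following is a minimal illustration only (the function, the reference IDs, and the data format are hypothetical; in practice, a dedicated screening tool would manage this):

```python
# Hypothetical sketch: reconcile the decisions of two blinded reviewers.
# Agreements become final; conflicts are flagged for discussion or a third reviewer.
def reconcile(decisions_a, decisions_b):
    """decisions_*: dict mapping a reference ID to 'include' or 'exclude'."""
    agreed, conflicts = {}, []
    for ref_id in decisions_a:
        if decisions_a[ref_id] == decisions_b[ref_id]:
            agreed[ref_id] = decisions_a[ref_id]
        else:
            conflicts.append(ref_id)  # resolve by discussion or third reviewer
    return agreed, conflicts

# Toy example: reviewer B disagrees with reviewer A on ref2
a = {"ref1": "include", "ref2": "exclude", "ref3": "include"}
b = {"ref1": "include", "ref2": "include", "ref3": "include"}
agreed, conflicts = reconcile(a, b)
# ref2 is flagged for resolution; ref1 and ref3 proceed to full-text screening
```

The key design point is that both reviewers screen the complete library before any decisions are compared, which preserves the blinding.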
A variety of online tools is available for abstract screening, among them Covidence, Rayyan, and SyRF. These tools have individual strengths and weaknesses. Covidence supports the integrity of the systematic review process with a variety of features, including data extraction and specific role assignment; however, it is not free of charge and offers limited flexibility during certain review steps. Rayyan, a web and mobile app, enables simple abstract sorting for systematic reviews. Its most commonly used features are free of charge, while more advanced tools require a subscription; it can only be used for abstract sorting and prescribes no workflow for the systematic review process. SyRF (Systematic Review Facility) is an open-access web interface supporting the systematic review process, e.g., for abstract screening. It also offers annotation questions to further classify potential references.
Screening the titles and abstracts of identified studies for eligibility is among the most labor-intensive steps of a systematic review. Consequently, for large reference libraries with >20,000 references, manual screening becomes unfeasible. However, data-driven approaches based on artificial intelligence can be leveraged to curate such big data at scale. Several tools to expedite abstract sorting from large bodies of literature have been developed and tested, most of them relying on active-learning frameworks. Inefficient abstract sorting can make this step even more time-consuming than necessary: it is important not to forfeit too much time on individual records with unclear eligibility. If sufficiently in doubt, a reference should be included for full-text screening.
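The core idea behind the active-learning frameworks mentioned above can be illustrated with a deliberately crude sketch: a model trained on the labels collected so far scores the unlabeled pool, and the reviewer is asked to label the record the model is least certain about (uncertainty sampling). Everything here is a toy assumption (the word-overlap scorer, the example abstracts); real tools use proper text classifiers:

```python
# Minimal active-learning sketch with uncertainty sampling (toy data and a
# crude word-overlap scorer; real screening tools use trained classifiers).

def score(abstract, include_words, exclude_words):
    """Crude relevance score in [0, 1] from word overlap with labeled sets."""
    words = set(abstract.lower().split())
    inc = len(words & include_words)
    exc = len(words & exclude_words)
    return inc / (inc + exc) if inc + exc else 0.5

def most_uncertain(pool, include_words, exclude_words):
    """Uncertainty sampling: pick the abstract whose score is closest to 0.5."""
    return min(pool, key=lambda a: abs(score(a, include_words, exclude_words) - 0.5))

# Labels collected so far
labeled = {
    "randomised trial of stroke treatment in rats": "include",
    "review of economic policy outcomes": "exclude",
}
# Unlabeled pool to be prioritized
pool = [
    "controlled trial of treatment in a rat stroke model",   # clearly relevant
    "policy review of economic trends",                      # clearly irrelevant
    "case report on stroke and economic policy",             # mixed signals
]

include_words = {w for a, lab in labeled.items() if lab == "include" for w in a.split()}
exclude_words = {w for a, lab in labeled.items() if lab == "exclude" for w in a.split()}
query = most_uncertain(pool, include_words, exclude_words)
# `query` is shown to the human reviewer; the label updates the training set,
# and the loop repeats, so reviewer effort concentrates on informative records
```

Because the model keeps learning from each new label, clearly relevant and clearly irrelevant records can be triaged automatically, and human effort is concentrated on the ambiguous remainder.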
Pitfalls
Insufficient a priori definition of in- and exclusion criteria can compromise efficient abstract sorting, particularly when potential conflicts between reviewers are only discussed after screening has been completed. Thus, to calibrate all reviewers to the same in- and exclusion criteria, we recommend conducting a pilot screening round of approximately 100 abstracts among all reviewers and, if necessary, refining the in- and exclusion criteria based on the points of disagreement.
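Agreement in such a pilot round can be quantified with Cohen's kappa, which corrects raw agreement for chance; a low kappa suggests the criteria need refinement before full screening. The sketch below is illustrative only (the decision lists are invented, and the implementation is a plain-Python version of the standard formula):

```python
# Hypothetical sketch: Cohen's kappa between two reviewers after a pilot round.
def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two raters labeling the same items."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement from each rater's marginal label frequencies
    expected = 0.0
    for label in set(labels_a) | set(labels_b):
        expected += (labels_a.count(label) / n) * (labels_b.count(label) / n)
    return (observed - expected) / (1 - expected)

# Invented pilot round: 10 abstracts screened independently by two reviewers
a = ["include", "include", "exclude", "exclude", "include",
     "exclude", "exclude", "include", "exclude", "exclude"]
b = ["include", "exclude", "exclude", "exclude", "include",
     "exclude", "include", "include", "exclude", "exclude"]
kappa = cohens_kappa(a, b)  # ~0.58: moderate agreement, criteria worth revisiting
```

Here the reviewers agree on 8 of 10 abstracts (raw agreement 0.8), but kappa of roughly 0.58 reveals that part of that agreement is expected by chance alone, which is why kappa is preferred over raw agreement for calibration.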