A systematic review is a scholarly synthesis of the evidence on a clearly presented topic using critical methods to identify, define and assess research on the topic. A systematic review extracts and interprets data from published studies on the topic, then analyzes, describes, and summarizes interpretations into a refined conclusion. For example, a systematic review of randomized controlled trials is a way of summarizing and implementing evidence-based medicine.
While a systematic review may be applied in the biomedical or health care context, it may also be used where an assessment of a precisely defined subject can advance understanding in a field of research. A systematic review may examine clinical tests, public health interventions, environmental interventions, social interventions, adverse effects, qualitative evidence syntheses, methodological reviews, policy reviews, and economic evaluations.
An understanding of systematic reviews, and how to implement them in practice, is common among professionals in health care, public health, and public policy.
A systematic review can be designed to provide a thorough summary of current literature relevant to a research question. A systematic review uses a rigorous and transparent approach for research synthesis, with the aim of assessing and, where possible, minimizing bias in the findings. While many systematic reviews are based on an explicit quantitative meta-analysis of available data, there are also qualitative reviews and other types of mixed-methods reviews which adhere to standards for gathering, analyzing and reporting evidence.
Systematic reviews of quantitative data or mixed-method reviews sometimes use statistical techniques (meta-analysis) to combine the results of eligible studies. Scoring levels are sometimes used to rate the quality of the evidence depending on the methodology used, although this is discouraged by the Cochrane Library. As evidence rating can be subjective, multiple reviewers may be consulted to resolve scoring differences.
The EPPI-Centre, Cochrane and the Joanna Briggs Institute have all been influential in developing methods for combining both qualitative and quantitative research in systematic reviews. Several reporting guidelines exist to standardise reporting about how systematic reviews are conducted. Such reporting guidelines are not quality assessment or appraisal tools. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement suggests a standardized way to ensure a transparent and complete reporting of systematic reviews, and is now required for this kind of research by more than 170 medical journals worldwide. Several specialized PRISMA guideline extensions have been developed to support particular types of studies or aspects of the review process, including PRISMA-P for review protocols and PRISMA-ScR for scoping reviews. A list of PRISMA guideline extensions is hosted by the EQUATOR (Enhancing the QUAlity and Transparency Of health Research) Network.
For qualitative reviews, reporting guidelines include ENTREQ (Enhancing transparency in reporting the synthesis of qualitative research) for qualitative evidence syntheses; RAMESES (Realist And MEta-narrative Evidence Syntheses: Evolving Standards) for meta-narrative and realist reviews; and eMERGe (Improving reporting of Meta-Ethnography) for meta-ethnography.
Developments in systematic reviews during the 21st century included realist reviews and the meta-narrative approach, both of which addressed problems of variation in methods and heterogeneity existing on some subjects.
There are over 30 types of systematic review and Table 1 below summarises some of these, but it is not exhaustive. It is important to note that there is not always consensus on the boundaries and distinctions between the approaches described below.
Table 1: A summary of some of the types of systematic review
Mapping review/systematic map
A mapping review maps existing literature and categorizes data. The method characterizes quantity and quality of literature, including by study design and other features. Mapping reviews can be used to identify the need for primary or secondary research.
Meta-analysis
A meta-analysis is a statistical analysis that combines the results of multiple quantitative studies. Using statistical methods, results are combined to provide evidence from multiple studies. The two types of data generally used for meta-analysis in health research are individual participant data and aggregate data (such as odds ratios or relative risks).
Mixed studies review/mixed methods review
Refers to any combination of methods where one significant stage is a literature review (often systematic). It can also refer to a combination of review approaches such as combining quantitative with qualitative research.
Qualitative systematic review/qualitative evidence synthesis
This method integrates or compares findings from qualitative studies. It can include 'coding' the data and looking for 'themes' or 'constructs' across studies. Multiple authors may improve the 'validity' of the data by potentially reducing individual bias.
Rapid review
An assessment of what is already known about a policy or practice issue, which uses systematic review methods to search for and critically appraise existing research. A rapid review is still a systematic review, but parts of the process may be simplified or omitted to increase speed. Rapid reviews were used during the COVID-19 pandemic.
Systematic review
A systematic search for data, using a repeatable method. It includes appraising the data (for example, the quality of the data) and a synthesis of the research data.
Systematic search and review
Combines methods from a 'critical review' with a comprehensive search process. This review type is usually used to address broad questions to produce the most appropriate evidence synthesis. This method may or may not include quality assessment of data sources.
Systematized review
Includes elements of the systematic review process, but the search is often not as comprehensive as in a systematic review and may not include quality assessment of data sources.
Scoping review
Scoping reviews are distinct from systematic reviews in several important ways. A scoping review is an attempt to search for concepts by mapping the language and data which surround those concepts and adjusting the search method iteratively to synthesize evidence and assess the scope of an area of inquiry. This can mean that the concept search and method (including data extraction, organisation and analysis) are refined throughout the process, sometimes requiring deviations from the protocol or original research plan. A scoping review is often a preliminary stage before a systematic review, which 'scopes' out an area of inquiry and maps the language and key concepts to determine whether a systematic review is possible or appropriate, or to lay the groundwork for a full systematic review. The goal can be to assess how much data or evidence is available regarding a certain area of interest. This process is further complicated if the review maps concepts across multiple languages or cultures.
As a scoping review should be systematically conducted and reported (with a transparent and repeatable method), some academic publishers categorize them as a kind of 'systematic review', which may cause confusion. Scoping reviews are helpful when it is not possible to carry out a systematic synthesis of research findings, for example, when there are no published clinical trials in the area of inquiry. Scoping reviews are helpful when determining if it is possible or appropriate to carry out a systematic review, and are a useful method when an area of inquiry is very broad, for example, exploring how the public are involved in all stages of systematic reviews.
There is still a lack of clarity when defining the exact method of a scoping review, as it is an iterative process and still relatively new. There have been several attempts to improve the standardisation of the method, for example via a PRISMA guideline extension for scoping reviews (PRISMA-ScR). PROSPERO (the International Prospective Register of Systematic Reviews) does not permit the submission of protocols of scoping reviews, although some journals will publish protocols for scoping reviews.
While there are multiple kinds of systematic review methods, the main stages of a review can be summarised as follows:
Defining the research question
Defining an answerable question and agreeing an objective method is required to design a useful systematic review. Best practice recommends publishing the protocol of the review before initiating it to reduce the risk of unplanned research duplication and to enable transparency, and consistency between methodology and protocol. Clinical reviews of quantitative data are often structured using the acronym PICO, which stands for 'Population or Problem', 'Intervention or Exposure', 'Comparison' and 'Outcome', with other variations existing for other kinds of research. For qualitative reviews PICo is 'Population or Problem', 'Interest' and 'Context'.
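As an illustration, a PICO decomposition can be represented as structured data from which the answerable question is assembled. The condition, intervention and outcome below are hypothetical examples, not drawn from any real protocol.

```python
# A hypothetical clinical review question decomposed with PICO.
# All terms are illustrative placeholders.
pico = {
    "Population":   "adults with type 2 diabetes",
    "Intervention": "structured exercise programmes",
    "Comparison":   "usual care",
    "Outcome":      "change in HbA1c",
}

# The framed, answerable review question:
question = (
    f"In {pico['Population']}, do {pico['Intervention']} "
    f"compared with {pico['Comparison']} improve {pico['Outcome']}?"
)
```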
Searching for relevant data sources
Planning how the review will search for relevant data from research that matches certain criteria is a decisive stage in developing a rigorous systematic review. Relevant criteria can include only selecting research that is good quality and answers the defined question. The search strategy should be designed to retrieve literature that matches the protocol's specified inclusion and exclusion criteria.
The methodology section of a systematic review should list all of the databases and citation indices that were searched. The titles and abstracts of identified articles can be checked against pre-determined criteria for eligibility and relevance. Each included study may be assigned an objective assessment of methodological quality, preferably by using methods conforming to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement, or the high-quality standards of Cochrane.
Common information sources used in searches include scholarly databases of peer-reviewed articles such as MEDLINE, Web of Science, Embase, and PubMed as well as sources of unpublished literature such as clinical trial registries and grey literature collections. Key references can also be identified through additional methods such as citation searching, reference list checking (related to a search method called 'pearl growing'), manually searching information sources not indexed in the major electronic databases (sometimes called 'hand-searching'), and directly contacting experts in the field.
To be systematic, searchers must use a combination of search skills and tools, such as database subject headings, keyword searching, Boolean operators and proximity searching, while attempting to balance sensitivity (systematicity) and precision (accuracy). Inviting and involving an experienced information professional or librarian can notably improve the quality of systematic review search strategies and reporting.
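The Boolean logic of such a search strategy can be sketched as follows: synonyms for each concept are joined with OR, and the concept blocks are then joined with AND. The search terms and PubMed-style field tags below are illustrative examples, not a validated strategy.

```python
# Illustrative concept blocks for two PICO concepts (hypothetical terms).
population = ['"type 2 diabetes"[Title/Abstract]', '"T2DM"[Title/Abstract]']
intervention = ['"exercise"[MeSH Terms]', '"physical activity"[Title/Abstract]']

def concept_block(terms):
    """OR synonyms together within one concept."""
    return "(" + " OR ".join(terms) + ")"

# AND the concept blocks together to form the final search string.
search_string = " AND ".join(concept_block(t) for t in (population, intervention))
```

Broadening a block with more synonyms raises sensitivity (more results, fewer misses); adding more ANDed concepts raises precision (fewer, more relevant results).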
'Extraction' of relevant data
A visualisation of data being 'extracted' and 'combined' in a Cochrane intervention effect review where a meta-analysis is possible
Relevant data are 'extracted' from the data sources according to the review method. The data extraction method is specific to the kind of data, and data extracted on 'outcomes' is only relevant to certain types of reviews. For example, a systematic review of clinical trials might extract data about how the research was done (often called the method or 'intervention'), who participated in the research (including how many people), how it was paid for (for example, funding sources) and what happened (the outcomes).
Assess the eligibility of the data
This stage involves assessing the eligibility of data for inclusion in the review, by judging it against criteria identified at the first stage. This can include assessing if a data source meets the eligibility criteria, and recording why decisions about inclusion or exclusion in the review were made. Software can be used to support the selection process, including text mining tools and machine learning, which can automate aspects of the process. The 'Systematic Review Toolbox' is a community-driven, web-based catalogue of tools to help reviewers choose appropriate tools for reviews.
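A toy sketch of how software can support screening: each record is judged against keyword rules and every decision is logged with a reason, as review methods require. Real tools use text mining and machine-learning classifiers; the records and keywords here are hypothetical.

```python
# Hypothetical inclusion/exclusion keyword rules for title screening.
include_keywords = {"randomised", "randomized", "trial"}
exclude_keywords = {"protocol", "editorial"}

def screen(title):
    """Return a (decision, reason) pair so the judgement is auditable."""
    words = set(title.lower().split())
    if words & exclude_keywords:
        return ("exclude", "matched exclusion keyword")
    if words & include_keywords:
        return ("include", "matched inclusion keyword")
    return ("exclude", "no inclusion keyword matched")

records = [
    "A randomised trial of drug X",
    "Study protocol for a cohort study",
    "Qualitative interviews about care",
]
decisions = {title: screen(title) for title in records}
```

In practice such automated suggestions only assist human reviewers, who make and record the final inclusion decisions.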
Analyse and combine the data
Analysing and combining data can provide an overall result from all the data. Because this combined result draws on qualitative or quantitative data from all eligible sources, it is considered more reliable: the more data included in a review, the more confident we can be in its conclusions. When appropriate, some systematic reviews include a meta-analysis, which uses statistical methods to combine data from multiple sources. A review might use quantitative data, or might employ a qualitative meta-synthesis, which synthesises data from qualitative studies. A review may also bring together the findings from quantitative and qualitative studies in a mixed-methods or overarching synthesis. The combination of data from a meta-analysis can sometimes be visualised. One method uses a forest plot (also called a blobbogram). In an intervention effect review, the diamond in the 'forest plot' represents the combined results of all the data included.
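The pooling step of a simple fixed-effect meta-analysis can be sketched in a few lines: each study's log odds ratio is weighted by the inverse of its variance, so more precise studies count for more. The odds ratios and confidence intervals below are hypothetical, and a real review would use dedicated statistical software.

```python
import math

def fixed_effect_meta(odds_ratios, ci_lowers, ci_uppers):
    """Pool study odds ratios with inverse-variance (fixed-effect) weighting.

    Standard errors are recovered from the 95% confidence intervals on the
    log scale, assuming normality: SE = (ln(upper) - ln(lower)) / (2 * 1.96).
    """
    weights = []
    weighted_sum = 0.0
    for or_, low, high in zip(odds_ratios, ci_lowers, ci_uppers):
        se = (math.log(high) - math.log(low)) / (2 * 1.96)
        w = 1.0 / se ** 2          # inverse-variance weight
        weights.append(w)
        weighted_sum += w * math.log(or_)
    pooled_log_or = weighted_sum / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return (math.exp(pooled_log_or),                      # pooled odds ratio
            math.exp(pooled_log_or - 1.96 * pooled_se),   # 95% CI lower bound
            math.exp(pooled_log_or + 1.96 * pooled_se))   # 95% CI upper bound

# Three hypothetical trials, each reporting an odds ratio with a 95% CI
pooled, ci_low, ci_high = fixed_effect_meta(
    odds_ratios=[0.80, 0.60, 0.90],
    ci_lowers=[0.50, 0.40, 0.70],
    ci_uppers=[1.28, 0.90, 1.16],
)
```

The pooled estimate and its interval correspond to the diamond at the bottom of a forest plot; its interval is narrower than any single study's because the studies' information is combined.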
An example of a 'forest plot' is the Cochrane Collaboration logo. The logo is a forest plot of one of the first reviews which showed that corticosteroids given to women who are about to give birth prematurely can save the life of the newborn child.
Recent visualisation innovations include the albatross plot, which plots p-values against sample sizes, with approximate effect-size contours superimposed to facilitate analysis. The contours can be used to infer effect sizes from studies that have been analysed and reported in diverse ways. Such visualisations may have advantages over other types when reviewing complex interventions.
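The back-calculation underlying such contours can be sketched as follows, using the common two-group approximation d ≈ 2|z|/√n, where |z| is recovered from the two-sided p-value. This is an illustrative approximation, not the exact contour equation used by the albatross plot.

```python
import math
from statistics import NormalDist

def approx_effect_size(p_value, n):
    """Infer an approximate standardised effect size from a two-sided
    p-value and total sample size (a rough two-group approximation)."""
    z = NormalDist().inv_cdf(1 - p_value / 2)   # |z| implied by the p-value
    return 2 * z / math.sqrt(n)                 # d ~= 2|z| / sqrt(n)

# The same p-value implies a smaller effect when the sample is larger:
d_small_n = approx_effect_size(0.05, 100)   # ~0.39
d_large_n = approx_effect_size(0.05, 400)   # ~0.20
```

This is why studies lying on the same contour of the plot, despite different p-values and sample sizes, are interpreted as showing similar effect magnitudes.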
Assessing the quality (or certainty) of evidence is an important part of some reviews. GRADE (Grading of Recommendations, Assessment, Development and Evaluations) is a transparent framework for developing and presenting summaries of evidence and is used to grade the quality of evidence. The GRADE-CERQual (Confidence in the Evidence from Reviews of Qualitative research) approach provides a transparent method for assessing confidence in evidence from reviews of qualitative research.
Communication and dissemination
Once these stages are complete, the review may be published, disseminated and translated into practice after being adopted as evidence. The UK National Institute for Health Research (NIHR) defines dissemination as ‘getting the findings of research to the people who can make use of them to maximise the benefit of the research without delay’. However, many evidence users do not have time to invest in reading large and complex documents and/or may lack awareness or be unable to access newly published research. Researchers are therefore developing skills to use creative communication methods such as illustrations, blogs, infographics and board games to share the findings of systematic reviews.
Automation of systematic reviews
Living systematic reviews are a relatively new kind of review: high-quality, semi-automated, up-to-date online summaries of research which are updated as new research becomes available. The essential difference between a living systematic review and a conventional systematic review is the publication format. Living systematic reviews are 'dynamic, persistent, online-only evidence summaries, which are updated rapidly and frequently'.
While living systematic reviews seek to keep evidence current, automation or semi-automation of the systematic review process itself is also increasingly being explored. Although little evidence exists to demonstrate that automation is as accurate as manual methods or involves less effort, efforts to promote training in, and use of, artificial intelligence for the process are increasing.
Medicine and human health
History of systematic reviews in medicine
A 1904 British Medical Journal paper by Karl Pearson collated data from several studies in the UK, India and South Africa of typhoid inoculation. He used a meta-analytic approach to aggregate the outcomes of multiple clinical studies. In 1972 Archie Cochrane wrote: 'It is surely a great criticism of our profession that we have not organised a critical summary, by specialty or subspecialty, adapted periodically, of all relevant randomised controlled trials'. Critical appraisal and synthesis of research findings in a systematic way emerged in 1975 under the term 'meta-analysis'. Early syntheses were conducted in broad areas of public policy and social interventions before systematic research synthesis was applied to medicine and health. Inspired by his own personal experiences as a senior medical officer in prisoner of war camps, Archie Cochrane worked to improve how the scientific method was used in medical evidence, writing in 1971: 'the general scientific problem with which we are primarily concerned is that of testing a hypothesis that a certain treatment alters the natural history of a disease for the better'. His call for the increased use of randomised controlled trials and systematic reviews led to the creation of The Cochrane Collaboration, which was founded in 1993 and named after him, building on the work by Iain Chalmers and colleagues in the area of pregnancy and childbirth.
Current use of systematic reviews in medicine
Many organisations around the world use systematic reviews, with the methodology depending on the guidelines being followed. Organisations which use systematic reviews in medicine and human health include the National Institute for Health and Care Excellence (NICE, UK), the Agency for Healthcare Research and Quality (AHRQ, USA) and the World Health Organization. Most notable among international organisations is Cochrane, a group of over 37,000 specialists in healthcare who systematically review randomised trials of the effects of prevention, treatments and rehabilitation as well as health systems interventions. When appropriate, they also include the results of other types of research. Cochrane Reviews are published in The Cochrane Database of Systematic Reviews section of the Cochrane Library. The 2015 impact factor for The Cochrane Database of Systematic Reviews was 6.103, and it was ranked 12th in the Medicine, General & Internal category.
Intervention reviews assess the benefits and harms of interventions used in healthcare and health policy.
Diagnostic test accuracy reviews assess how well a diagnostic test performs in diagnosing and detecting a particular disease. For conducting diagnostic test accuracy reviews, free software with a graphical user interface, such as MetaDTA and CAST-HSROC, is available.
Methodology reviews address issues relevant to how systematic reviews and clinical trials are conducted and reported.
Qualitative reviews synthesize qualitative evidence to address questions on aspects other than effectiveness.
Prognosis reviews address the probable course or future outcome(s) of people with a health problem.
Overviews of Systematic Reviews (OoRs) are a new type of study that compiles evidence from multiple systematic reviews into a single document, intended to be accessible and useful as a friendly front end for the Cochrane Collaboration with regard to healthcare decision-making. These are sometimes referred to as 'umbrella reviews'.
Living systematic reviews are continually updated, incorporating relevant new evidence as it becomes available. They are a relatively new kind of review, with methods still being developed and evaluated.
Rapid reviews are a form of knowledge synthesis that 'accelerates the process of conducting a traditional systematic review through streamlining or omitting specific methods to produce evidence for stakeholders in a resource-efficient manner'.
Reviews of complex health interventions in complex systems review interventions and interventions delivered in complex systems to improve evidence synthesis and guideline development at a global, national or health systems level.
The Cochrane Collaboration provides a handbook for systematic reviewers of interventions which 'provides guidance to authors for the preparation of Cochrane Intervention reviews.' The Cochrane Handbook also outlines the key steps for preparing a systematic review and forms the basis of two sets of standards for the conduct and reporting of Cochrane Intervention Reviews (MECIR - Methodological Expectations of Cochrane Intervention Reviews). It also contains guidance on how to undertake qualitative evidence synthesis, economic reviews and integrating patient-reported outcomes into reviews.
The Cochrane Library is a collection of databases that contains different types of independent evidence to inform healthcare decision-making. It contains a database of systematic reviews and meta-analyses which summarize and interpret the results of multi-disciplinary research. The library contains the Cochrane Database of Systematic Reviews (CDSR), which is a journal and database for systematic reviews in health care. The Cochrane Library also contains the Cochrane Central Register of Controlled Trials (CENTRAL), which is a database of reports of randomized and quasi-randomized controlled trials. The Cochrane Library is also available in Spanish.
The Cochrane Library is owned by Cochrane. It was originally published by Update Software and is now published by the shareholder-owned publisher John Wiley & Sons, Ltd. as part of the Wiley Online Library. Royalties from sales of the Cochrane Library are the major source of funds for Cochrane (over £6 million in 2017). There are 3.66 billion people around the world who have access to the Library through national licences (national licences cost £1.5 billion) or free provision for populations in low- and middle-income countries eligible under the WHO's HINARI initiative. Authors must pay an additional fee for their review to be fully open access. Cochrane has an annual income of US$10 million.
Public involvement and citizen science in systematic reviews
Cochrane offers several tasks, associated with producing systematic reviews and other outputs, in which the public or other 'stakeholders' can be involved. Tasks can be organised as 'entry level' or higher, and include:
Joining a collaborative volunteer effort to help categorise and summarise healthcare evidence
Data extraction and risk of bias assessment
Translation of reviews into other languages
A recent systematic review aimed to document the evidence base relating to stakeholder involvement in systematic reviews and to use this evidence to describe how stakeholders have been involved. Thirty percent of the included studies involved patients and/or carers. The ACTIVE framework provides a way to consistently describe how people are involved in systematic reviews, and may be used to support the decision-making of systematic review authors in planning how to involve people in future reviews. Standardised Data on Initiatives (STARDIT) is another proposed way of reporting who has been involved in which tasks during research, including systematic reviews.
While there has been some criticism of how Cochrane prioritises systematic reviews, a recent project involved people in helping identify research priorities to inform future Cochrane Reviews. In 2014, the Cochrane-Wikipedia partnership was formalised. This supports the inclusion of relevant evidence within all Wikipedia medical articles, as well as other processes to help ensure that medical information included in Wikipedia is of the highest quality and accuracy.
Cochrane has produced many learning resources to help people understand what systematic reviews are, and how to do them. Most of the learning resources can be found at the 'Cochrane Training' webpage, which also includes a link to the book Testing Treatments, which has been translated into many languages. In addition, Cochrane has created a short video, What are Systematic Reviews, which explains in plain English how they work and what they are used for. The video has been translated into multiple languages, and viewed 192,282 times (as of August 2020). In addition, an animated storyboard version was produced and all the video resources were released in multiple versions under Creative Commons for others to use and adapt. The Critical Appraisal Skills Programme (CASP) provides free learning resources to support people to appraise research critically, including a checklist of 10 questions to 'help you make sense of a systematic review'.
Social, behavioural and educational
In 1959, social scientist and social work educator Barbara Wootton published one of the first contemporary systematic reviews of literature on anti-social behavior as part of her work, Social Science and Social Pathology.
Several organisations use systematic reviews in social, behavioural, and educational areas of evidence-based policy, including the National Institute for Health and Care Excellence (NICE, UK), Social Care Institute for Excellence (SCIE, UK), the Agency for Healthcare Research and Quality (AHRQ, USA), the World Health Organization, the International Initiative for Impact Evaluation (3ie), the Joanna Briggs Institute and the Campbell Collaboration. The quasi-standard for systematic review in the social sciences is based on the procedures proposed by the Campbell Collaboration, which is one of several groups promoting evidence-based policy in the social sciences. The Campbell Collaboration: 'helps people make well-informed decisions by preparing, maintaining and disseminating systematic reviews in education, crime and justice, social welfare and international development.' The Campbell Collaboration is a sibling initiative of Cochrane, and was created in 2000 at the inaugural meeting in Philadelphia, USA, attracting 85 participants from 13 countries.
Business and economics
Due to the different nature of research fields outside of the natural sciences, the aforementioned methodological steps cannot easily be applied in all areas of business research. Some attempts to transfer the procedures from medicine to business research have been made, including a step-by-step approach, and developing a standard procedure for conducting systematic literature reviews in business and economics. The Campbell & Cochrane Economics Methods Group (C-CEMG) works to improve the inclusion of economic evidence into Cochrane and Campbell systematic reviews of interventions, to enhance the usefulness of review findings as a component for decision-making. Such economic evidence is crucial for health technology assessment processes.
International development research
Systematic reviews are increasingly prevalent in other fields, such as international development research. Subsequently, several donors (including the UK Department for International Development (DFID) and AusAid) are focusing more attention and resources on testing the appropriateness of systematic reviews in assessing the impacts of development and humanitarian interventions.
The Collaboration for Environmental Evidence (CEE) works to achieve a sustainable global environment and the conservation of biodiversity. The CEE has a journal titled Environmental Evidence which publishes systematic reviews, review protocols and systematic maps on impacts of human activity and the effectiveness of management interventions.
Environmental health and toxicology
Systematic reviews are a relatively recent innovation in the field of environmental health and toxicology. Although mooted in the mid-2000s, the first full frameworks for conduct of systematic reviews of environmental health evidence were only published in 2014 by the US National Toxicology Program's Office of Health Assessment and Translation and the Navigation Guide at the University of California San Francisco's Program on Reproductive Health and the Environment. Uptake has since been rapid, with the estimated number of systematic reviews in the field doubling since 2016 and the first consensus recommendations on best practice, as a precursor to a more general standard, being published in 2020.
A 2019 publication identified 15 systematic review tools and ranked them according to the number of 'critical features' required to perform a systematic review. The tools included:
DistillerSR: a proprietary, paid web application
Swift Active Screener: a proprietary, paid web application
Covidence: a proprietary, paid web application and Cochrane technology platform.
Rayyan: a proprietary, free of charge web application
Sysrev: a proprietary, freemium web application
While systematic reviews involve a highly rigorous approach to synthesizing the evidence, they still have several limitations.
Out-of-date reviews and risk of bias
While systematic reviews are regarded as the strongest form of evidence, a 2003 review of 300 studies found that not all systematic reviews were equally reliable, and that their reporting can be improved by a universally agreed upon set of standards and guidelines. A further study by the same group found that of 100 systematic reviews monitored, 7% needed updating at the time of publication, another 4% within a year, and another 11% within 2 years; this figure was higher in rapidly changing fields of medicine, especially cardiovascular medicine. A 2003 study suggested that extending searches beyond major databases, perhaps into grey literature, would increase the effectiveness of reviews.
Some authors have highlighted problems with systematic reviews, particularly those conducted by Cochrane, noting that published reviews are often biased, out of date and excessively long. Cochrane reviews have been criticized as not being sufficiently critical in the selection of trials and as including too many of low quality. These critics proposed several solutions, including limiting studies in meta-analyses and reviews to registered clinical trials, requiring that original data be made available for statistical checking, paying greater attention to sample size estimates, and eliminating dependence on only published data.
Some of these difficulties were noted as early as 1994:
much poor research arises because researchers feel compelled for career reasons to carry out research that they are ill-equipped to perform, and nobody stops them.
Methodological limitations of meta-analysis have also been noted. Another concern is that the methods used to conduct a systematic review are sometimes changed once researchers see the available trials they are going to include. Some websites have described retractions of systematic reviews and published reports of studies included in published systematic reviews. Eligibility criteria must be justifiable and not arbitrary (for example, the date range searched) as this may affect the perceived quality of the review.
Limited reporting of clinical trials and data from human studies
The 'AllTrials' campaign highlights that around half of clinical trials have never reported results, and works to improve reporting. This lack of reporting has serious implications for research, including systematic reviews, as it is only possible to synthesize data from published studies. In addition, 'positive' trials were twice as likely to be published as those with 'negative' results. At present, it is legal for for-profit companies to conduct clinical trials and not publish the results. For example, in the past 10 years 8.7 million patients have taken part in trials that have not published results. These factors mean that there is likely a significant publication bias, with only 'positive' or perceived favourable results being published. A recent systematic review of industry sponsorship and research outcomes concluded that 'sponsorship of drug and device studies by the manufacturing company leads to more favorable efficacy results and conclusions than sponsorship by other sources' and that there is an industry bias that cannot be explained by standard 'risk of bias' assessments. Systematic reviews of such research may amplify this bias, although it is important to note that the flaw is in the reporting of research generally, not in the systematic review method.
Poor compliance with review reporting guidelines
The rapid growth of systematic reviews in recent years has been accompanied by poor compliance with guidelines, particularly in areas such as declaration of registered study protocols, declaration of funding sources, risk of bias data, issues arising from data abstraction, and description of clear study objectives. A host of studies have identified weaknesses in the rigour and reproducibility of search strategies in systematic reviews. To remedy this issue, a new PRISMA guideline extension called PRISMA-S is being developed to improve the quality, reporting, and reproducibility of systematic review search strategies. Furthermore, tools and checklists for peer-reviewing search strategies have been created, such as the Peer Review of Electronic Search Strategies (PRESS) guidelines.
A key challenge for using systematic reviews in clinical practice and healthcare policy is assessing the quality of a given review. Consequently, a range of appraisal tools to evaluate systematic reviews have been designed. The two most popular measurement instruments and scoring tools for systematic review quality assessment are AMSTAR 2 (a measurement tool to assess the methodological quality of systematic reviews) and ROBIS (Risk Of Bias In Systematic reviews); however, these are not appropriate for all systematic review types.