Summary of How to Read Systematic Reviews
Community child health, public health, and epidemiology
Understanding systematic reviews and meta-analyses
Abstract
This review covers the basic principles of systematic reviews and meta-analyses. The problems associated with traditional narrative reviews are discussed, as is the role of systematic reviews in limiting bias associated with the assembly, critical appraisal, and synthesis of studies addressing specific clinical questions. Important issues that need to be considered when appraising a systematic review or meta-analysis are outlined, and some of the terms used in the reporting of systematic reviews and meta-analyses, such as odds ratio, relative risk, confidence interval, and the forest plot, are introduced.
- RCT, randomised controlled trial
- systematic review
- meta-analysis
- narrative review
- critical appraisal
Health care professionals are increasingly required to base their practice on the best available evidence. In the first article of the series, I described basic strategies that could be used to search the medical literature.1 After a literature search on a specific clinical question, many articles may be retrieved. The quality of the studies may be variable, and the individual studies might have produced conflicting results. It is therefore important that health care decisions are not based solely on one or two studies without account being taken of the whole range of research information available on that topic.
Health care professionals have always used review articles as a source of summarised evidence on a particular topic. Review articles in the medical literature have traditionally been in the form of "narrative reviews", where experts in a particular field provide what is supposed to be a "summary of evidence" in that field. Narrative reviews, although still very common in the medical field, have been criticised because of the high risk of bias, and "systematic reviews" are preferred.2 Systematic reviews apply scientific strategies in ways that limit bias to the assembly, critical appraisal, and synthesis of relevant studies that address a specific clinical question.2
THE PROBLEM WITH TRADITIONAL REVIEWS
The validity of a review article depends on its methodological quality. While traditional review articles or narrative reviews can be useful when conducted properly, there is evidence that they are usually of poor quality. Authors of narrative reviews often use informal, subjective methods to collect and interpret studies and tend to be selective in citing reports that reinforce their preconceived ideas or promote their own views on a topic.3, 4 They are also rarely explicit about how they selected, assessed, and analysed the primary studies, thereby not allowing readers to assess potential bias in the review process. Narrative reviews are therefore often biased, and the recommendations made may be inappropriate.5
WHAT IS A SYSTEMATIC REVIEW?
In contrast to a narrative review, a systematic review is a form of research that provides a summary of medical reports on a specific clinical question, using explicit methods to search, critically appraise, and synthesise the world literature systematically.6 It is particularly useful in bringing together a number of separately conducted studies, sometimes with conflicting findings, and synthesising their results.
By providing, in a clear and explicit fashion, a summary of all the studies addressing a specific clinical question,4 systematic reviews allow us to take account of the whole range of relevant findings from research on a particular topic, and not just the results of one or two studies. Other advantages of systematic reviews have been discussed by Mulrow.7 They can be used to establish whether scientific findings are consistent and generalisable across populations, settings, and treatment variations, or whether findings vary significantly by particular subgroups. Moreover, the explicit methods used in systematic reviews limit bias and, hopefully, will improve the reliability and accuracy of conclusions. For these reasons, systematic reviews of randomised controlled trials (RCTs) are considered to be evidence of the highest level in the hierarchy of research designs evaluating effectiveness of interventions.8
METHODOLOGY OF A SYSTEMATIC REVIEW
The need for rigour in the preparation of a systematic review means that there should be a formal process for its conduct. Figure 1 summarises the process for conducting a systematic review of RCTs.9 This includes a comprehensive, exhaustive search for primary studies on a focused clinical question, selection of studies using clear and reproducible eligibility criteria, critical appraisal of primary studies for quality, and synthesis of results according to a predetermined and explicit method.3, 9
WHAT IS A META-ANALYSIS?
Following a systematic review, data from individual studies may be pooled quantitatively and reanalysed using established statistical methods.10 This technique is called meta-analysis. The rationale for a meta-analysis is that, by combining the samples of the individual studies, the overall sample size is increased, thereby improving the statistical power of the analysis as well as the precision of the estimates of treatment effects.11
Meta-analysis is a two stage process.12 The first stage involves the calculation of a measure of treatment effect with its 95% confidence interval (CI) for each individual study. The summary statistics that are usually used to measure treatment effect include odds ratios (OR), relative risks (RR), and risk differences.
In the second stage of meta-analysis, an overall treatment effect is calculated as a weighted average of the individual summary statistics. Readers should note that, in meta-analysis, data from the individual studies are not simply combined as if they were from a single study. Greater weights are given to the results from studies that provide more information, because they are likely to be closer to the "true effect" we are trying to estimate. The weights are often the inverse of the variance (the square of the standard error) of the treatment effect, which relates closely to sample size.12 The typical graph for displaying the results of a meta-analysis is called a "forest plot".13
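As a sketch of this second stage, the short program below pools three hypothetical studies (log odds ratios with their standard errors) using the fixed effect inverse variance method; the figures are illustrative and are not taken from any of the reviews cited here.

```python
import math

# Hypothetical (log odds ratio, standard error) pairs for three trials.
studies = [(-0.8, 0.30), (-1.1, 0.45), (-0.5, 0.25)]

# Weight each study by the inverse of its variance (the square of the
# standard error), so larger, more precise trials count for more.
weights = [1 / se ** 2 for _, se in studies]
pooled_log_or = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)

# Standard error of the pooled estimate under the fixed effect model.
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled OR = {math.exp(pooled_log_or):.2f}, SE(log OR) = {pooled_se:.3f}")
```

Note how the third trial, with the smallest standard error, receives the largest weight and pulls the pooled estimate towards its own result.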
The forest plot
The plot shows, at a glance, information from the individual studies that went into the meta-analysis, and an estimate of the overall results. It also allows a visual assessment of the amount of variation between the results of the studies (heterogeneity). Figure 2 shows a typical forest plot. This figure is adapted from a recent systematic review and meta-analysis which examined the efficacy of probiotics compared with placebo in the prevention and treatment of diarrhoea associated with the use of antibiotics.14
Description of the forest plot
In the forest plot shown in fig 2, the results of nine studies have been pooled. The names on the left of the plot are the first authors of the primary studies included. The black squares represent the odds ratios of the individual studies, and the horizontal lines their 95% confidence intervals. The area of each black square reflects the weight that trial contributes to the meta-analysis. The 95% confidence interval would contain the true underlying effect on 95% of occasions if the study was repeated again and again. The solid vertical line corresponds to no effect of treatment (OR = 1.0). If the CI includes 1, then the difference in the effect of experimental and control treatment is not significant at conventional levels (p>0.05).15 The overall treatment effect (calculated as a weighted average of the individual ORs) from the meta-analysis and its CI is at the bottom and represented as a diamond. The centre of the diamond represents the combined treatment effect (0.37), and the horizontal tips represent the 95% CI (0.26 to 0.52).

If the diamond shape is on the Left of the line of no effect, then Less (fewer episodes) of the outcome of interest is seen in the treatment group. If the diamond shape is on the Right of the line, then moRe episodes of the outcome of interest are seen in the treatment group. In fig 2, the diamond shape is found on the left of the line of no effect, meaning that less diarrhoea (fewer episodes) was seen in the probiotic group than in the placebo group. If the diamond touches the line of no effect (where the OR is 1), then there is no statistically significant difference between the groups being compared. In fig 2, the diamond shape does not touch the line of no effect (that is, the confidence interval for the odds ratio does not include 1), and this means that the difference found between the two groups was statistically significant.
APPRAISING A SYSTEMATIC REVIEW WITH OR WITHOUT META-ANALYSIS
Although systematic reviews occupy the highest position in the hierarchy of evidence for articles on the effectiveness of interventions,8 it should not be assumed that a study is valid merely because it is stated to be a systematic review. Just as with RCTs, the main issues to consider when appraising a systematic review can be condensed into three important areas8:
- The validity of the trial methodology.
- The magnitude and precision of the treatment effect.
- The applicability of the results to your patient or population.
Box 1 shows a list of ten questions that may be used to appraise a systematic review in all three areas.16
Box 1: Questions to consider when appraising a systematic review16
- Did the review address a clearly focused question?
- Did the review include the right type of study?
- Did the reviewers try to identify all relevant studies?
- Did the reviewers assess the quality of all the studies included?
- If the results of the studies have been combined, was it reasonable to do so?
- How are the results presented and what are the main results?
- How precise are the results?
- Can the results be applied to your local population?
- Were all important outcomes considered?
- Should practice or policy change as a result of the evidence contained in this review?
ASSESSING THE VALIDITY OF TRIAL METHODOLOGY
Focused research question
Like all research reports, the authors should clearly state the research question at the start. The research question should include the relevant population or patient groups being studied, the intervention of interest, any comparators (where relevant), and the outcomes of interest. Keywords from the research question and their synonyms are usually used to identify studies for inclusion in the review.
Types of studies included in the review
The validity of a systematic review or meta-analysis depends heavily on the validity of the studies included. The authors should explicitly state the type of studies they have included in their review, and readers of such reports should decide whether the included studies have the appropriate study design to answer the clinical question. In a recent systematic review which determined the effects of glutamine supplementation on morbidity and weight gain in preterm babies, the investigators based their review only on RCTs.17
Search strategy used to identify relevant articles
There is evidence that single electronic database searches lack sensitivity and relevant articles may be missed if only one database is searched. Dickersin et al showed that only 30–80% of all known published RCTs were identifiable using MEDLINE.18 Even if relevant records are in a database, it can be difficult to retrieve them easily. A comprehensive search is therefore important, not only for ensuring that as many studies as possible are identified but also to minimise selection bias for those that are found. Relying exclusively on one database may retrieve a set of studies that are unrepresentative of all studies that would have been identified through a comprehensive search of multiple sources. Therefore, in order to retrieve all relevant studies on a topic, several different sources should be searched to identify relevant studies (published and unpublished), and the search strategy should not be limited to the English language. The aim of an extensive search is to avoid the problem of publication bias, which occurs when trials with statistically significant results are more likely to be published and cited, and are preferentially published in English language journals and those indexed in Medline.
In the systematic review referred to above, which examined the effects of glutamine supplementation on morbidity and weight gain in preterm babies, the authors searched the Cochrane controlled trials register, Medline, and Embase,17 and they also hand searched selected journals, cross referencing where necessary from other publications.
Quality assessment of included trials
The reviewers should state a predetermined method for assessing the eligibility and quality of the studies included. At least two reviewers should independently assess the quality of the included studies to minimise the risk of selection bias. There is evidence that using at least two reviewers has an important effect on reducing the possibility that relevant reports will be discarded.19
Pooling results and heterogeneity
If the results of the individual studies were pooled in a meta-analysis, it is important to determine whether it was reasonable to do so. A clinical judgment should be made about whether it was reasonable for the studies to be combined, based on whether the individual trials differed considerably in the populations studied, the interventions and comparisons used, or the outcomes measured.
The statistical validity of combining the results of the various trials should be assessed by looking for homogeneity of the outcomes from the various trials. In other words, there should be some consistency in the results of the included trials. One way of doing this is to inspect the graphical display of the results of the individual studies (forest plot, see above), looking for similarities in the direction of the results. When the results differ greatly in their direction, that is, if there is significant heterogeneity, then it may not be wise for the results to be pooled. Some articles may also report a statistical test for heterogeneity, but it should be noted that the statistical power of many meta-analyses is usually too low to allow the detection of heterogeneity based on statistical tests. If a report finds significant heterogeneity among reports, the authors should try to offer explanations for potential sources of the heterogeneity.
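One commonly reported statistical test for heterogeneity is Cochran's Q, often accompanied by the I² statistic; the text does not name a specific test, so this is offered only as an illustration, using four hypothetical study estimates.

```python
# Hypothetical (log odds ratio, standard error) pairs for four trials.
studies = [(-0.9, 0.30), (-0.2, 0.35), (-1.2, 0.40), (-0.6, 0.25)]

weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)

# Cochran's Q: weighted squared deviations of each study from the pooled
# effect. Under the null hypothesis of homogeneity, Q follows a chi-squared
# distribution with (number of studies - 1) degrees of freedom.
q = sum(w * (est - pooled) ** 2 for (est, _), w in zip(studies, weights))
df = len(studies) - 1

# I^2: the percentage of total variation across studies that is due to
# heterogeneity rather than chance.
i_squared = max(0.0, (q - df) / q) * 100

print(f"Q = {q:.2f} on {df} df, I^2 = {i_squared:.0f}%")
```

As the text notes, such tests are often underpowered with few studies, so a non-significant Q does not rule out heterogeneity.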
Magnitude of the treatment effect
Common measures used to report the results of meta-analyses include the odds ratio, relative risk, and mean differences. If the outcome is binary (for example, disease v no disease, remission v no remission), odds ratios or relative risks are used. If the outcome is continuous (for example, blood pressure measurement), mean differences may be used.
ODDS RATIOS AND RELATIVE RISKS
Odds and odds ratio
The odds for a group is defined as the number of patients in the group who achieve the stated end point divided by the number of patients who do not. For example, the odds of acne resolution during treatment with an antibiotic in a group of 10 patients may be 6 to 4 (6 with resolution of acne divided by 4 without = 1.5); in a control group the odds may be 3 to 7 (0.43). The odds ratio, as the name implies, is a ratio of two odds. It is simply defined as the ratio of the odds of the treatment group to the odds of the control group. In our example, the odds ratio of treatment to control group would be 3.5 (1.5 divided by 0.43).
Risk and relative risk
Risk, as opposed to odds, is calculated as the number of patients in the group who achieve the stated end point divided by the total number of patients in the group. Risk ratio or relative risk is a ratio of two "risks". In the example above the risks would be 6 in 10 in the treatment group (6 divided by 10 = 0.6) and 3 in 10 in the control group (0.3), giving a risk ratio, or relative risk, of 2 (0.6 divided by 0.3).
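The acne example can be reproduced in a few lines of code; the counts come straight from the text (6 of 10 responders with the antibiotic, 3 of 10 in the control group).

```python
# 6 of 10 patients respond in the treatment group, 3 of 10 in the control group.
treat_yes, treat_no = 6, 4
control_yes, control_no = 3, 7

# Odds: responders divided by non-responders within each group.
odds_treat = treat_yes / treat_no              # 6/4 = 1.5
odds_control = control_yes / control_no        # 3/7, about 0.43
odds_ratio = odds_treat / odds_control         # about 3.5

# Risk: responders divided by the total number in each group.
risk_treat = treat_yes / (treat_yes + treat_no)            # 6/10 = 0.6
risk_control = control_yes / (control_yes + control_no)    # 3/10 = 0.3
relative_risk = risk_treat / risk_control                  # 2.0

print(f"OR = {odds_ratio:.1f}, RR = {relative_risk:.1f}")
```

The same data therefore give an odds ratio of 3.5 but a relative risk of only 2, which motivates the caution about interpreting the two measures below.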
Interpretation of odds ratios and relative risk
An odds ratio or relative risk greater than 1 indicates an increased likelihood of the stated outcome being achieved in the treatment group. If the odds ratio or relative risk is less than 1, there is a decreased likelihood in the treatment group. A ratio of 1 indicates no difference, that is, the outcome is just as likely to occur in the treatment group as it is in the control group.11 As with all estimates of treatment effect, odds ratios or relative risks reported in a meta-analysis should be accompanied by confidence intervals.
Readers should understand that the odds ratio will be close to the relative risk if the end point occurs relatively infrequently, say in less than 20%.15 If the outcome is more common, then the odds ratio will considerably overestimate the relative risk. The advantages and disadvantages of odds ratios v relative risks in the reporting of the results of meta-analyses have been reviewed elsewhere.12
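This divergence is easy to demonstrate with two made-up 2×2 tables, one with a rare outcome and one with a common outcome; in both, the relative risk is exactly 2.

```python
def odds_ratio(events_t, total_t, events_c, total_c):
    """Odds ratio from event counts and group sizes (treatment, control)."""
    return (events_t / (total_t - events_t)) / (events_c / (total_c - events_c))

def relative_risk(events_t, total_t, events_c, total_c):
    """Relative risk from event counts and group sizes (treatment, control)."""
    return (events_t / total_t) / (events_c / total_c)

# Rare outcome (2% v 1% event rates): the OR is close to the RR.
rr_rare = relative_risk(20, 1000, 10, 1000)   # 2.0
or_rare = odds_ratio(20, 1000, 10, 1000)      # about 2.02

# Common outcome (60% v 30% event rates): the OR overestimates the RR.
rr_common = relative_risk(60, 100, 30, 100)   # 2.0
or_common = odds_ratio(60, 100, 30, 100)      # 3.5

print(rr_rare, or_rare, rr_common, or_common)
```

With the rare outcome the two measures are almost interchangeable; with the common outcome the odds ratio of 3.5 substantially overstates a doubling of risk.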
Precision of the treatment effect: confidence intervals
As stated earlier, confidence intervals should accompany estimates of treatment effects. I discussed the concept of confidence intervals in the second article of the series.8 Ninety five per cent confidence intervals are commonly reported, but other intervals such as 90% or 99% are also sometimes used. The 95% CI of an estimate (for example, of an odds ratio or relative risk) is the range within which we are 95% certain that the true population treatment effect will lie. The width of a confidence interval indicates the precision of the estimate: the wider the interval, the less the precision. A very wide interval makes us less sure about the accuracy of a study in predicting the true size of the effect. If the confidence interval for a relative risk or odds ratio includes 1, then we have been unable to demonstrate a statistically significant difference between the groups being compared; if it does not include 1, then we say that there is a statistically significant difference.
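A standard way of obtaining such an interval for an odds ratio is to work on the log scale (Woolf's method); the 2×2 counts below are hypothetical and are used only to show the mechanics.

```python
import math

# Hypothetical trial: 15/100 events in the treatment group, 30/100 in control.
a, b = 15, 85   # treatment: events, non-events
c, d = 30, 70   # control: events, non-events

odds_ratio = (a / b) / (c / d)
log_or = math.log(odds_ratio)

# Standard error of the log odds ratio (Woolf's method).
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)

# 95% CI: exponentiate the log OR plus or minus 1.96 standard errors.
lower = math.exp(log_or - 1.96 * se)
upper = math.exp(log_or + 1.96 * se)

# Here the whole interval lies below 1, so the difference between the
# groups would be statistically significant at the conventional 5% level.
print(f"OR = {odds_ratio:.2f}, 95% CI {lower:.2f} to {upper:.2f}")
```

Had the upper limit crossed 1, the same point estimate would have been reported as a non-significant difference, which is why the interval matters as much as the estimate itself.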
APPLICABILITY OF RESULTS TO PATIENTS
Health care professionals should always make judgements about whether the results of a particular study are applicable to their own patient or group of patients. Some of the issues that one needs to consider before deciding whether to incorporate a particular piece of research evidence into clinical practice were discussed in the second article of the series.8 These include the similarity of the study population to your population, benefit v harm, patient preferences, availability, and costs.
CONCLUSIONS
Systematic reviews apply scientific strategies to provide, in an explicit way, a summary of all studies addressing a specific question, thereby allowing an account to be taken of the whole range of relevant findings on a particular topic. Meta-analysis, which may accompany a systematic review, can increase the power and precision of estimates of treatment effects. People working in the field of paediatrics and child health should understand the key principles of systematic reviews and meta-analyses, including the ability to apply critical appraisal not only to the methodologies of review articles, but also to the applicability of the results to their own patients.
Copyright information:
Copyright 2005 Archives of Disease in Childhood
Source: https://adc.bmj.com/content/90/8/845