Korean J Anesthesiol. 2018 Apr; 71(2): 103–112.
Introduction to systematic review and meta-analysis
EunJin Ahn
1Department of Anesthesiology and Pain Medicine, Inje University Seoul Paik Hospital, Seoul, Korea
Hyun Kang
2Department of Anesthesiology and Pain Medicine, Chung-Ang University College of Medicine, Seoul, Korea
Received 2017 Dec 13; Revised 2018 Feb 28; Accepted 2018 Mar 14.
Abstract
Systematic reviews and meta-analyses present results by combining and analyzing data from different studies conducted on similar research topics. In recent years, systematic reviews and meta-analyses have been actively performed in various fields including anesthesiology. These research methods are powerful tools that can overcome the difficulties in performing large-scale randomized controlled trials. However, the inclusion of studies with any biases or improperly assessed quality of evidence in systematic reviews and meta-analyses could yield misleading results. Therefore, various guidelines have been suggested for conducting systematic reviews and meta-analyses to help standardize them and improve their quality. Nonetheless, accepting the conclusions of many studies without understanding the meta-analysis can be dangerous. Therefore, this article provides an easy introduction for clinicians to performing and understanding meta-analyses.
Keywords: Anesthesiology, Meta-analysis, Randomized controlled trial, Systematic review
Introduction
A systematic review collects all possible studies related to a given topic and design, and reviews and analyzes their results [1]. During the systematic review process, the quality of studies is evaluated, and a statistical meta-analysis of the study results is conducted on the basis of their quality. A meta-analysis is a valid, objective, and scientific method of analyzing and combining different results. Usually, in order to obtain more reliable results, a meta-analysis is mainly conducted on randomized controlled trials (RCTs), which have a high level of evidence [2] (Fig. 1). Since 1999, various papers have presented guidelines for reporting meta-analyses of RCTs. Following the Quality of Reporting of Meta-analyses (QUOROM) statement [3], and the appearance of registers such as the Cochrane Library's Methodology Register, a large number of systematic literature reviews have been registered. In 2009, the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement [4] was published, and it greatly helped standardize and improve the quality of systematic reviews and meta-analyses [5].
In anesthesiology, the importance of systematic reviews and meta-analyses has been highlighted, and they provide diagnostic and therapeutic value to various areas, including not only perioperative management but also intensive care and outpatient anesthesia [6–13]. Systematic reviews and meta-analyses cover various topics, such as comparing various treatments of postoperative nausea and vomiting [14,15], comparing general anesthesia and regional anesthesia [16–18], comparing airway maintenance devices [8,19], comparing various methods of postoperative pain control (e.g., patient-controlled analgesia pumps, nerve block, or analgesics) [20–23], comparing the precision of various monitoring instruments [7], and meta-analysis of dose-response in various drugs [12].
Thus, literature reviews and meta-analyses are being conducted in diverse medical fields, and the aim of highlighting their importance is to help extract accurate, good quality data from the flood of data being produced. However, a lack of understanding about systematic reviews and meta-analyses can lead to incorrect outcomes being derived from the review and analysis processes. If readers indiscriminately accept the results of the many meta-analyses that are published, incorrect data may be obtained. Therefore, in this review, we aim to describe the contents and methods used in systematic reviews and meta-analyses in a way that is easy to understand for future authors and readers of systematic reviews and meta-analyses.
Study Planning
It is easy to confuse systematic reviews and meta-analyses. A systematic review is an objective, reproducible method to find answers to a certain research question, by collecting all available studies related to that question and reviewing and analyzing their results. A meta-analysis differs from a systematic review in that it uses statistical methods on estimates from two or more different studies to form a pooled estimate [1]. Following a systematic review, if it is not possible to form a pooled estimate, it can be published as is without progressing to a meta-analysis; however, if it is possible to form a pooled estimate from the extracted data, a meta-analysis can be attempted. Systematic reviews and meta-analyses usually proceed according to the flowchart presented in Fig. 2. We explain each of the stages below.
Formulating research questions
A systematic review attempts to gather all available empirical research by using clearly defined, systematic methods to obtain answers to a specific question. A meta-analysis is the statistical process of analyzing and combining results from several similar studies. Here, the definition of the word "similar" is not made clear, but when selecting a topic for the meta-analysis, it is essential to ensure that the different studies present data that can be combined. If the studies contain data on the same topic that can be combined, a meta-analysis can even be performed using data from only two studies. However, study selection via a systematic review is a precondition for performing a meta-analysis, and it is important to clearly define the Population, Intervention, Comparison, Outcomes (PICO) parameters that are central to evidence-based research. In addition, selection of the research topic is based on logical evidence, and it is important to select a topic that is familiar to readers but for which the evidence has not yet been clearly confirmed [24].
Protocols and registration
In systematic reviews, prior registration of a detailed research plan is very important. In order to make the research process transparent, primary/secondary outcomes and methods are set in advance, and in the event of changes to the method, other researchers and readers are informed when, how, and why. Many studies are registered with an organization like PROSPERO (http://www.crd.york.ac.uk/PROSPERO/), and the registration number is recorded when reporting the study, in order to share the protocol at the time of planning.
Defining inclusion and exclusion criteria
Information is included on the study design, patient characteristics, publication status (published or unpublished), language used, and research period. If there is a discrepancy between the number of patients included in the study and the number of patients included in the analysis, this needs to be clearly explained while describing the patient characteristics, to avoid confusing the reader.
Literature search and study selection
In order to secure a proper basis for evidence-based research, it is essential to perform a broad search that includes as many studies as possible that meet the inclusion and exclusion criteria. Typically, the three bibliographic databases Medline, Embase, and Cochrane Central Register of Controlled Trials (CENTRAL) are used. In domestic studies, the Korean databases KoreaMed, KMBASE, and RISS4U may be included. Effort is required to identify not only published studies but also abstracts, ongoing studies, and studies awaiting publication. Among the studies retrieved in the search, the researchers remove duplicate studies, select studies that meet the inclusion/exclusion criteria based on the abstracts, and then make the final selection of studies based on their full text. In order to maintain transparency and objectivity throughout this process, study selection is conducted independently by at least two investigators. When there is an inconsistency in opinions, intervention is required via debate or by a third reviewer. The methods for this process also need to be planned in advance. It is essential to ensure the reproducibility of the literature selection process [25].
Quality of evidence
However well planned the systematic review or meta-analysis is, if the quality of evidence in the studies is low, the quality of the meta-analysis decreases and incorrect results can be obtained [26]. Even when using randomized studies with a high quality of evidence, evaluating the quality of evidence precisely helps determine the strength of recommendations in the meta-analysis. One method of evaluating the quality of evidence in non-randomized studies is the Newcastle-Ottawa Scale, provided by the Ottawa Hospital Research Institute1). However, we are mostly focusing on meta-analyses that use randomized studies.
If the Grading of Recommendations, Assessment, Development and Evaluations (GRADE) system (http://www.gradeworkinggroup.org/) is used, the quality of evidence is evaluated on the basis of the study limitations, inaccuracies, incompleteness of outcome data, indirectness of evidence, and risk of publication bias, and this is used to determine the strength of recommendations [27]. As shown in Table 1, the study limitations are evaluated using the "risk of bias" method proposed by Cochrane2). This method classifies bias in randomized studies as "low," "high," or "unclear" on the basis of the presence or absence of six processes (random sequence generation, allocation concealment, blinding participants or investigators, incomplete outcome data, selective reporting, and other biases) [28].
Table 1.
Domain | Support for judgement | Review authors' judgement |
---|---|---|
Sequence generation | Describe the method used to generate the allocation sequence in sufficient detail to allow an assessment of whether it should produce comparable groups. | Selection bias (biased allocation to interventions) due to inadequate generation of a randomized sequence. |
Allocation concealment | Describe the method used to conceal the allocation sequence in sufficient detail to determine whether intervention allocations could have been foreseen in advance of, or during, enrollment. | Selection bias (biased allocation to interventions) due to inadequate concealment of allocations prior to assignment. |
Blinding | Describe all measures used, if any, to blind study participants and personnel from knowledge of which intervention a participant received. | Performance bias due to knowledge of the allocated interventions by participants and personnel during the study. |
Describe all measures used, if any, to blind study outcome assessors from knowledge of which intervention a participant received. | Detection bias due to knowledge of the allocated interventions by outcome assessors. | |
Incomplete outcome data | Describe the completeness of outcome data for each main outcome, including attrition and exclusions from the analysis. State whether attrition and exclusions were reported, the numbers in each intervention group, reasons for attrition/exclusions where reported, and any re-inclusions in analyses performed by the review authors. | Attrition bias due to amount, nature, or handling of incomplete outcome data. |
Selective reporting | State how the possibility of selective outcome reporting was examined by the review authors, and what was found. | Reporting bias due to selective outcome reporting. |
Other bias | State any important concerns about bias not addressed in the other domains in the tool. | Bias due to problems not covered elsewhere in the table. |
If particular questions/entries were prespecified in the review's protocol, responses should be provided for each question/entry. |
Data extraction
Two different investigators extract data based on the objectives and form of the study; thereafter, the extracted data are reviewed. Since the size and format of each variable are different, the size and format of the outcomes are also different, and slight changes may be required when combining the data [29]. If there are differences in the size and format of the outcome variables that cause difficulties combining the data, such as the use of different evaluation instruments or different evaluation timepoints, the analysis may be limited to a systematic review. The investigators resolve differences of opinion by debate, and if they fail to reach a consensus, a third reviewer is consulted.
Data Analysis
The aim of a meta-analysis is to derive a conclusion with increased power and accuracy beyond what could be achieved in individual studies. Therefore, before analysis, it is crucial to evaluate the direction of effect, size of effect, homogeneity of effects among studies, and strength of evidence [30]. Thereafter, the data are reviewed qualitatively and quantitatively. If it is determined that the different research outcomes cannot be combined, all the results and characteristics of the individual studies are displayed in a table or in a descriptive form; this is referred to as a qualitative review. A meta-analysis is a quantitative review, in which the clinical effectiveness is evaluated by calculating the weighted pooled estimate for the interventions in at least two separate studies.
The pooled estimate is the result of the meta-analysis, and is typically explained using a forest plot (Figs. 3 and 4). The black squares in the forest plot are the odds ratios (ORs) and 95% confidence intervals in each study. The area of the squares represents the weight reflected in the meta-analysis. The black diamond represents the OR and 95% confidence interval calculated across all the included studies. The bold vertical line represents a lack of therapeutic effect (OR = 1); if the confidence interval includes OR = 1, it means no significant difference was found between the treatment and control groups.
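As a rough sketch of what lies behind each row of a forest plot, the OR and its 95% confidence interval for a single study can be computed from that study's 2 × 2 table. The counts below are hypothetical, not taken from any study cited here, and the Woolf log-OR standard error is one of several conventions:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and 95% CI for one study's 2x2 table: a/b = events/non-events
    in the treatment group, c/d = events/non-events in the control group."""
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) via the Woolf (inverse cell count) formula.
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical study: 15/85 events in treatment, 25/75 in control.
or_, lo, hi = odds_ratio_ci(15, 85, 25, 75)
print(or_, lo, hi)
```

Because the interval here spans OR = 1, this single study on its own would not show a significant difference between the groups.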
Dichotomous variables and continuous variables
In data analysis, outcome variables can be considered broadly in terms of dichotomous variables and continuous variables. When combining data from continuous variables, the mean difference (MD) and standardized mean difference (SMD) are used (Table 2).
Table 2.
Type of data | Effect measure | Fixed-effect methods | Random-effect methods |
---|---|---|---|
Dichotomous | Odds ratio (OR) | Mantel-Haenszel (M-H) | Mantel-Haenszel (M-H) |
 | | Inverse variance (IV) | Inverse variance (IV) |
 | | Peto | |
 | Risk ratio (RR), | Mantel-Haenszel (M-H) | Mantel-Haenszel (M-H) |
 | Risk difference (RD) | Inverse variance (IV) | Inverse variance (IV) |
Continuous | Mean difference (MD), Standardized mean difference (SMD) | Inverse variance (IV) | Inverse variance (IV) |
The MD is the absolute difference in mean values between the groups, and the SMD is the mean difference between groups divided by the standard deviation. When results are presented in the same units, the MD can be used, but when results are presented in different units, the SMD should be used. When the MD is used, the combined units must be shown. A value of "0" for the MD or SMD indicates that the effects of the new treatment method and the existing treatment method are the same. A value lower than "0" means the new treatment method is less effective than the existing method, and a value greater than "0" means the new treatment is more effective than the existing method.
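To make the distinction concrete, here is a minimal sketch of computing the MD and SMD from raw scores in two groups. The scores are hypothetical, and the SMD uses a pooled standard deviation (Cohen's d), which is one of several conventions:

```python
import statistics as st

def md_smd(group1, group2):
    """Mean difference and standardized mean difference (pooled-SD
    Cohen's d) between two groups of raw scores."""
    m1, m2 = st.mean(group1), st.mean(group2)
    n1, n2 = len(group1), len(group2)
    v1, v2 = st.variance(group1), st.variance(group2)  # sample variances
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    md = m1 - m2
    return md, md / pooled_sd

# Hypothetical pain scores in the same units, so the MD is interpretable.
new = [3.1, 2.8, 3.5, 2.9, 3.2]
old = [4.0, 3.7, 4.4, 3.9, 4.1]
md, smd = md_smd(new, old)
```

A negative value indicates lower scores (less pain) with the new treatment; the SMD would be the measure to pool if other studies reported pain on different scales.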
When combining data for dichotomous variables, the OR, risk ratio (RR), or risk difference (RD) can be used. The RR and RD can be used for RCTs, quasi-experimental studies, or cohort studies, and the OR can be used for other case-control studies or cross-sectional studies. However, because the OR is difficult to interpret, using the RR and RD, if possible, is recommended. If the outcome variable is a dichotomous variable, it can be presented as the number needed to treat (NNT), which is the minimum number of patients who need to be treated in the intervention group, compared to the control group, for a given event to occur in at least one patient. Based on Table 3, in an RCT, if x is the probability of the event occurring in the control group and y is the probability of the event occurring in the intervention group, then x = c/(c + d), y = a/(a + b), and the absolute risk reduction (ARR) = x − y. The NNT can be obtained as the reciprocal, 1/ARR.
Table 3.
Event occurred | Event not occurred | Sum |
---|---|---|---|
Intervention | A | B | a + b |
Control | C | D | c + d |
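The ARR and NNT calculation described above can be sketched directly from the cell counts of Table 3; the counts below are hypothetical:

```python
def nnt(a, b, c, d):
    """Number needed to treat from the 2x2 table of Table 3:
    a/b = events/non-events with the intervention, c/d = with control."""
    y = a / (a + b)   # probability of the event, intervention group
    x = c / (c + d)   # probability of the event, control group
    arr = x - y       # absolute risk reduction
    return 1 / arr

# Hypothetical counts: 10/90 events with intervention, 20/80 with control,
# so ARR = 0.2 - 0.1 = 0.1 and NNT = 10.
print(round(nnt(10, 90, 20, 80)))
```

That is, on average ten patients would need to receive the intervention instead of the control to prevent the event in one additional patient.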
Fixed-effect models and random-effect models
In order to analyze effect size, two types of models can be used: a fixed-effect model or a random-effect model. A fixed-effect model assumes that the effect of treatment is the same, and that variation between results in different studies is due to random error. Thus, a fixed-effect model can be used when the studies are considered to have the same design and methodology, or when the variability in results within a study is small, and the variance is thought to be due to random error. Three common methods are used for weighted estimation in a fixed-effect model: 1) inverse variance-weighted estimation3), 2) Mantel-Haenszel estimation4), and 3) Peto estimation5).
A random-effect model assumes heterogeneity between the studies being combined, and these models are used when the studies are assumed different, even if a heterogeneity test does not show a significant result. Unlike a fixed-effect model, a random-effect model assumes that the size of the effect of treatment differs among studies. Thus, differences in variation among studies are thought to be due to not only random error but also between-study variability in results. Therefore, weight does not decrease greatly for studies with a small number of patients. Among methods for weighted estimation in a random-effect model, the DerSimonian and Laird method6) is mostly used for dichotomous variables, as the simplest method, while inverse variance-weighted estimation is used for continuous variables, as with fixed-effect models. These four methods are all used in Review Manager software (The Cochrane Collaboration, UK), and are described in a study by Deeks et al. [31] (Table 2). However, when the number of studies included in the analysis is less than 10, the Hartung-Knapp-Sidik-Jonkman method7) can better reduce the risk of type 1 error than does the DerSimonian and Laird method [32].
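As an illustration of how the two models differ, here is a minimal sketch of inverse variance-weighted fixed-effect pooling alongside a DerSimonian and Laird random-effect estimate of the same data. The three log ORs and their variances are hypothetical:

```python
def pool(effects, variances):
    """Fixed-effect (inverse variance-weighted) and DerSimonian-Laird
    random-effect pooled estimates. `effects` would typically be
    log odds ratios or mean differences, one per study."""
    w = [1 / v for v in variances]
    fixed = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)
    # Cochran's Q and the DL between-study variance tau^2.
    q = sum(wi * (ei - fixed) ** 2 for wi, ei in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    # Random-effect weights add tau^2 to each within-study variance.
    w_star = [1 / (v + tau2) for v in variances]
    random = sum(wi * ei for wi, ei in zip(w_star, effects)) / sum(w_star)
    return fixed, random

# Hypothetical log odds ratios and variances from three studies; the
# small, imprecise study (variance 0.25) gains relative weight under
# the random-effect model, pulling the pooled estimate toward it.
fixed, random = pool([-0.5, -0.1, 0.3], [0.04, 0.09, 0.25])
```

Note how the random-effect estimate sits closer to the small study's value than the fixed-effect estimate does, which is exactly the small study effect discussed below.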
Fig. 3 shows the results of analyzing outcome data using a fixed-effect model (A) and a random-effect model (B). As shown in Fig. 3, while the results from large studies are weighted more heavily in the fixed-effect model, studies are given relatively similar weights irrespective of study size in the random-effect model. Although identical data were being analyzed, as shown in Fig. 3, the significant result in the fixed-effect model was no longer significant in the random-effect model. One representative example of the small study effect in a random-effect model is the meta-analysis by Li et al. [33]. In a large-scale study, intravenous injection of magnesium was unrelated to acute myocardial infarction, but in the random-effect model, which included numerous small studies, the small study effect resulted in an association being found between intravenous injection of magnesium and myocardial infarction. This small study effect can be controlled for by using a sensitivity analysis, which is performed to examine the contribution of each of the included studies to the final meta-analysis result. In particular, when heterogeneity is suspected in the study methods or results, by changing certain data or analytical methods, this method makes it possible to verify whether the changes affect the robustness of the results, and to examine the causes of such effects [34].
Heterogeneity
A homogeneity test determines whether the degree of heterogeneity is greater than would be expected to occur naturally when the effect size calculated from several studies is higher than the sampling error. This makes it possible to test whether the effect size calculated from several studies is the same. Three types of homogeneity tests can be used: 1) forest plot, 2) Cochrane's Q test (chi-squared), and 3) Higgins' I² statistic. In the forest plot, as shown in Fig. 4, greater overlap between the confidence intervals indicates greater homogeneity. For the Q statistic, when the P value of the chi-squared test, calculated from the forest plot in Fig. 4, is less than 0.1, it is considered to show statistical heterogeneity and a random-effect model can be used. Finally, I² can be used [35].
I² = 100% × (Q − df)/Q, where Q is the chi-squared statistic and df is the degrees of freedom of the Q statistic.
I², calculated as shown above, returns a value between 0 and 100%. A value less than 25% is considered to show strong homogeneity, a value of 50% is average, and a value greater than 75% indicates strong heterogeneity.
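The formula above can be sketched in a couple of lines; the Q value used here is hypothetical:

```python
def i_squared(q, df):
    """Higgins' I^2 from Cochran's Q statistic and its degrees of
    freedom (number of studies minus one), clipped at 0%."""
    return max(0.0, 100.0 * (q - df) / q)

# e.g., a hypothetical Q = 20 across 6 studies (df = 5):
print(i_squared(20, 5))  # 75.0, i.e., strong heterogeneity
```

When Q is smaller than its degrees of freedom, the raw formula goes negative, so I² is conventionally reported as 0% in that case.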
Even when the data cannot be shown to be homogeneous, a fixed-effect model can be used, ignoring the heterogeneity, and all the study results can be presented individually, without combining them. However, in many cases, a random-effect model is applied, as described above, and a subgroup analysis or meta-regression analysis is performed to explain the heterogeneity. In a subgroup analysis, the data are divided into subgroups that are expected to be homogeneous, and these subgroups are analyzed. This needs to be planned in the predetermined protocol before starting the meta-analysis. A meta-regression analysis is similar to a normal regression analysis, except that the heterogeneity between studies is modeled. This process involves performing a regression analysis of the pooled estimate for covariates at the study level, and so it is usually not considered when the number of studies is less than 10. Here, univariate and multivariate regression analyses can both be considered.
Publication bias
Publication bias is the most common type of reporting bias in meta-analyses. It refers to the distortion of meta-analysis outcomes due to the higher likelihood of publication of statistically significant studies rather than non-significant studies. In order to test the presence or absence of publication bias, first, a funnel plot can be used (Fig. 5). Studies are plotted on a scatter plot with effect size on the x-axis and precision or total sample size on the y-axis. If the points form an upside-down funnel shape, with a broad base that narrows towards the top of the plot, this indicates the absence of a publication bias (Fig. 5A) [29,36]. On the other hand, if the plot shows an asymmetric shape, with no points on one side of the graph, then publication bias can be suspected (Fig. 5B). Second, to test publication bias statistically, Begg and Mazumdar's rank correlation test8) [37] or Egger's test9) [29] can be used. If publication bias is detected, the trim-and-fill method10) can be used to correct the bias [38]. Fig. 6 displays results that show publication bias in Egger's test, which have then been corrected using the trim-and-fill method using Comprehensive Meta-Analysis software (Biostat, USA).
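The idea behind Egger's test can be sketched as an ordinary least-squares regression of each study's standard normal deviate (effect / SE) on its precision (1 / SE): an intercept far from zero suggests funnel plot asymmetry. This sketch computes only the intercept; a real analysis would also report its standard error and P value. The effect sizes below are hypothetical, constructed so that smaller (high-SE) studies show larger effects:

```python
def egger_intercept(effects, ses):
    """Intercept of the regression of standard normal deviates
    (effect / SE) on precision (1 / SE), after Egger's test."""
    z = [e / s for e, s in zip(effects, ses)]
    p = [1 / s for s in ses]
    n = len(z)
    p_mean = sum(p) / n
    z_mean = sum(z) / n
    # Ordinary least-squares slope and intercept.
    slope = (sum((pi - p_mean) * (zi - z_mean) for pi, zi in zip(p, z))
             / sum((pi - p_mean) ** 2 for pi in p))
    return z_mean - slope * p_mean

# Hypothetical: the least precise studies (SE 0.5, 0.4) report the
# largest effects, so the intercept drifts away from zero.
b0 = egger_intercept([0.9, 0.7, 0.4, 0.3], [0.5, 0.4, 0.2, 0.1])
```

A clearly positive intercept like this one is the pattern expected when small studies with large effects are over-represented, as in an asymmetric funnel plot.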
Result Presentation
When reporting the results of a systematic review or meta-analysis, the analytical content and methods should be described in detail. First, a flowchart is displayed with the literature search and selection process according to the inclusion/exclusion criteria. Second, a table is shown with the characteristics of the included studies. A table should also be included with information related to the quality of evidence, such as GRADE (Table 4). Third, the results of data analysis are shown in a forest plot and funnel plot. Fourth, if the results use dichotomous data, the NNT values can be reported, as described above.
Tabular array 4.
Quality assessment | Number of patients | Effect | Quality | Importance |
---|---|---|---|---|---|---|---|---|---|---|---|
 | N | ROB | Inconsistency | Indirectness | Imprecision | Others | Palonosetron (%) | Ramosetron (%) | RR (CI) | | |
PON | 6 | Serious | Serious | Not serious | Not serious | None | 81/304 (26.6) | 80/305 (26.2) | 0.92 (0.54 to 1.58) | Very low | Important |
POV | 5 | Serious | Serious | Not serious | Not serious | None | 55/274 (20.1) | 60/275 (21.8) | 0.87 (0.48 to 1.57) | Very low | Important |
PONV | 3 | Not serious | Serious | Not serious | Not serious | None | 108/184 (58.7) | 107/186 (57.5) | 0.92 (0.54 to 1.58) | Low | Important |
When Review Manager software (The Cochrane Collaboration, UK) is used for the analysis, two types of P values are given. The first is the P value from the z-test, which tests the null hypothesis that the intervention has no effect. The second P value is from the chi-squared test, which tests the null hypothesis for a lack of heterogeneity. The statistical result for the intervention effect, which is generally considered the most important result in meta-analyses, is the z-test P value.
A common error when reporting results is, given a z-test P value greater than 0.05, to say there was "no statistical significance" or "no difference." When evaluating statistical significance in a meta-analysis, a P value lower than 0.05 can be explained as "a significant difference in the effects of the two treatment methods." However, the P value may appear non-significant whether or not there is a difference between the two treatment methods. In such a situation, it is better to state "there was no strong evidence for an effect," and to present the P value and confidence intervals. Another common mistake is to think that a smaller P value is indicative of a more significant result. In meta-analyses of large-scale studies, the P value is more greatly affected by the number of studies and patients included than by the significance of the results; therefore, care should be taken when interpreting the results of a meta-analysis.
Conclusion
When performing a systematic literature review or meta-analysis, if the quality of studies is not properly evaluated or if proper methodology is not strictly applied, the results can be biased and the outcomes can be incorrect. However, when systematic reviews and meta-analyses are properly implemented, they can yield powerful results that could usually only be achieved using large-scale RCTs, which are difficult to perform in individual studies. As our understanding of evidence-based medicine increases and its importance is better appreciated, the number of systematic reviews and meta-analyses will keep increasing. However, indiscriminate acceptance of the results of all these meta-analyses can be dangerous, and hence, we recommend that their results be received critically on the basis of a more accurate understanding.
Footnotes
1) http://www.ohri.ca.
2) http://methods.cochrane.org/bias/assessing-risk-bias-included-studies.
3) The inverse variance-weighted estimation method is useful if the number of studies is small with large sample sizes.
4) The Mantel-Haenszel estimation method is useful if the number of studies is large with small sample sizes.
5) The Peto estimation method is useful if the event rate is low or one of the two groups shows zero incidence.
6) The most popular and simplest statistical method used in Review Manager and Comprehensive Meta-Analysis software.
7) An alternative random-effect model meta-analysis that has more adequate error rates than does the common DerSimonian and Laird method, especially when the number of studies is small. However, even with the Hartung-Knapp-Sidik-Jonkman method, when there are less than five studies with very unequal sizes, extra caution is needed.
8)The Begg and Mazumdar rank correlation test uses the correlation between the ranks of effect sizes and the ranks of their variances [37].
9) The degree of funnel plot asymmetry as measured by the intercept from the regression of standard normal deviates against precision [29].
10) If there are more small studies on one side, we expect the suppression of studies on the other side. Trimming yields the adjusted effect size and reduces the variance of the effects by adding the original studies back into the analysis as a mirror image of each study.
References
1. Kang H. Statistical considerations in meta-analysis. Hanyang Med Rev. 2015;35:23–32. [Google Scholar]
two. Uetani 1000, Nakayama T, Ikai H, Yonemoto North, Moher D. Quality of reports on randomized controlled trials conducted in Japan: evaluation of adherence to the Consort statement. Intern Med. 2009;48:307–13. [PubMed] [Google Scholar]
iii. Moher D, Cook DJ, Eastwood Southward, Olkin I, Rennie D, Stroup DF. Improving the quality of reports of meta-analyses of randomised controlled trials: the QUOROM statement. Quality of Reporting of Meta-analyses. Lancet. 1999;354:1896–900. [PubMed] [Google Scholar]
4. Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, Ioannidis JP, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. J Clin Epidemiol. 2009;62:e1–34. [PubMed] [Google Scholar]
5. Willis BH, Quigley M. The assessment of the quality of reporting of meta-analyses in diagnostic research: a systematic review. BMC Med Res Methodol. 2011;11:163. [PMC free article] [PubMed] [Google Scholar]
6. Chebbout R, Heywood EG, Drake TM, Wild JR, Lee J, Wilson M, et al. A systematic review of the incidence of and risk factors for postoperative atrial fibrillation following general surgery. Anaesthesia. 2018;73:490–8. [PubMed] [Google Scholar]
7. Chiang MH, Wu SC, Hsu SW, Chin JC. Bispectral Index and non-Bispectral Index anesthetic protocols on postoperative recovery outcomes. Minerva Anestesiol. 2018;84:216–28. [PubMed] [Google Scholar]
8. Damodaran S, Sethi S, Malhotra SK, Samra T, Maitra S, Saini V. Comparison of oropharyngeal leak pressure of air-Q, i-gel, and laryngeal mask airway supreme in adult patients during general anesthesia: a randomized controlled trial. Saudi J Anaesth. 2017;11:390–5. [PMC free article] [PubMed] [Google Scholar]
9. Kim MS, Park JH, Choi YS, Park SH, Shin S. Efficacy of palonosetron vs. ramosetron for the prevention of postoperative nausea and vomiting: a meta-analysis of randomized controlled trials. Yonsei Med J. 2017;58:848–58. [PMC free article] [PubMed] [Google Scholar]
10. Lam T, Nagappa M, Wong J, Singh M, Wong D, Chung F. Continuous pulse oximetry and capnography monitoring for postoperative respiratory depression and adverse events: a systematic review and meta-analysis. Anesth Analg. 2017;125:2019–29. [PubMed] [Google Scholar]
11. Landoni G, Biondi-Zoccai GG, Zangrillo A, Bignami E, D'Avolio S, Marchetti C, et al. Desflurane and sevoflurane in cardiac surgery: a meta-analysis of randomized clinical trials. J Cardiothorac Vasc Anesth. 2007;21:502–11. [PubMed] [Google Scholar]
12. Lee A, Ngan Kee WD, Gin T. A dose-response meta-analysis of prophylactic intravenous ephedrine for the prevention of hypotension during spinal anesthesia for elective cesarean delivery. Anesth Analg. 2004;98:483–90. [PubMed] [Google Scholar]
13. Xia ZQ, Chen SQ, Yao X, Xie CB, Wen SH, Liu KX. Clinical benefits of dexmedetomidine versus propofol in adult intensive care unit patients: a meta-analysis of randomized clinical trials. J Surg Res. 2013;185:833–43. [PubMed] [Google Scholar]
14. Ahn E, Choi G, Kang H, Baek C, Jung Y, Woo Y, et al. Palonosetron and ramosetron compared for effectiveness in preventing postoperative nausea and vomiting: a systematic review and meta-analysis. PLoS One. 2016;11:e0168509. [PMC free article] [PubMed] [Google Scholar]
15. Ahn EJ, Kang H, Choi GJ, Baek CW, Jung YH, Woo YC. The effectiveness of midazolam for preventing postoperative nausea and vomiting: a systematic review and meta-analysis. Anesth Analg. 2016;122:664–76. [PubMed] [Google Scholar]
16. Yeung J, Patel V, Champaneria R, Dretzke J. Regional versus general anaesthesia in elderly patients undergoing surgery for hip fracture: protocol for a systematic review. Syst Rev. 2016;5:66. [PMC free article] [PubMed] [Google Scholar]
17. Zorrilla-Vaca A, Healy RJ, Mirski MA. A comparison of regional versus general anesthesia for lumbar spine surgery: a meta-analysis of randomized studies. J Neurosurg Anesthesiol. 2017;29:415–25. [PubMed] [Google Scholar]
18. Zuo D, Jin C, Shan M, Zhou L, Li Y. A comparison of general versus regional anesthesia for hip fracture surgery: a meta-analysis. Int J Clin Exp Med. 2015;8:20295–301. [PMC free article] [PubMed] [Google Scholar]
19. Ahn EJ, Choi GJ, Kang H, Baek CW, Jung YH, Woo YC, et al. Comparative efficacy of the air-q intubating laryngeal airway during general anesthesia in pediatric patients: a systematic review and meta-analysis. Biomed Res Int. 2016;2016:6406391. [PMC free article] [PubMed] [Google Scholar]
20. Kirkham KR, Grape S, Martin R, Albrecht E. Analgesic efficacy of local infiltration analgesia vs. femoral nerve block after anterior cruciate ligament reconstruction: a systematic review and meta-analysis. Anaesthesia. 2017;72:1542–53. [PubMed] [Google Scholar]
21. Tang Y, Tang X, Wei Q, Zhang H. Intrathecal morphine versus femoral nerve block for pain control after total knee arthroplasty: a meta-analysis. J Orthop Surg Res. 2017;12:125. [PMC free article] [PubMed] [Google Scholar]
22. Hussain N, Goldar G, Ragina N, Banfield L, Laffey JG, Abdallah FW. Suprascapular and interscalene nerve block for shoulder surgery: a systematic review and meta-analysis. Anesthesiology. 2017;127:998–1013. [PubMed] [Google Scholar]
23. Wang K, Zhang HX. Liposomal bupivacaine versus interscalene nerve block for pain control after total shoulder arthroplasty: a systematic review and meta-analysis. Int J Surg. 2017;46:61–70. [PubMed] [Google Scholar]
24. Stewart LA, Clarke M, Rovers M, Riley RD, Simmonds M, Stewart G, et al. Preferred reporting items for systematic review and meta-analyses of individual participant data: the PRISMA-IPD Statement. JAMA. 2015;313:1657–65. [PubMed] [Google Scholar]
25. Kang H. How to understand and conduct evidence-based medicine. Korean J Anesthesiol. 2016;69:435–45. [PMC free article] [PubMed] [Google Scholar]
26. Guyatt GH, Oxman AD, Vist GE, Kunz R, Falck-Ytter Y, Alonso-Coello P, et al. GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ. 2008;336:924–6. [PMC free article] [PubMed] [Google Scholar]
27. Dijkers M. Introducing GRADE: a systematic approach to rating evidence in systematic reviews and to guideline development. Knowl Translat Update. 2013;1:1–9. [Google Scholar]
28. Higgins JP, Altman DG, Sterne JA. Chapter 8: Assessing the risk of bias in included studies. In: Cochrane Handbook for Systematic Reviews of Interventions: The Cochrane Collaboration 2011. updated 2017 Jun. cited 2017 Dec 13. Available from http://handbook.cochrane.org.
29. Egger M, Schneider M, Davey Smith G. Spurious precision? Meta-analysis of observational studies. BMJ. 1998;316:140–4. [PMC free article] [PubMed] [Google Scholar]
30. Higgins JP, Altman DG, Sterne JA. Chapter 9: Assessing the risk of bias in included studies. In: Cochrane Handbook for Systematic Reviews of Interventions: The Cochrane Collaboration 2011. updated 2017 Jun. cited 2017 Dec 13. Available from http://handbook.cochrane.org.
31. Deeks JJ, Altman DG, Bradburn MJ. Statistical methods for examining heterogeneity and combining results from several studies in meta-analysis. In: Egger M, Smith GD, Altman DG, editors. Systematic Reviews in Health Care. London: BMJ Publishing Group; 2008. pp. 285–312. [Google Scholar]
32. IntHout J, Ioannidis JP, Borm GF. The Hartung-Knapp-Sidik-Jonkman method for random effects meta-analysis is straightforward and considerably outperforms the standard DerSimonian-Laird method. BMC Med Res Methodol. 2014;14:25. [PMC free article] [PubMed] [Google Scholar]
33. Li J, Zhang Q, Zhang M, Egger M. Intravenous magnesium for acute myocardial infarction. Cochrane Database Syst Rev. 2007;(2):CD002755. [PMC free article] [PubMed] [Google Scholar]
34. Thompson SG. Controversies in meta-analysis: the case of the trials of serum cholesterol reduction. Stat Methods Med Res. 1993;2:173–92. [PubMed] [Google Scholar]
35. Higgins JP, Thompson SG, Deeks JJ, Altman DG. Measuring inconsistency in meta-analyses. BMJ. 2003;327:557–60. [PMC free article] [PubMed] [Google Scholar]
36. Sutton AJ, Abrams KR, Jones DR. An illustrated guide to the methods of meta-analysis. J Eval Clin Pract. 2001;7:135–48. [PubMed] [Google Scholar]
37. Begg CB, Mazumdar M. Operating characteristics of a rank correlation test for publication bias. Biometrics. 1994;50:1088–101. [PubMed] [Google Scholar]
38. Duval S, Tweedie R. Trim and fill: a simple funnel-plot-based method of testing and adjusting for publication bias in meta-analysis. Biometrics. 2000;56:455–63. [PubMed] [Google Scholar]
Articles from Korean Journal of Anesthesiology are provided here courtesy of the Korean Society of Anesthesiologists