The effect sizes and CIs obtained from propensity score matching (PSM) analyses in some studies were also extracted and were regarded as high-quality results. We also recorded quality indicators of study design, including the presence of appropriate controls, the covariates adjusted for in multivariate analysis, and the characteristics matched in propensity score matching analysis. We contacted the authors when pertinent data were not reported in the published article (e.g. the unadjusted odds ratio and 95% CI); answers were provided by five authors.[29, 30, 34, 37, 41] When no response was received and raw data were available in the article, unadjusted effect estimates were calculated manually for inclusion in our meta-analysis; otherwise, such analyses were excluded. We followed the Meta-analysis of Observational Studies in Epidemiology (MOOSE)[50] guidelines in our data extraction, analysis, and reporting.
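As an illustration of that recalculation step, the sketch below derives an unadjusted OR and its 95% CI from a 2x2 table using the standard Woolf method; the counts are hypothetical and this is only a sketch, not the authors' code:

```python
# Illustrative sketch only (hypothetical counts): recomputing an unadjusted
# OR and its 95% CI from a 2x2 table reported in a primary study.
import math

a, b = 15, 85   # exposed group: events, non-events (hypothetical)
c, d = 8, 92    # unexposed group: events, non-events (hypothetical)

or_unadj = (a * d) / (b * c)                          # unadjusted odds ratio
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)          # Woolf standard error of log OR
lo = math.exp(math.log(or_unadj) - 1.96 * se_log_or)  # 95% CI lower bound
hi = math.exp(math.log(or_unadj) + 1.96 * se_log_or)  # 95% CI upper bound
print(f"Unadjusted OR = {or_unadj:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

For these hypothetical counts the result is an OR of roughly 2.0 with a 95% CI of about 0.8 to 5.0.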

Briefly, pooled ORs were computed as the Mantel-Haenszel-weighted average of the ORs of all included studies. Statistical heterogeneity across studies was tested with the Cochran Q statistic (P < 0.05) and quantified with the I2 statistic. The I2 statistic is derived from the Q statistic ([Q - df]/Q × 100), where df is the degrees of freedom, and describes the proportion of variation in effect estimates that is attributable to heterogeneity across studies. We pooled the results using a fixed-effects model when I2 was less than 50%, and the random-effects model described by DerSimonian and Laird when I2 was greater than 50%.[51] Galbraith plots were used to visualize the impact of individual studies on the overall homogeneity test statistic. Meta-regression was used to evaluate the amount of heterogeneity in the subgroup analyses. Funnel plots were used to visualize publication bias, and the Begg and Egger tests were used to assess potential publication bias.[52]
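To make these formulas concrete, here is a minimal sketch in Python using hypothetical 2x2 counts; it is not the authors' analysis code (the published analysis was run in Stata). It computes the Mantel-Haenszel pooled OR, the Cochran Q statistic, I2 as ([Q - df]/Q) × 100, and applies the 50% threshold to choose between the fixed-effects and DerSimonian-Laird random-effects estimates:

```python
# Illustrative sketch (hypothetical data): Mantel-Haenszel pooled OR,
# Cochran Q, I2, and the fixed- vs random-effects choice described above.
import numpy as np
from scipy import stats

# Each row: events_exposed, nonevents_exposed, events_unexposed, nonevents_unexposed
tables = np.array([
    [12, 88, 7, 93],
    [30, 170, 18, 182],
    [9, 41, 15, 35],
], dtype=float)

a, b, c, d = tables.T
n = a + b + c + d

# Mantel-Haenszel weighted average of the study ORs
or_mh = np.sum(a * d / n) / np.sum(b * c / n)

# Study-level log ORs and Woolf variances, used for the heterogeneity statistics
log_or = np.log(a * d / (b * c))
var_log_or = 1 / a + 1 / b + 1 / c + 1 / d
w = 1 / var_log_or                       # inverse-variance weights

# Cochran Q and I2 = (Q - df) / Q * 100
pooled_fixed = np.sum(w * log_or) / np.sum(w)
Q = np.sum(w * (log_or - pooled_fixed) ** 2)
df = len(tables) - 1
p_het = stats.chi2.sf(Q, df)
i2 = max(0.0, (Q - df) / Q * 100)

if i2 < 50:
    pooled = pooled_fixed                # fixed-effects estimate
else:
    # DerSimonian-Laird random effects: add the between-study variance tau^2
    tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_re = 1 / (var_log_or + tau2)
    pooled = np.sum(w_re * log_or) / np.sum(w_re)

print(f"MH pooled OR = {or_mh:.2f}, Q = {Q:.2f} (p = {p_het:.3f}), I2 = {i2:.1f}%")
print(f"Pooled OR used = {np.exp(pooled):.2f}")
```

In the published analysis these quantities come from Stata's metan command; the sketch is only meant to make the formulas explicit.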

In addition, we conducted pre-specified subgroup analyses to evaluate the potential effects of different methodological quality factors, to adjust for covariates, and to assess the robustness of our results. We examined whether effect estimates varied according to several predefined study characteristics, namely the type of operation, methodological quality, and the definition of kidney injury. Statistical analyses were performed with Stata 11.0 (StataCorp, College Station, TX, USA); the metan, metabias, heterogi, and metareg commands were used for the meta-analytic procedures. P-values < 0.05 were considered statistically significant.
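As an illustration of how the Egger asymmetry test works, the sketch below regresses the standardized effect (log OR divided by its standard error) on precision and tests whether the intercept differs from zero; the study-level estimates are hypothetical and this is not the authors' metabias output:

```python
# Illustrative sketch (hypothetical estimates): Egger's regression asymmetry test.
import numpy as np
import statsmodels.api as sm

log_or = np.array([0.41, 0.25, 0.63, 0.10, 0.55])   # hypothetical study log ORs
se = np.array([0.30, 0.15, 0.40, 0.12, 0.35])       # hypothetical standard errors

snd = log_or / se          # standardized effect (standard normal deviate)
precision = 1.0 / se       # precision

X = sm.add_constant(precision)          # intercept plus precision as regressor
fit = sm.OLS(snd, X).fit()
intercept, p_value = fit.params[0], fit.pvalues[0]
print(f"Egger intercept = {intercept:.2f}, p = {p_value:.3f}")
```

Begg's test instead applies an adjusted rank correlation between the standardized effects and their variances.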
