Monday, June 05, 2006

Meta-analysis - Effect Size Calculation

Before planning to combine study results, one needs to consider whether it is appropriate to combine these studies at all. This matters because the studies may differ so much in methodology that combining them would produce misleading or unreliable results. If all studies can't be combined, one can evaluate whether a subset with similar methodology can be. For example, it may not be appropriate to combine randomized controlled trials with uncontrolled trials that compare results before and after a treatment; however, it may be appropriate to combine the randomized trials only, or to analyze the two types of trials separately. If studies can't be combined meaningfully, one should not perform a meta-analysis and should instead stop at a systematic review of the literature.

Meta-analysis is performed in two steps or levels. The first step is to calculate an effect size for each individual study. The second step is to pool the results from the individual studies into an overall effect size. It is important to note from this two-step approach that in meta-analysis the data are not combined across trials as if they came from a single trial; in other words, meta-analysis can be considered an example of multilevel modeling.
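To make the two-step approach concrete, here is a minimal Python sketch of one common pooling method, the fixed-effect inverse-variance model (the study numbers are invented purely for illustration):

import math

# Step 1: each study contributes its own effect size and variance.
# Here the effect sizes are assumed to be log odds ratios.
studies = [
    {"effect": math.log(0.80), "variance": 0.04},
    {"effect": math.log(0.65), "variance": 0.09},
    {"effect": math.log(0.90), "variance": 0.02},
]

# Step 2: pool the study-level effects. Each study is weighted by the
# inverse of its variance, so more precise studies count for more.
weights = [1.0 / s["variance"] for s in studies]
pooled = sum(w * s["effect"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

print(f"Pooled odds ratio: {math.exp(pooled):.2f}")
print(f"95% CI: {math.exp(pooled - 1.96 * pooled_se):.2f}"
      f" to {math.exp(pooled + 1.96 * pooled_se):.2f}")

Note that each study keeps its own identity: only the study-level summaries and their precisions enter the pooled estimate, which is what makes this a two-level analysis rather than a single big trial.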

Selection of a summary statistic to express the effect size is probably one of the most important steps in performing a meta-analysis. The selection depends on the study question and the type of data at hand. Different summary statistics are used for trials with events data (binary outcomes) than for trials that report results on other scales.

In the case of binary outcomes, where there are only two possibilities (for example, dead or alive, sick or healthy), multiple summary statistics are available. The most commonly used are the odds ratio, the relative risk (risk ratio), the relative risk reduction, the absolute risk reduction, and the number needed to treat. Risk ratios are sometimes expressed as percentages; however, statistical analyses are performed on the original values, not the percentage values. A summary statistic should be easy to interpret and should have a reliable variance estimate, which is important in performing a meta-analysis. As the number needed to treat does not have such a variance estimate, it is not a good choice for a summary effect. Another important point is that odds ratios and relative risks are combined on the natural log scale. For a typical 2x2 table, with a and b the numbers of patients with and without the event in the treatment group, and c and d the corresponding numbers in the control group, the formulas for these statistics are:

Odds ratio (OR) = (a/b) / (c/d) = ad/bc
Relative risk (RR) = [a/(a+b)] / [c/(c+d)]
Relative risk reduction (RRR) = 1 - RR
Absolute risk reduction (ARR) = c/(c+d) - a/(a+b)
Number needed to treat (NNT) = 1/ARR
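As a rough illustration, the following Python function (the function name and example counts are my own) computes these statistics from a 2x2 table, along with the log odds ratio and its standard error, which supplies the variance estimate actually used when pooling:

import math

def binary_effect_sizes(a, b, c, d):
    """Summary statistics for a 2x2 table: a/b are events and
    non-events on treatment, c/d are events and non-events on control."""
    risk_treat = a / (a + b)
    risk_ctrl = c / (c + d)
    rr = risk_treat / risk_ctrl                   # relative risk
    odds_ratio = (a * d) / (b * c)                # odds ratio
    rrr = 1 - rr                                  # relative risk reduction
    arr = risk_ctrl - risk_treat                  # absolute risk reduction
    nnt = 1 / arr if arr != 0 else float("inf")   # number needed to treat
    # Odds ratios are pooled on the natural log scale; the standard
    # error of log(OR) takes the well-known form below.
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
    return {"OR": odds_ratio, "RR": rr, "RRR": rrr, "ARR": arr,
            "NNT": nnt, "log_OR": math.log(odds_ratio),
            "SE_log_OR": se_log_or}

# Example: 20/100 events on treatment versus 40/100 on control.
print(binary_effect_sizes(20, 80, 40, 60))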
If outcomes are on a continuous scale, the choice of summary statistic is either the mean difference (if all studies measured the outcome on the same scale) or the standardized mean difference (if studies used different scales). For example, change in blood pressure in response to a treatment is measured on the same scale in every study, so the summary statistic would be the mean difference. On the other hand, there are multiple scales for evaluating depression, and different studies may use different ones; in such a scenario, a standardized mean difference (the mean difference divided by the pooled standard deviation) is used to summarize trial results. However, pooled summary statistics obtained from a meta-analysis of trials summarized with standardized mean differences may be difficult to interpret, since they are expressed in standard-deviation units rather than on any clinical scale.
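As a sketch, here is one common form of the standardized mean difference, Cohen's d, in Python (the trial numbers are invented, and deliberately chosen so that two different depression scales yield the same standardized effect):

import math

def standardized_mean_difference(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d: the mean difference divided by the pooled standard
    deviation, so results from different scales become comparable."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                          / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Two hypothetical depression trials on different rating scales: the
# raw mean differences (-4 vs -8 points) are not comparable, but the
# standardized mean differences are (both about -0.76).
print(standardized_mean_difference(10.0, 5.0, 50, 14.0, 5.5, 50))    # scale A
print(standardized_mean_difference(20.0, 11.0, 40, 28.0, 10.0, 40))  # scale B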
