Statistical power (1 − β) relates to the ability of a study to detect an effect (or an association between two variables) when there is indeed an effect there to be detected. It is the probability of correctly rejecting the study’s null hypothesis when that hypothesis is false (see hypothesis testing). A high power (i.e. a low value for β) means that there is a low risk of making a Type II error; a low power (i.e. a high value for β) means that a meaningful clinical difference is more likely to remain in question after the study, because the study fails to reject the possibility of no difference.

Power is important in the design of comparative studies because it is used to determine the minimum sample size required, derived from the desired power, the minimum effect size and the desired significance level, and to judge whether it is reasonable and ethical to proceed. Conventionally a power of 80% (β = 0.2) is used; this convention rests more on historical precedent and pragmatic considerations than on statistical theory. Other values for power may be acceptable for studies other than trials, where risks to study participants are lower. If the statistical power of a study is low, its results may be questioned because the study may be considered too small to have detected any differences.
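The sample-size calculation described above can be sketched with the standard normal-approximation formula for comparing two means. The 80% power and 5% significance level are the conventions mentioned in the entry; the effect size and standard deviation below are purely illustrative assumptions.

```python
import math
from statistics import NormalDist

def sample_size_per_group(delta, sigma, power=0.80, alpha=0.05):
    """Minimum n per group for a two-sided, two-sample comparison of means
    (normal approximation).

    delta: minimum effect size (difference in means) to be detected
    sigma: assumed standard deviation of the outcome in each group
    """
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)  # critical value for the significance level (two-sided)
    z_beta = z(power)           # critical value for the desired power (1 - beta)
    n = 2 * ((z_alpha + z_beta) * sigma / delta) ** 2
    return math.ceil(n)

# Illustrative: detecting a difference of 0.5 SD with 80% power at the 5% level
print(sample_size_per_group(delta=0.5, sigma=1.0))  # 63 per group
```

Raising the desired power (or shrinking the minimum effect size to be detected) increases the required sample size, which is why these three quantities must be fixed together at the design stage.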

How to cite: Power [online]. York: York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/power/