When statistically testing the results of a comparative study, two types of error can be made. A Type I error occurs when the null hypothesis (see hypothesis testing) is rejected although it is true (i.e. there is no difference between treatment groups). A Type II error occurs when the null hypothesis fails to be rejected by the statistical test although it is false (i.e. there is indeed a difference between treatment groups). The Type I (false positive) error rate is controlled by the significance level (α): setting a stringent threshold (low α) makes it less likely that a significant result, rejecting the null hypothesis of no difference between the groups, will occur when there actually is no difference. By contrast, the Type II (false negative) error rate is linked to the study's power (1 − β): a well-powered study (low β) makes it less likely that the null hypothesis fails to be rejected when there actually is a difference between the groups. With stochastic data it is generally not possible to eliminate both types of error, and at a fixed sample size a trade-off frequently needs to be made between the two: tightening the significance level reduces the chance of a Type I error but, other things being equal, increases the chance of a Type II error.
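The two error rates, and the trade-off between them, can be illustrated with a small simulation. The sketch below (a hypothetical example, not part of the glossary entry) uses a two-sided z-test with known standard deviation: it estimates the Type I error rate by testing data generated under the null hypothesis, the Type II error rate by testing data generated under a true difference, and then repeats the latter with a stricter significance level to show that lowering α raises β.

```python
import random
import statistics

def z_test_reject(sample, mu0=0.0, sigma=1.0, z_crit=1.96):
    """Two-sided z-test of H0: mean == mu0, with sigma assumed known.
    z_crit = 1.96 corresponds to a significance level of alpha = 0.05."""
    n = len(sample)
    z = (statistics.mean(sample) - mu0) / (sigma / n ** 0.5)
    return abs(z) > z_crit

random.seed(1)
trials, n = 10_000, 25

# Type I error rate: data generated under H0 (true mean really is 0),
# so every rejection here is a false positive.
type_i = sum(
    z_test_reject([random.gauss(0.0, 1.0) for _ in range(n)])
    for _ in range(trials)
) / trials

# Type II error rate: data generated under H1 (true mean is 0.5),
# so every failure to reject here is a false negative.
type_ii = sum(
    not z_test_reject([random.gauss(0.5, 1.0) for _ in range(n)])
    for _ in range(trials)
) / trials

# The trade-off: tightening alpha to 0.01 (z_crit = 2.576) lowers the
# Type I rate but raises the Type II rate for the same data-generating process.
type_ii_strict = sum(
    not z_test_reject([random.gauss(0.5, 1.0) for _ in range(n)],
                      z_crit=2.576)
    for _ in range(trials)
) / trials

print(f"Type I rate            ~ {type_i:.3f}")         # close to alpha = 0.05
print(f"Type II rate           ~ {type_ii:.3f}")        # beta; power = 1 - beta
print(f"Type II rate, alpha=1% ~ {type_ii_strict:.3f}") # larger beta
```

The estimated Type I rate sits near the chosen α, and the Type II rate grows when α is made stricter, which is the trade-off described above.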

How to cite: Type I and Type II Errors [online]. (2016). York; York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/type-i-and-type-ii-errors/