The sensitivity of a diagnostic (or screening) test indicates how often the test will give a positive result when the individual being tested does indeed have the condition of interest. It is computed as the ratio of true positives (those with the disease who test positive) to the sum of true positives and false negatives (those with the disease who test negative), and is usually expressed as a percentage. Together with specificity, sensitivity is a central component of diagnostic accuracy.

If there is a choice of cut-off value for a diagnostic test (i.e. the threshold at which a result is classed as ‘positive’), then sensitivity and specificity will usually need to be ‘traded off’ against each other to identify the optimum cut-off, depending on the seriousness of the consequences for those incorrectly diagnosed with or without the condition. A very high sensitivity may only be obtainable by reducing specificity, i.e. by accepting a larger number of false positives (those without the disease who test positive and so undergo further investigation or treatment from which they cannot benefit).

Sensitivity (and specificity) is also applied to the ability of literature search strategies to identify all relevant research reports (sensitivity) and rule out irrelevant ones (specificity).
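Using TP, FN, TN and FP for true positives, false negatives, true negatives and false positives, the calculations described above can be written as:

\[
\text{Sensitivity} = \frac{TP}{TP + FN},
\qquad
\text{Specificity} = \frac{TN}{TN + FP}
\]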
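The cut-off trade-off can be illustrated with a minimal sketch. The test scores, disease labels and cut-off values below are invented purely for demonstration and do not come from any real dataset:

```python
# Minimal sketch of the sensitivity/specificity trade-off at different cut-offs.
# All scores, labels and cut-offs are hypothetical, for illustration only.

def sensitivity(tp: int, fn: int) -> float:
    """True positive rate: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True negative rate: TN / (TN + FP)."""
    return tn / (tn + fp)

# Each tuple is (test score, has condition); higher scores suggest disease.
results = [(0.2, False), (0.4, False), (0.5, True), (0.6, False),
           (0.7, True), (0.8, True), (0.9, True)]

for cutoff in (0.3, 0.5, 0.7):
    # A score at or above the cut-off counts as a 'positive' test result.
    tp = sum(1 for score, diseased in results if diseased and score >= cutoff)
    fn = sum(1 for score, diseased in results if diseased and score < cutoff)
    tn = sum(1 for score, diseased in results if not diseased and score < cutoff)
    fp = sum(1 for score, diseased in results if not diseased and score >= cutoff)
    print(f"cut-off {cutoff}: sensitivity {sensitivity(tp, fn):.2f}, "
          f"specificity {specificity(tn, fp):.2f}")
```

Lowering the cut-off classifies more individuals as positive, raising sensitivity (fewer false negatives) at the cost of specificity (more false positives); raising the cut-off does the reverse.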
How to cite: Sensitivity (Diagnostic) [online]. York: York Health Economics Consortium; 2016. https://yhec.co.uk/glossary/sensitivity-diagnostic/