Definition
In statistics, the sensitivity of a test is its ability to return a positive result when the condition it tests for is actually present.
Specificity, on the other hand, is the ability of the test to return a negative result when the condition is absent.
These two notions are essential for assessing the quality of a test, particularly in epidemiology.
Evaluating a test
To estimate these two parameters, the test must be applied to individuals whose true status is already known. The results are classified into four categories:
True positives: the number of people who test positive and are sick
False positives: the number of people who test positive and are not sick
False negatives: the number of people who test negative and are sick
True negatives: the number of people who test negative and are not sick
The goal is to obtain as many true positives and true negatives as possible, and as few false positives and false negatives as possible. In medicine, it is particularly important to minimize the number of false negatives, which can have serious consequences. However, reducing the number of false negatives usually comes at the cost of a higher number of false positives.
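As an illustration, here is a minimal Python sketch of how the four counts could be tallied from paired observations; the names (tally_outcomes, test_results, is_sick) are illustrative and not part of the original text.

```python
def tally_outcomes(test_results, is_sick):
    """Count true/false positives and negatives from paired observations.

    test_results: list of booleans, True if the test came back positive
    is_sick: list of booleans, True if the person actually has the disease
    """
    tp = fp = fn = tn = 0
    for positive, sick in zip(test_results, is_sick):
        if positive and sick:
            tp += 1  # true positive: test positive, actually sick
        elif positive and not sick:
            fp += 1  # false positive: test positive, not sick
        elif not positive and sick:
            fn += 1  # false negative: test negative, actually sick
        else:
            tn += 1  # true negative: test negative, not sick
    return tp, fp, fn, tn
```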
We can represent the results in a table such as the following:

                 Sick                    Not sick
Test positive    True positives (TP)     False positives (FP)
Test negative    False negatives (FN)    True negatives (TN)
From this table, we can measure the sensitivity and specificity of the test.
Sensitivity is the probability of testing positive when you are sick:

Sensitivity = TP / (TP + FN)

Specificity is the probability of testing negative when you are not sick:

Specificity = TN / (TN + FP)
In medicine and in epidemiology, the objective is therefore generally to have a high sensitivity.
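Continuing the sketch above (the function names remain illustrative), the two quantities can be computed directly from the four counts:

```python
def sensitivity(tp, fn):
    # Probability of testing positive given that the person is sick
    return tp / (tp + fn)

def specificity(tn, fp):
    # Probability of testing negative given that the person is not sick
    return tn / (tn + fp)

# Example usage with the tally from the previous sketch:
# tp, fp, fn, tn = tally_outcomes(test_results, is_sick)
# print(sensitivity(tp, fn), specificity(tn, fp))
```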
Example
Consider a test that detects disease A. 1000 patients are tested: 500 are sick and 500 are not.
We perform the test and obtain the following results:
We can therefore calculate the sensitivity and specificity: