Hypothesis testing distinguishes between two kinds of error: Type I and Type II errors.

A Type I error occurs when a true Null Hypothesis is incorrectly *rejected*, and is also known as a *false positive*. In such a scenario, the calculated p-value falls below the significance level (0.05 being a common choice) purely by chance, even though there is no real effect. The significance level is precisely the probability of making a Type I error when the Null Hypothesis is true.
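This can be checked by simulation. The sketch below (assuming NumPy and SciPy are available; sample sizes and the trial count are arbitrary choices) repeatedly runs a two-sample t-test on data where the Null Hypothesis is true by construction, so every rejection is a Type I error:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05
n_trials = 2000

# Both samples are drawn from the SAME distribution, so the Null
# Hypothesis is true; any rejection is a Type I error (false positive).
false_positives = 0
for _ in range(n_trials):
    a = rng.normal(0.0, 1.0, 30)
    b = rng.normal(0.0, 1.0, 30)
    _, p = stats.ttest_ind(a, b)
    if p < alpha:
        false_positives += 1

type_i_rate = false_positives / n_trials
print(f"Observed Type I error rate: {type_i_rate:.3f}")
```

The observed rate should come out close to the chosen significance level of 0.05, illustrating that the significance level is the Type I error rate.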

A Type II error occurs when a false Null Hypothesis is incorrectly *not rejected* (loosely, "accepted"), and is also known as a *false negative*. In such a scenario, the calculated p-value lies above the significance level (0.05 being a common choice) even though a real effect exists and the p-value should have fallen below the significance level. Such an error can have more serious ramifications than a Type I error: for example, when screening a patient for potentially malignant cells, with the Null Hypothesis being that no malignant cells are present, a Type II error means a malignancy goes undetected.
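The Type II error rate can be estimated the same way. In this sketch (again assuming NumPy and SciPy; the effect size of 0.5 and the sample sizes are arbitrary choices) the second sample has a genuinely shifted mean, so the Null Hypothesis is false and every failure to reject is a Type II error:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha = 0.05
n_trials = 2000

# The second sample has a genuinely shifted mean, so the Null
# Hypothesis is false; any failure to reject is a Type II error
# (false negative).
false_negatives = 0
for _ in range(n_trials):
    a = rng.normal(0.0, 1.0, 30)
    b = rng.normal(0.5, 1.0, 30)
    _, p = stats.ttest_ind(a, b)
    if p >= alpha:
        false_negatives += 1

type_ii_rate = false_negatives / n_trials
print(f"Observed Type II error rate: {type_ii_rate:.3f}")
```

Unlike the Type I error rate, this rate is not fixed by the significance level: it depends on the true effect size and the sample size, which is why it must be estimated or controlled by design rather than simply chosen.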

The power of a test is one minus the probability of a Type II error, and should ideally be as close to one as possible. If there are two statistical tests of the same Null Hypothesis, the test with greater power is more likely to reject the Null Hypothesis when it is false; in other words, the chance of a Type II error (incorrectly failing to reject the Null Hypothesis) is lower for the more powerful test.
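One common way to obtain a more powerful test of the same Null Hypothesis is simply to collect more data. The sketch below (assuming NumPy and SciPy; the effect size and sample sizes are illustrative choices) estimates the power of the same t-test at two sample sizes, showing that the larger-sample test rejects the false Null Hypothesis far more often:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
alpha, effect, n_trials = 0.05, 0.5, 2000

def estimated_power(n_per_group):
    """Fraction of simulated experiments that correctly reject the
    (false) Null Hypothesis, i.e. an estimate of 1 - P(Type II error)."""
    rejections = 0
    for _ in range(n_trials):
        a = rng.normal(0.0, 1.0, n_per_group)
        b = rng.normal(effect, 1.0, n_per_group)
        if stats.ttest_ind(a, b).pvalue < alpha:
            rejections += 1
    return rejections / n_trials

low_power = estimated_power(15)
high_power = estimated_power(100)
print(f"Estimated power at n=15 per group:  {low_power:.2f}")
print(f"Estimated power at n=100 per group: {high_power:.2f}")
```

Both configurations test the same Null Hypothesis at the same significance level, so their Type I error rates match; the difference shows up entirely in the Type II error rate, which the higher-powered configuration drives down.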