Misunderstandings of p-values

Misunderstandings of p-values are an important problem in scientific research and scientific education. P-values are often used or interpreted incorrectly.[1] Comparing the p-value to a significance level yields one of two results: either the null hypothesis is rejected (which does not imply that the null hypothesis is false), or the null hypothesis cannot be rejected at that significance level (which does not imply that the null hypothesis is true). In Ronald Fisher's formulation, there is a logical disjunction: a low p-value means either that the null hypothesis is true and a highly improbable event has occurred, or that the null hypothesis is false.
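A concrete calculation can make the definition precise. The following is a minimal illustrative sketch (not from the cited sources): a one-sided p-value for a coin suspected of bias toward heads, computed exactly from the binomial distribution under the null hypothesis of a fair coin.

```python
import math

def binomial_p_value(heads: int, flips: int) -> float:
    """One-sided p-value: the probability of observing at least `heads`
    heads in `flips` tosses of a fair coin (the null hypothesis)."""
    tail = sum(math.comb(flips, k) for k in range(heads, flips + 1))
    return tail / 2 ** flips

# Observing 14 heads in 20 flips:
p = binomial_p_value(14, 20)
print(f"p = {p:.4f}")  # p ≈ 0.0577
# At the conventional 0.05 level the null hypothesis of a fair coin is
# not rejected, which does not imply that the coin is fair.
```

Note that the p-value here quantifies how surprising the data would be if the coin were fair; it says nothing directly about the probability that the coin is fair.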

Common misunderstandings of p-values

The following list corrects several common misconceptions regarding p-values:[1][2][3]

  1. The p-value is not the probability that the null hypothesis is true, or the probability that the alternative hypothesis is false.[1] A p-value can indicate the degree of compatibility between a dataset and a particular hypothetical explanation (such as a null hypothesis). Specifically, the p-value is the probability of obtaining an effect at least as extreme as the one observed, given that the null hypothesis is true. It should not be confused with the posterior probability that the null hypothesis is true given the observed effect (see prosecutor's fallacy). In fact, frequentist statistics does not attach probabilities to hypotheses.
  2. The p-value is not the probability that the observed effects were produced by random chance alone.[1] The p-value is computed under the assumption that a certain model, usually the null hypothesis, is true. This means that the p-value is a statement about the relation of the data to that hypothesis, not a statement about the hypothesis itself.[1]
  3. The 0.05 significance level is merely a convention.[2] The 0.05 significance level (alpha level) is often used as the boundary between a statistically significant and a statistically non-significant p-value. However, this does not imply that there is generally a scientific reason to consider results on opposite sides of the 0.05 threshold as qualitatively different.[2]
  4. The p-value does not indicate the size or importance of the observed effect.[1] That is, a small p-value can still be observed for an effect that is not meaningful or important. In fact, the larger the sample size, the smaller the minimum effect needed to produce a statistically significant p-value (see effect size).
  5. In the absence of other evidence, the information provided by a p-value is limited. A p-value near 0.05 has been called "weak evidence" against the null hypothesis.[1]
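Point 4 above can be demonstrated numerically. The sketch below (illustrative, not from the cited sources) holds a small standardized effect fixed and grows the sample: the z-statistic for a one-sample mean test scales as z = d·√n, so the same tiny effect eventually yields an arbitrarily small p-value.

```python
import math

def two_sided_p_from_z(z: float) -> float:
    """Two-sided p-value for a standard-normal test statistic,
    using the error function from the standard library."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# A fixed, small standardized effect (d = 0.1) becomes "significant"
# once the sample is large enough, even though the effect size is
# unchanged and may be practically unimportant.
d = 0.1
for n in (100, 400, 1600):
    z = d * math.sqrt(n)
    print(n, round(two_sided_p_from_z(z), 5))
```

With n = 100 the p-value is about 0.32; with n = 400 it falls below 0.05, for the identical effect size.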

The p-value fallacy

The p-value fallacy is a common misinterpretation of the p-value whereby a binary classification of hypotheses as true or false is made, based on whether or not the corresponding p-values are statistically significant. The term "p-value fallacy" was coined in 1999 by Steven N. Goodman.[4][5]

This fallacy is contrary to the intent of the statisticians who originally supported the use of p-values in research.[2][4] As described by Sterne and Smith, "An arbitrary division of results, into 'significant' or 'non-significant' according to the P value, was not the intention of the founders of statistical inference."[2] Common interpretations of p-values that rest on such a division undermine the ability to distinguish statistical results from scientific conclusions, and discourage the consideration of background knowledge such as previous experimental results.[4] It has been argued that the correct use of p-values is to guide behavior, not to classify results;[6] that is, to inform a researcher's choice of which hypothesis to accept, not to provide an inference about which hypothesis is true.[4]

Representing probabilities of hypotheses

The p-value does not in itself allow reasoning about the probabilities of hypotheses. That requires considering multiple hypotheses, or a range of hypotheses, with a prior distribution over them, in which case Bayesian statistics can be used. There, one uses a likelihood function over all candidate hypotheses, combined with the prior distribution, instead of a p-value for a single null hypothesis. The p-value describes a property of data when compared to a specific null hypothesis; it is not a property of the hypothesis itself. For the same reason, p-values do not give the probability that the data were produced by random chance alone.[1]
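The gap between a p-value and the probability of a hypothesis can be made concrete with Bayes' theorem. The numbers below are hypothetical, chosen only for illustration: a prior probability that the null hypothesis is true, a test's power, and its significance level.

```python
# Hypothetical numbers, for illustration only.
prior_h0 = 0.9   # prior probability that the null hypothesis is true
power    = 0.8   # P(reject H0 | H0 false)
alpha    = 0.05  # P(reject H0 | H0 true), the significance level

# Bayes' theorem: probability that H0 is true GIVEN a significant result.
p_reject = alpha * prior_h0 + power * (1 - prior_h0)
posterior_h0 = alpha * prior_h0 / p_reject
print(round(posterior_h0, 3))  # 0.36
```

Even though every individual test uses the 0.05 level, under these assumptions a significant result still leaves a 36% probability that the null hypothesis is true, far from the 5% a naive reading of the p-value might suggest.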

Multiple comparisons problem

See also: p-hacking and Type I error

The multiple comparisons problem occurs when one considers a set of statistical inferences simultaneously[7] or infers a subset of parameters selected based on the observed values.[8] It is also known as the look-elsewhere effect. Errors in inference, including confidence intervals that fail to include their corresponding population parameters or hypothesis tests that incorrectly reject the null hypothesis, are more likely to occur when one considers the set as a whole. Several statistical techniques have been developed to prevent this from happening, allowing significance levels for single and multiple comparisons to be directly compared. These techniques generally require a stricter (lower) significance threshold for individual comparisons, to compensate for the number of inferences being made.

The webcomic xkcd satirized misunderstandings of p-values by portraying scientists investigating the claim that eating jellybeans caused acne.[9][10][11][12] After failing to find a significant (p < 0.05) correlation between eating jellybeans and acne, the scientists investigate 20 different colors of jellybeans individually, without adjusting for multiple comparisons. They find one color (green) nominally associated with acne (p < 0.05). The results are then reported by a newspaper as indicating that green jellybeans are linked to acne at a 95% confidence level—as if green were the only color tested. In fact, if 20 independent tests are conducted at the 0.05 significance level and all null hypotheses are true, there is a 64.2% chance of obtaining at least one false positive and the expected number of false positives is 1 (i.e. 0.05 × 20).
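The 64.2% figure can be checked with a short simulation. This sketch relies on the standard fact that under a true null hypothesis p-values are uniformly distributed on [0, 1], so each test is a false positive with probability 0.05; the trial counts and seed are arbitrary.

```python
import random

random.seed(42)  # fixed seed for reproducibility
TRIALS, TESTS, ALPHA = 100_000, 20, 0.05

# For each simulated "jellybean experiment", run 20 null tests and
# record whether at least one comes out falsely significant.
hits = sum(
    any(random.random() < ALPHA for _ in range(TESTS))
    for _ in range(TRIALS)
)
print(round(hits / TRIALS, 3))  # close to 1 - 0.95**20 ≈ 0.642
```

The simulated rate agrees with the closed-form value 1 − 0.95^20 ≈ 0.642 to within sampling noise.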

In general, the family-wise error rate (FWER), the probability of obtaining at least one false positive, increases with the number of tests performed. When all null hypotheses are true, the FWER for m independent tests, each conducted at significance level α, is:[11]

    FWER = 1 − (1 − α)^m
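This formula, together with one common corrective technique, the Bonferroni correction (testing each comparison at level α/m), can be sketched as follows; the code is illustrative:

```python
def fwer(alpha: float, m: int) -> float:
    """Family-wise error rate for m independent tests at level alpha,
    assuming every null hypothesis is true: 1 - (1 - alpha)**m."""
    return 1 - (1 - alpha) ** m

print(round(fwer(0.05, 20), 4))       # 0.6415 (the jellybean example)
# Bonferroni: test each of the 20 comparisons at 0.05 / 20 = 0.0025,
# which keeps the overall FWER at or below the nominal 0.05.
print(round(fwer(0.05 / 20, 20), 4))  # 0.0488
```

The correction trades per-test power for control of the family-wise rate, which is why individual comparisons face a stricter threshold.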

References

  1. Wasserstein RL, Lazar NA (2016). "The ASA's statement on p-values: context, process, and purpose". The American Statistician. 70 (2): 129–133. doi:10.1080/00031305.2016.1154108.
  2. Sterne JA, Smith GD (2001). "Sifting the evidence–what's wrong with significance tests?". BMJ. 322 (7280): 226–231. doi:10.1136/bmj.322.7280.226. PMC 1119478. PMID 11159626.
  3. Schervish MJ (1996). "P values: What they are and what they are not". The American Statistician. 50 (3): 203. doi:10.2307/2684655. JSTOR 2684655.
  4. Goodman SN (1999). "Toward evidence-based medical statistics. 1: The P value fallacy". Annals of Internal Medicine. 130 (12): 995–1004. doi:10.7326/0003-4819-130-12-199906150-00008. PMID 10383371.
  5. Sellke T, Bayarri MJ, Berger JO (2001). "Calibration of p values for testing precise null hypotheses". The American Statistician. 55 (1): 62–71. doi:10.1198/000313001300339950.
  6. Dixon P (2003). "The p-value fallacy and how to avoid it". Canadian Journal of Experimental Psychology. 57 (3): 189–202. doi:10.1037/h0087425. PMID 14596477.
  7. Miller RG (1981). Simultaneous Statistical Inference (2nd ed.). New York: Springer Verlag. ISBN 0-387-90548-0.
  8. Benjamini Y (2010). "Simultaneous and selective inference: Current successes and future challenges". Biometrical Journal. 52 (6): 708–721. doi:10.1002/bimj.200900299. PMID 21154895.
  9. Munroe R (6 April 2011). "Significant". xkcd. Retrieved 2016-02-22.
  10. Colquhoun D (2014). "An investigation of the false discovery rate and the misinterpretation of p-values" (PDF). Royal Society Open Science. 1 (3): 140216. Bibcode:2014RSOS....1n0216C. doi:10.1098/rsos.140216.
  11. Reinhart A (2015). Statistics Done Wrong: The Woefully Complete Guide. No Starch Press. pp. 47–48. ISBN 978-1-59327-620-1.
  12. Barsalou M (2 June 2014). "Hypothesis testing and p values". Minitab blog. Retrieved 2016-02-22.

This article is issued from Wikipedia - version of the 11/22/2016. The text is available under the Creative Commons Attribution/Share Alike but additional terms may apply for the media files.