Multiple comparisons problem

  1. History

  2. Definition

     Classification of multiple hypothesis tests

  3. Controlling procedures

  4. Large-scale multiple testing

     Assessing whether any alternative hypotheses are true 

  5. See also

  6. References

  7. Further reading

In statistics, the multiple comparisons, multiplicity or multiple testing problem occurs when one considers a set of statistical inferences simultaneously[1] or infers a subset of parameters selected based on the observed values.[2] In certain fields it is known as the look-elsewhere effect.

The more inferences are made, the more likely erroneous inferences are to occur. Several statistical techniques have been developed to prevent this from happening, allowing significance levels for single and multiple comparisons to be directly compared. These techniques generally require a stricter significance threshold for individual comparisons, so as to compensate for the number of inferences being made.

History

The interest in the problem of multiple comparisons began in the 1950s with the work of Tukey and Scheffé. Other methods, such as the closed testing procedure (Marcus et al., 1976) and the Holm–Bonferroni method (1979), later emerged. In 1995, work on the false discovery rate began. In 1996, the first conference on multiple comparisons took place in Israel. This was followed by conferences around the world, usually taking place about every two years.[3]

Definition

Multiple comparisons arise when a statistical analysis involves multiple simultaneous statistical tests, each of which has a potential to produce a "discovery." A stated confidence level generally applies only to each test considered individually, but often it is desirable to have a confidence level for the whole family of simultaneous tests.[4] Failure to compensate for multiple comparisons can have important real-world consequences, as illustrated by the following examples:

  • Suppose the treatment is a new way of teaching writing to students, and the control is the standard way of teaching writing. Students in the two groups can be compared in terms of grammar, spelling, organization, content, and so on. As more attributes are compared, it becomes increasingly likely that the treatment and control groups will appear to differ on at least one attribute due to random sampling error alone.
  • Suppose we consider the efficacy of a drug in terms of the reduction of any one of a number of disease symptoms. As more symptoms are considered, it becomes increasingly likely that the drug will appear to be an improvement over existing drugs in terms of at least one symptom.

In both examples, as the number of comparisons increases, it becomes more likely that the groups being compared will appear to differ in terms of at least one attribute. Our confidence that a result will generalize to independent data should generally be weaker if it is observed as part of an analysis that involves multiple comparisons, rather than an analysis that involves only a single comparison.

For example, if one test is performed at the 5% level and the corresponding null hypothesis is true, there is only a 5% chance of incorrectly rejecting the null hypothesis. However, if 100 tests are conducted and all corresponding null hypotheses are true, the expected number of incorrect rejections (also known as false positives or Type I errors) is 5. If the tests are statistically independent from each other, the probability of at least one incorrect rejection is 99.4%.

The multiple comparisons problem also applies to confidence intervals. A single confidence interval with 95% coverage probability will contain the population parameter in 95% of experiments. However, if one considers 100 confidence intervals simultaneously, each with 95% coverage probability, the expected number of non-covering intervals is 5. If the intervals are statistically independent from each other, the probability that at least one interval does not contain the population parameter is 99.4%.
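The two figures quoted above can be checked directly; the following is a minimal Python sketch (not part of the source) of the arithmetic for 100 independent tests at the 5% level:

```python
# Probability of at least one false positive among m independent tests,
# each performed at level alpha, when every null hypothesis is true.
m, alpha = 100, 0.05
expected_false_positives = m * alpha          # 5.0
p_at_least_one = 1 - (1 - alpha) ** m         # 1 - 0.95**100 ~= 0.994
print(expected_false_positives, round(p_at_least_one, 3))
```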

Techniques have been developed to prevent the inflation of false positive rates and non-coverage rates that occur with multiple statistical tests.

Classification of multiple hypothesis tests

The following table defines the possible outcomes when simultaneously testing m null hypotheses, of which m_0 (an unknown parameter) are true:

                                      Null hypothesis is true    Alternative hypothesis is true    Total
  Test is declared significant        V                          S                                 R
  Test is declared non-significant    U                          T                                 m − R
  Total                               m_0                        m − m_0                           m

  • V is the number of false positives (Type I errors, also called "false discoveries")
  • S is the number of true positives (also called "true discoveries")
  • T is the number of false negatives (Type II errors)
  • U is the number of true negatives
  • R = V + S is the total number of rejected null hypotheses ("discoveries", whether true or false)

R is an observable random variable, while S, T, U, and V are unobservable random variables.

Controlling procedures

Further information: Family-wise error rate § Controlling procedures. See also: False coverage rate § Controlling procedures and False discovery rate § Controlling procedures.

If m independent comparisons are performed, the family-wise error rate (FWER) is given by

\bar{\alpha} = 1 - \left(1 - \alpha_{\mathrm{per\,comparison}}\right)^m .

Hence, unless the tests are perfectly positively dependent (i.e., identical), \bar{\alpha} increases as the number of comparisons increases.

If we do not assume that the comparisons are independent, then we can still say:

\bar{\alpha} \le m \cdot \alpha_{\mathrm{per\,comparison}} ,

which follows from Boole's inequality. Example: with m = 6 tests each performed at \alpha_{\mathrm{per\,comparison}} = 0.05, we have 0.2649 = 1 - (1 - 0.05)^6 \le 0.05 \times 6 = 0.3.

There are different ways to assure that the family-wise error rate is at most \alpha. The most conservative method, which is free of dependence and distributional assumptions, is the Bonferroni correction \alpha_{\mathrm{per\,comparison}} = \alpha / m.

A marginally less conservative correction can be obtained by solving the equation for the family-wise error rate of m independent comparisons for \alpha_{\mathrm{per\,comparison}}. This yields the Šidák correction, \alpha_{\mathrm{per\,comparison}} = 1 - (1 - \alpha)^{1/m}. Another procedure is the Holm–Bonferroni method, which uniformly delivers more power than the simple Bonferroni correction, by testing only the lowest p-value (i = 1) against the strictest criterion, and the higher p-values (i > 1) against progressively less strict criteria:[5]

\alpha_{\mathrm{per\,comparison}} = \alpha / (m - i + 1) .
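For illustration, here is a minimal Python sketch (not from the source; the p-values are made up) of the three per-comparison criteria just described:

```python
import numpy as np

def per_comparison_thresholds(alpha, m):
    """Per-comparison significance thresholds for a family of m tests."""
    bonferroni = alpha / m                       # alpha / m
    sidak = 1 - (1 - alpha) ** (1 / m)           # solves 1 - (1 - a)^m = alpha
    return bonferroni, sidak

def holm_bonferroni(pvalues, alpha=0.05):
    """Boolean mask of hypotheses rejected by the Holm-Bonferroni step-down method."""
    p = np.asarray(pvalues)
    order = np.argsort(p)                        # sort p-values ascending
    m = len(p)
    reject = np.zeros(m, dtype=bool)
    for rank, idx in enumerate(order, start=1):  # rank 1 = smallest p-value
        if p[idx] <= alpha / (m - rank + 1):     # progressively less strict criteria
            reject[idx] = True
        else:
            break                                # stop at the first non-rejection
    return reject

# Hypothetical p-values:
pvals = [0.001, 0.008, 0.039, 0.041, 0.20]
print(per_comparison_thresholds(0.05, len(pvals)))  # (0.01, 0.0102...)
print(holm_bonferroni(pvals))                       # [ True  True False False False]
```

With these illustrative p-values, Holm–Bonferroni rejects the two smallest, while a plain Bonferroni threshold of 0.01 would reject the same two; the gain in power appears when several small p-values cluster near the threshold.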


Multiple testing correction refers to re-calculating probabilities obtained from a statistical test which was repeated multiple times. In order to retain a prescribed family-wise error rate α in an analysis involving more than one comparison, the error rate for each comparison must be more stringent than α. Boole's inequality implies that if each of m tests is performed to have type I error rate α/m, the total error rate will not exceed α. This is called the Bonferroni correction, and is one of the most commonly used approaches for multiple comparisons.

In some situations, the Bonferroni correction is substantially conservative, i.e., the actual family-wise error rate is much less than the prescribed level α. This occurs when the test statistics are highly dependent (in the extreme case where the tests are perfectly dependent, the family-wise error rate with no multiple comparisons adjustment and the per-test error rates are identical). For example, in fMRI analysis,[6][7] tests are done on over 100,000 voxels in the brain. The Bonferroni method would require p-values to be smaller than .05/100000 to declare significance. Since adjacent voxels tend to be highly correlated, this threshold is generally too stringent.

Because simple techniques such as the Bonferroni method can be conservative, there has been a great deal of attention paid to developing better techniques, such that the overall rate of false positives can be maintained without excessively inflating the rate of false negatives. Such methods can be divided into general categories:

  • Methods where total alpha can be proved to never exceed 0.05 (or some other chosen value) under any conditions. These methods provide "strong" control against Type I error, in all conditions including a partially correct null hypothesis.
  • Methods where total alpha can be proved not to exceed 0.05 except under certain defined conditions.
  • Methods which rely on an omnibus test before proceeding to multiple comparisons. Typically these methods require a significant ANOVA, MANOVA, or Tukey's range test. These methods generally provide only "weak" control of Type I error, except for certain numbers of hypotheses.
  • Empirical methods, which control the proportion of Type I errors adaptively, utilizing correlation and distribution characteristics of the observed data.

The advent of computerized resampling methods, such as bootstrapping and Monte Carlo simulations, has given rise to many techniques in the latter category. In some cases where exhaustive permutation resampling is performed, these tests provide exact, strong control of Type I error rates; in other cases, such as bootstrap sampling, they provide only approximate control.
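As a sketch of the resampling idea (not the source's own code; data, sample sizes, and the Welch-type statistic are illustrative assumptions), the single-step max-statistic permutation adjustment in the style of Westfall and Young compares each observed statistic with the permutation distribution of the maximum statistic across all tests:

```python
import numpy as np

def westfall_young_maxT(x, y, n_perm=10_000, seed=0):
    """Single-step maxT adjusted p-values for comparing two groups on m endpoints.

    x, y: arrays of shape (n_x, m) and (n_y, m); each column is one comparison.
    """
    rng = np.random.default_rng(seed)

    def t_stats(a, b):
        # absolute Welch-type t statistic for every column at once
        se = np.sqrt(a.var(axis=0, ddof=1) / len(a) + b.var(axis=0, ddof=1) / len(b))
        return np.abs(a.mean(axis=0) - b.mean(axis=0)) / se

    observed = t_stats(x, y)
    pooled = np.vstack([x, y])
    n_x = len(x)
    max_null = np.empty(n_perm)
    for i in range(n_perm):
        perm = rng.permutation(len(pooled))      # randomly relabel the two groups
        max_null[i] = t_stats(pooled[perm[:n_x]], pooled[perm[n_x:]]).max()

    # adjusted p-value: how often the maximum null statistic exceeds each observed one
    return (1 + (max_null[:, None] >= observed).sum(axis=0)) / (n_perm + 1)

# Hypothetical example: two groups of 8 samples, 1000 endpoints, no real effects
rng = np.random.default_rng(1)
adj_p = westfall_young_maxT(rng.normal(size=(8, 1000)),
                            rng.normal(size=(8, 1000)), n_perm=2000)
print((adj_p < 0.05).sum())   # usually 0: the family-wise error rate is controlled
```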

Large-scale multiple testing

Traditional methods for multiple comparisons adjustments focus on correcting for modest numbers of comparisons, often in an analysis of variance. A different set of techniques has been developed for "large-scale multiple testing", in which thousands or even greater numbers of tests are performed. For example, in genomics, technologies such as microarrays allow expression levels of tens of thousands of genes and genotypes for millions of genetic markers to be measured. Particularly in the field of genetic association studies, there has been a serious problem with non-replication, where a result is strongly statistically significant in one study but fails to be replicated in a follow-up study. Such non-replication can have many causes, but it is widely considered that failure to fully account for the consequences of making multiple comparisons is one of them.[8]

In different branches of science, multiple testing is handled in different ways. It has been argued that if statistical tests are only performed when there is a strong basis for expecting the result to be true, multiple comparisons adjustments are not necessary.[9] It has also been argued that use of multiple testing corrections is an inefficient way to perform empirical research, since multiple testing adjustments control false positives at the potential expense of many more false negatives. On the other hand, it has been argued that advances in measurement and information technology have made it far easier to generate large datasets for exploratory analysis, often leading to the testing of large numbers of hypotheses with no prior basis for expecting many of the hypotheses to be true. In this situation, very high false positive rates are expected unless multiple comparisons adjustments are made.

For large-scale testing problems where the goal is to provide definitive results, the familywise error rate remains the most accepted parameter for ascribing significance levels to statistical tests. Alternatively, if a study is viewed as exploratory, or if significant results can be easily re-tested in an independent study, control of the false discovery rate (FDR)[10][11][12] is often preferred. The FDR, loosely defined as the expected proportion of false positives among all significant tests, allows researchers to identify a set of "candidate positives" that can be more rigorously evaluated in a follow-up study.[13]
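To make the FDR idea concrete, the following is a minimal Python sketch of the Benjamini–Hochberg step-up procedure from reference [10] (the p-values are hypothetical, and the procedure's guarantee assumes independent or positively dependent tests):

```python
import numpy as np

def benjamini_hochberg(pvalues, q=0.05):
    """Benjamini-Hochberg step-up procedure: boolean mask of rejected hypotheses.

    Controls the expected proportion of false positives among rejections at level q.
    """
    p = np.asarray(pvalues)
    m = len(p)
    order = np.argsort(p)
    # find the largest rank k with p_(k) <= (k / m) * q
    below = p[order] <= (np.arange(1, m + 1) / m) * q
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])    # index of the largest qualifying p-value
        reject[order[: k + 1]] = True       # reject every hypothesis up to that rank
    return reject

# Hypothetical p-values: the two smallest are declared discoveries at q = 0.05
print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.20], q=0.05))
```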

The practice of trying many unadjusted comparisons in the hope of finding a significant one, whether applied unintentionally or deliberately, is a known problem sometimes called "p-hacking."[14][15]

Assessing whether any alternative hypotheses are true

A basic question faced at the outset of analyzing a large set of testing results is whether there is evidence that any of the alternative hypotheses are true. One simple meta-test that can be applied when it is assumed that the tests are independent of each other is to use the Poisson distribution as a model for the number of significant results at a given level α that would be found when all null hypotheses are true. If the observed number of positives is substantially greater than what should be expected, this suggests that there are likely to be some true positives among the significant results. For example, if 1000 independent tests are performed, each at level α = 0.05, we expect 0.05 × 1000 = 50 significant tests to occur when all null hypotheses are true. Based on the Poisson distribution with mean 50, the probability of observing more than 62 significant tests is less than 0.05, so if more than 62 significant results are observed, it is very likely that some of them correspond to situations where the alternative hypothesis holds. A drawback of this approach is that it overstates the evidence that some of the alternative hypotheses are true when the test statistics are positively correlated, which commonly occurs in practice. On the other hand, the approach remains valid even in the presence of correlation among the test statistics, as long as the Poisson distribution can be shown to provide a good approximation for the number of significant results. This scenario arises, for instance, when mining significant frequent itemsets from transactional datasets. Furthermore, a careful two-stage analysis can bound the FDR at a pre-specified level.[16]
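A quick numerical check of this example (a Python sketch using scipy, not part of the source) finds the smallest count whose upper-tail probability under a Poisson distribution with mean 50 falls below 0.05:

```python
from scipy.stats import poisson

m, alpha = 1000, 0.05
lam = m * alpha                      # 50 false positives expected under the global null
c = 0
while poisson.sf(c, lam) >= 0.05:    # sf(c) = P(X > c) for X ~ Poisson(lam)
    c += 1
print(lam, c, poisson.sf(c, lam))    # smallest c with P(X > c) < 0.05
```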

Another common approach that can be used in situations where the test statistics can be standardized to Z-scores is to make a normal quantile plot of the test statistics. If the observed quantiles are markedly more dispersed than the normal quantiles, this suggests that some of the significant results may be true positives.
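A sketch of such a plot with simulated Z-scores (the mixture of null statistics and a few shifted "true effects" is hypothetical, not from the source); points rising above the diagonal in the upper tail indicate extra dispersion of the kind described above:

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

# Hypothetical test statistics: mostly nulls, plus a few true effects shifted to the right
rng = np.random.default_rng(1)
z = np.concatenate([rng.normal(0, 1, 950),    # null Z-scores
                    rng.normal(3, 1, 50)])    # true positives

stats.probplot(z, dist="norm", plot=plt)      # normal quantile (Q-Q) plot
plt.title("Normal Q-Q plot of test statistics")
plt.show()
```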

See also

Key concepts
  • Familywise error rate
  • False positive rate
  • False discovery rate (FDR)
  • False coverage rate (FCR)
  • Interval estimation
  • Post-hoc analysis
  • Experimentwise error rate
General methods of alpha adjustment for multiple comparisons
  • Closed testing procedure
  • Bonferroni correction
  • Boole–Bonferroni bound
  • Holm–Bonferroni method
Related concepts
  • Testing hypotheses suggested by the data
  • Texas sharpshooter fallacy
  • Model selection
  • Look-elsewhere effect

References

1. ^ Miller, R. G. (1981). Simultaneous Statistical Inference (2nd ed.). Springer Verlag New York. ISBN 978-0-387-90548-8.
2. ^ Benjamini, Y. (2010). "Simultaneous and selective inference: Current successes and future challenges". Biometrical Journal. 52 (6): 708–721. doi:10.1002/bimj.200900299. PMID 21154895.
3. ^ 
4. ^ Kutner, Michael; Nachtsheim, Christopher; Neter, John; Li, William (2005). Applied Linear Statistical Models. pp. 744–745.
5. ^ Aickin, M.; Gensler, H. (May 1996). "Adjusting for multiple testing when reporting research results: the Bonferroni vs Holm methods". Am J Public Health. 86 (5): 726–728. doi:10.2105/ajph.86.5.726. PMID 8629727. PMC 1380484.
6. ^ Logan, B. R.; Rowe, D. B. (2004). "An evaluation of thresholding techniques in fMRI analysis". NeuroImage. 22 (1): 95–108. doi:10.1016/j.neuroimage.2003.12.047. PMID 15110000.
7. ^ Logan, B. R.; Geliazkova, M. P.; Rowe, D. B. (2008). "An evaluation of spatial thresholding techniques in fMRI analysis". Human Brain Mapping. 29 (12): 1379–1389. doi:10.1002/hbm.20471. PMID 18064589.
8. ^ Qu, Hui-Qi; Tien, Matthew; Polychronakos, Constantin (2010). "Statistical significance in genetic association studies". Clinical and Investigative Medicine. Medecine Clinique et Experimentale. 33 (5): E266–E270. ISSN 0147-958X. PMID 20926032. PMC 3270946.
9. ^ Rothman, Kenneth J. (1990). "No Adjustments Are Needed for Multiple Comparisons". Epidemiology. 1 (1): 43–46. doi:10.1097/00001648-199001000-00010. PMID 2081237. JSTOR 20065622.
10. ^ Benjamini, Yoav; Hochberg, Yosef (1995). "Controlling the false discovery rate: a practical and powerful approach to multiple testing". Journal of the Royal Statistical Society, Series B. 57 (1): 125–133. JSTOR 2346101.
11. ^ Storey, J. D.; Tibshirani, Robert (2003). "Statistical significance for genome-wide studies". PNAS. 100 (16): 9440–9445. doi:10.1073/pnas.1530509100. PMID 12883005. PMC 170937. JSTOR 3144228. Bibcode:2003PNAS..100.9440S.
12. ^ Efron, Bradley; Tibshirani, Robert; Storey, John D.; Tusher, Virginia (2001). "Empirical Bayes analysis of a microarray experiment". Journal of the American Statistical Association. 96 (456): 1151–1160. doi:10.1198/016214501753382129. JSTOR 3085878.
13. ^ Noble, William S. (2009). "How does multiple testing correction work?". Nature Biotechnology. 27 (12): 1135–1137. doi:10.1038/nbt1209-1135. ISSN 1087-0156. PMID 20010596. PMC 2907892.
14. ^ Young, S. S.; Karr, A. (2011). "Deming, data and observational studies". Significance. 8 (3): 116–120. doi:10.1111/j.1740-9713.2011.00506.x. http://www.niss.org/sites/default/files/Young%20Karr%20Obs%20Study%20Problem.pdf
15. ^ Smith, G. D.; Shah, E. (2002). "Data dredging, bias, or confounding". BMJ. 325 (7378): 1437–1438. doi:10.1136/bmj.325.7378.1437. PMID 12493654. PMC 1124898.
16. ^ Kirsch, A.; Mitzenmacher, M.; Pietracaprina, A.; Pucci, G.; Upfal, E.; Vandin, F. (June 2012). "An Efficient Rigorous Approach for Identifying Statistically Significant Frequent Itemsets". Journal of the ACM. 59 (3): 12:1–12:22. doi:10.1145/2220357.2220359. arXiv:1002.1104.

Further reading

  • F. Betz, T. Hothorn, P. Westfall (2010), Multiple Comparisons Using R, CRC Press
  • S. Dudoit and M. J. van der Laan (2008), Multiple Testing Procedures with Application to Genomics, Springer
  • B. Phipson and G. K. Smyth (2010), Permutation P-values Should Never Be Zero: Calculating Exact P-values when Permutations are Randomly Drawn, Statistical Applications in Genetics and Molecular Biology, Vol. 9, Iss. 1, Article 39, doi:10.2202/1544-6155.1585
  • P. H. Westfall and S. S. Young (1993), Resampling-based Multiple Testing: Examples and Methods for p-Value Adjustment, Wiley
  • P. Westfall, R. Tobias, R. Wolfinger (2011) Multiple comparisons and multiple testing using SAS, 2nd edn, SAS Institute
  • A gallery of examples of implausible correlations sourced by data dredging

