Estimation statistics


{{distinguish|Estimation theory}}{{other uses | Estimation (disambiguation)}}Estimation statistics is a data analysis framework that uses a combination of effect sizes, confidence intervals, precision planning, and meta-analysis to plan experiments, analyze data and interpret results.[1] It is distinct from null hypothesis significance testing (NHST), which is considered to be less informative.[2][3] Estimation statistics, or simply estimation, is also known as the new statistics,[3] a distinction introduced in the fields of psychology, medical research, life sciences and a wide range of other experimental sciences where NHST still remains prevalent,[5] despite estimation statistics having been recommended as preferable for several decades.[3][4]

The primary aim of estimation methods is to report an effect size (a point estimate) along with its confidence interval, the latter of which is related to the precision of the estimate.[8] The confidence interval summarizes a range of likely values of the underlying population effect. Proponents of estimation see reporting a P value as an unhelpful distraction from the important business of reporting an effect size with its confidence interval,[5] and believe that estimation should replace significance testing for data analysis.[6]

History

Physics has long employed a weighted-averages method similar to meta-analysis.[7]

Estimation statistics in the modern era started with the development of the standardized effect size by Jacob Cohen in the 1960s. Research synthesis using estimation statistics was pioneered by Gene V. Glass with the development of the method of meta-analysis in the 1970s.[8] Estimation methods have been refined since by Larry Hedges, Michael Borenstein, Doug Altman, Martin Gardner, Geoff Cumming and others. The systematic review, in conjunction with meta-analysis, is a related technique with widespread use in medical research. There are now over 60,000 citations to "meta-analysis" in PubMed. Despite the widespread adoption of meta-analysis, the estimation framework is still not routinely used in primary biomedical research.[9]

In the 1990s, editor Kenneth Rothman banned the use of p-values from the journal Epidemiology; compliance was high among authors but this did not substantially change their analytical thinking.[10]

More recently, estimation methods are being adopted in fields such as neuroscience,[11] psychology education[12] and psychology.[13]

The Publication Manual of the American Psychological Association recommends estimation over hypothesis testing.[14] The Uniform Requirements for Manuscripts Submitted to Biomedical Journals document makes a similar recommendation: "Avoid relying solely on statistical hypothesis testing, such as P values, which fail to convey important information about effect size."[15]

Methodology

Many significance tests have an estimation counterpart;[16] in almost every case, the test result (or its p-value) can simply be substituted with the effect size and a precision estimate. For example, instead of using Student's t-test, the analyst can compare two independent groups by calculating the mean difference and its 95% confidence interval. Corresponding methods can be used for a paired t-test and multiple comparisons. Similarly, for a regression analysis, an analyst would report the coefficient of determination (R²) and the model equation instead of the model's p-value.
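This substitution can be sketched in a few lines of code (a minimal illustration: the function name and sample data are invented, and a bootstrap percentile interval is used here in place of the classical t-based interval):

```python
import random
import statistics

def mean_diff_ci(group_a, group_b, n_boot=5000, alpha=0.05, seed=0):
    """Mean difference (point estimate) with a bootstrap percentile CI."""
    rng = random.Random(seed)
    diff = statistics.mean(group_b) - statistics.mean(group_a)
    boots = []
    for _ in range(n_boot):
        resampled_a = rng.choices(group_a, k=len(group_a))
        resampled_b = rng.choices(group_b, k=len(group_b))
        boots.append(statistics.mean(resampled_b) - statistics.mean(resampled_a))
    boots.sort()
    lo = boots[int(n_boot * alpha / 2)]
    hi = boots[int(n_boot * (1 - alpha / 2)) - 1]
    return diff, (lo, hi)

control = [4.1, 3.8, 5.0, 4.4, 4.7, 3.9, 4.2, 4.6]
treated = [5.2, 5.8, 4.9, 6.1, 5.5, 5.0, 5.7, 5.4]
diff, (lo, hi) = mean_diff_ci(control, treated)
print(f"mean difference = {diff:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

The report is then the interval itself ("the treatment raised the measurement by about 1.1 units, 95% CI roughly [0.7, 1.5]") rather than a binary significance verdict.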

However, proponents of estimation statistics warn against reporting only a few numbers. Rather, it is advised to analyze and present data using data visualization.[2][4][8] Examples of appropriate visualizations include the scatter plot for regression, and Gardner-Altman plots for two independent groups.[17] While conventional data-group plots (bar charts, box plots, and violin plots) do not display the comparison, estimation plots add a second axis to explicitly visualize the effect size.[18]

Gardner-Altman plot

The Gardner-Altman mean difference plot was first described by Martin Gardner and Doug Altman in 1986;[17] it is a statistical graph designed to display data from two independent groups.[4] There is also a version suitable for paired data. The key instructions to make this chart are as follows: (1) display all observed values for both groups side-by-side; (2) place a second axis on the right, shifted to show the mean difference scale; and (3) plot the mean difference with its confidence interval as a marker with error bars.[3] Gardner-Altman plots can be generated with custom code using ggplot2, [https://seaborn.pydata.org/generated/seaborn.swarmplot.html seaborn], or [https://github.com/ACCLAB/DABEST-python DABEST]; alternatively, the analyst can use user-friendly software like the Estimation Stats app.
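The three steps above can also be sketched directly with matplotlib (a minimal, hand-rolled illustration, not the DABEST implementation; the data and the CI half-width of 0.45 are invented for display purposes):

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt
import statistics

control = [4.1, 3.8, 5.0, 4.4, 4.7, 3.9, 4.2, 4.6]
treated = [5.2, 5.8, 4.9, 6.1, 5.5, 5.0, 5.7, 5.4]
mean_diff = statistics.mean(treated) - statistics.mean(control)

fig, ax = plt.subplots(figsize=(5, 4))
# (1) all observed values for both groups, side by side
ax.plot([0] * len(control), control, "o", color="C0")
ax.plot([1] * len(treated), treated, "o", color="C1")
ax.set_xlim(-0.5, 2.5)
ax.set_xticks([0, 1, 2])
ax.set_xticklabels(["control", "treated", "difference"])
ax.set_ylabel("measurement")

# (2) a second axis on the right, shifted so its zero aligns with the control mean
diff_ax = ax.twinx()
diff_ax.set_ylim(ax.get_ylim()[0] - statistics.mean(control),
                 ax.get_ylim()[1] - statistics.mean(control))
diff_ax.set_ylabel("mean difference")

# (3) the mean difference with (illustrative, precomputed) 95% CI error bars
diff_ax.errorbar([2], [mean_diff], yerr=0.45, fmt="^", color="k")
fig.savefig("gardner_altman.png")
```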

Cumming plot

For multiple groups, Geoff Cumming introduced the use of a secondary panel to plot two or more mean differences and their confidence intervals, placed below the observed values panel[3]; this arrangement enables easy comparison of mean differences ('deltas') over several data groupings. Cumming plots can be generated with the [https://thenewstatistics.com/itns/esci/ ESCI package], [https://github.com/ACCLAB/DABEST-python DABEST], or the Estimation Stats app.
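The two-panel arrangement can be sketched as follows (a minimal illustration, not the ESCI or DABEST implementation; the group data and the CI half-width of 0.6 are invented):

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt
import statistics

groups = {
    "control": [4.1, 3.8, 5.0, 4.4, 4.7],
    "drug A": [5.2, 5.8, 4.9, 6.1, 5.5],
    "drug B": [4.6, 4.9, 5.3, 4.4, 5.1],
}
ref = statistics.mean(groups["control"])

# upper panel: observed values; lower panel: deltas vs. control
fig, (top, bottom) = plt.subplots(2, 1, sharex=True, figsize=(5, 5))
for x, (name, values) in enumerate(groups.items()):
    top.plot([x] * len(values), values, "o")
    if name != "control":
        delta = statistics.mean(values) - ref
        # illustrative, precomputed CI half-width
        bottom.errorbar([x], [delta], yerr=0.6, fmt="^", color="k")
bottom.axhline(0, color="grey", lw=0.5)  # zero-difference reference line
top.set_ylabel("measurement")
bottom.set_ylabel("mean difference")
bottom.set_xticks(range(len(groups)))
bottom.set_xticklabels(list(groups))
fig.savefig("cumming_plot.png")
```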

Other methodologies

In addition to the mean difference, there are numerous other effect size types, each with its own relative benefits. Major types include Cohen's d-type effect sizes, and the coefficient of determination (R²) for regression analysis. For non-normal distributions, there are a number of more [https://garstats.wordpress.com/2016/05/02/robust-effect-sizes-for-2-independent-groups/ robust effect sizes], including Cliff's delta and the Kolmogorov-Smirnov statistic.
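Cliff's delta, for instance, is simple to compute directly from its definition (a minimal sketch; the function name is illustrative):

```python
def cliffs_delta(xs, ys):
    """Cliff's delta: P(x > y) - P(x < y) over all cross-group pairs.

    Ranges from -1 (every y exceeds every x) to +1 (every x exceeds
    every y); 0 indicates complete overlap between the groups.
    """
    greater = sum(1 for x in xs for y in ys if x > y)
    lesser = sum(1 for x in xs for y in ys if x < y)
    return (greater - lesser) / (len(xs) * len(ys))

print(cliffs_delta([6, 7, 8], [1, 2, 3]))  # complete separation: 1.0
```

Because it depends only on pairwise orderings, not on means or variances, it is robust to outliers and makes no normality assumption.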

Flaws in hypothesis testing

{{main article|Statistical hypothesis testing#Criticism}}{{see also|p-value#Criticism}}

In hypothesis testing, the primary objective of statistical calculations is to obtain a p-value, the probability of seeing an obtained result, or a more extreme result, when assuming the null hypothesis is true. If the p-value is low (usually < 0.05), the statistical practitioner is then encouraged to reject the null hypothesis. Proponents of estimation reject the validity of hypothesis testing[19][20] for the following reasons, among others:

  • P-values are easily and commonly misinterpreted. For example, the p-value is often mistakenly thought of as 'the probability that the null hypothesis is true.'
  • The null hypothesis is always wrong for every set of observations: there is always some effect, even if it is minuscule.[21]
  • Hypothesis testing produces arbitrarily dichotomous yes-no answers, while discarding important information about magnitude.[22]
  • Any particular p-value arises through the interaction of the effect size, the sample size (all things being equal, a larger sample size produces a smaller p-value) and sampling error.[23]
  • At low power, simulation reveals that sampling error makes p-values extremely volatile.[24]
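The volatility in the last point is easy to demonstrate by simulation (a minimal sketch: the data are synthetic, and the p-value uses a normal approximation in place of the t distribution, so it is approximate):

```python
import math
import random
import statistics

def approx_p_value(a, b):
    """Two-sided p-value for a difference in means (normal approximation)."""
    se = math.sqrt(statistics.variance(a) / len(a)
                   + statistics.variance(b) / len(b))
    z = (statistics.mean(b) - statistics.mean(a)) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

rng = random.Random(1)
p_values = []
for _ in range(50):  # 50 identical, low-powered experiments
    control = [rng.gauss(0.0, 1.0) for _ in range(10)]
    treated = [rng.gauss(0.5, 1.0) for _ in range(10)]  # true effect: 0.5 SD
    p_values.append(approx_p_value(control, treated))

print(f"min p = {min(p_values):.4f}, max p = {max(p_values):.4f}")
```

Even though every simulated experiment samples the same true effect, the resulting p-values are scattered across a wide range.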

Benefits of estimation statistics

Advantages of confidence intervals

Confidence intervals behave in a predictable way. By definition, 95% confidence intervals have a 95% chance of capturing the underlying population mean (μ). This feature remains constant with increasing sample size; what changes is that the interval becomes smaller (more precise). In addition, 95% confidence intervals are also 83% prediction intervals: one experiment's confidence interval has an 83% chance of capturing any future experiment's mean.[19] As such, knowing a single experiment's 95% confidence intervals gives the analyst a plausible range for the population mean, and plausible outcomes of any subsequent replication experiments.
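This coverage behavior can be checked by simulation (a minimal sketch with a synthetic population; the population SD is treated as known here, so the 1.96 normal quantile applies):

```python
import random

rng = random.Random(0)
mu, sigma, n = 100.0, 15.0, 25
half_width = 1.96 * sigma / n ** 0.5  # known-sigma 95% interval

covered = 0
trials = 2000
for _ in range(trials):
    sample_mean = sum(rng.gauss(mu, sigma) for _ in range(n)) / n
    if sample_mean - half_width <= mu <= sample_mean + half_width:
        covered += 1

print(f"coverage: {covered / trials:.3f}")  # close to the nominal 0.95
```

Raising n shrinks `half_width` but leaves the ~95% capture rate unchanged, which is the "constant with increasing sample size" property described above.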

Evidence-based statistics

Psychological studies of the perception of statistics reveal that reporting interval estimates leaves a more accurate perception of the data than reporting p-values.[25]

Precision planning

The precision of an estimate is formally defined as 1/variance, and, like power, increases (improves) with increasing sample size. Also like power, a high level of precision is expensive; research grant applications would ideally include precision/cost analyses. Proponents of estimation believe precision planning should replace power analysis, since statistical power itself is conceptually linked to significance testing.[19]
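In practice, precision planning amounts to choosing the sample size for a target interval half-width rather than for a target power (a minimal sketch assuming a known SD and a normal-quantile interval; the function name is illustrative):

```python
import math

def sample_size_for_margin(sigma, margin, z=1.96):
    """Smallest n whose 95% CI half-width (z * sigma / sqrt(n)) <= margin."""
    return math.ceil((z * sigma / margin) ** 2)

# e.g. population SD of 10 units, desired half-width of 2 units:
print(sample_size_for_margin(sigma=10, margin=2))  # 97
```

Halving the target margin roughly quadruples the required sample size, which is where the precision/cost trade-off mentioned above comes from.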

See also

{{Portal|Statistics}}
  • Effect size
  • Cohen's h
  • Interval estimation
  • Meta-analysis
  • Statistical significance

References

1. ^{{cite web|last=Ellis|first=Paul|title=Effect size FAQ|url=http://effectsizefaq.com/}}
2. ^{{cite web|last=Cohen|first=Jacob|title=The earth is round (p<.05)|url=https://www.ics.uci.edu/~sternh/courses/210/cohen94_pval.pdf}}
3. ^{{cite book|last=Altman|first=Douglas|title=Practical Statistics For Medical Research|year=1991|publisher=Chapman and Hall|location=London}}
4. ^{{cite book|title=Statistics with Confidence|year=2000|publisher=Wiley-Blackwell|location=London|editor=Douglas Altman}}
5. ^{{cite web|last=Ellis|first=Paul|title=Why can't I just judge my result by looking at the p value?|url=http://effectsizefaq.com/2010/05/31/why-can%E2%80%99t-i-just-judge-my-result-by-looking-at-the-p-value/|accessdate=5 June 2013|date=2010-05-31}}
6. ^{{Cite journal|last=Claridge-Chang|first=Adam|last2=Assam|first2=Pryseley N|title=Estimation statistics should replace significance testing|journal=Nature Methods|volume=13|issue=2|pages=108–109|doi=10.1038/nmeth.3729|pmid=26820542|year=2016}}
7. ^{{cite journal|last=Hedges|first=Larry|title=How hard is hard science, how soft is soft science|journal=American Psychologist|year=1987|volume=42|issue=5|page=443|doi=10.1037/0003-066x.42.5.443|citeseerx=10.1.1.408.2317}}
8. ^{{cite book|last=Hunt|first=Morton|title=How science takes stock: the story of meta-analysis|year=1997|publisher=The Russell Sage Foundation|location=New York|isbn=978-0-87154-398-1}}
9. ^{{cite journal|last=Button|first=Katherine |author2=John P. A. Ioannidis |author3=Claire Mokrysz |author4=Brian A. Nosek |author5=Jonathan Flint |author6=Emma S. J. Robinson |author7=Marcus R. Munafò|title=Power failure: why small sample size undermines the reliability of neuroscience|journal=Nature Reviews Neuroscience|year=2013|volume=14|issue=5 |pages=365–76 |doi=10.1038/nrn3475 |pmid=23571845}}
10. ^{{cite journal|last=Fidler|first=Fiona|title=Editors Can Lead Researchers to Confidence Intervals, but Can't Make Them Think|journal=Psychological Science|volume=15|issue=2|pages=119–126|url=http://pss.sagepub.com/content/15/2/119.abstract|doi=10.1111/j.0963-7214.2004.01502008.x|pmid=14738519|year=2004}}
11. ^{{Cite journal|last=Yildizoglu|first=Tugce|last2=Weislogel|first2=Jan-Marek|last3=Mohammad|first3=Farhan|last4=Chan|first4=Edwin S.-Y.|last5=Assam|first5=Pryseley N.|last6=Claridge-Chang|first6=Adam|date=2015-12-08|title=Estimating Information Processing in a Memory System: The Utility of Meta-analytic Methods for Genetics|journal=PLOS Genet|volume=11|issue=12|pages=e1005718|doi=10.1371/journal.pgen.1005718|issn=1553-7404|pmc=4672901|pmid=26647168}}
12. ^{{cite journal|last=Hentschke|first=Harald|author2=Maik C. Stüttgen|title=Computation of measures of effect size for neuroscience data sets|journal=European Journal of Neuroscience|date=December 2011|volume=34|issue=12|pages=1887–1894|doi=10.1111/j.1460-9568.2011.07902.x|pmid=22082031}}
13. ^{{cite web|last=Cumming|first=Geoff|title=ESCI (Exploratory Software for Confidence Intervals)|url=http://www.latrobe.edu.au/psy/research/projects/esci}}
14. ^{{cite web|title=Publication Manual of the American Psychological Association, Sixth Edition|url=http://www.apastyle.org/manual/index.aspx|accessdate=17 May 2013}}
15. ^{{cite web|title=Uniform Requirements for Manuscripts Submitted to Biomedical Journals|url=http://www.icmje.org/manuscript_1prepare.html|accessdate=17 May 2013|deadurl=yes|archiveurl=https://web.archive.org/web/20130515225111/http://www.icmje.org/manuscript_1prepare.html|archivedate=15 May 2013|df=}}
16. ^{{Cite book|title=Introduction to the New Statistics: Estimation, Open Science, and Beyond|last=Cumming|first=Geoff|last2=Calin-Jageman|first2=Robert|publisher=Routledge|year=2016|isbn=978-1138825529|location=|pages=}}
17. ^{{Cite journal|last=Gardner|first=M. J.|last2=Altman|first2=D. G.|date=1986-03-15|title=Confidence intervals rather than P values: estimation rather than hypothesis testing|journal=British Medical Journal (Clinical Research Ed.)|volume=292|issue=6522|pages=746–750|issn=0267-0623|pmc=1339793|pmid=3082422}}
18. ^{{Cite journal|url=https://www.biorxiv.org/content/early/2018/07/26/377978|title=Moving beyond P values: Everyday data analysis with estimation plots|journal=bioRxiv|pages=377978|last=Ho|first=Joses|last2=Tumkaya|last3=Aryal|last4=Choi|last5=Claridge-Chang|doi=10.1101/377978|year=2018}}
19. ^{{cite book|last=Cumming|first=Geoff|title=Understanding The New Statistics: Effect Sizes, Confidence Intervals, and Meta-Analysis|year=2012|publisher=Routledge|location=New York}}
20. ^{{cite journal|last=Cohen|first=Jacob|title=What I have Learned (So Far)|journal=American Psychologist|year=1990|volume=45|issue=12|page=1304|doi=10.1037/0003-066x.45.12.1304}}
21. ^{{cite journal|last=Cohen|first=Jacob|title=The earth is round (p < .05).|journal=American Psychologist|year=1994|volume=49|issue=12|pages=997–1003|doi=10.1037/0003-066X.49.12.997}}
22. ^{{cite book|last=Ellis|first=Paul|title=The Essential Guide to Effect Sizes: Statistical Power, Meta-Analysis, and the Interpretation of Research Results|year=2010|publisher=Cambridge University Press|location=Cambridge}}
23. ^{{cite book|title=The Significance Test Controversy: A Reader|year=2006|publisher=Aldine Transaction|isbn=978-0202308791|editor=Denton E. Morrison, Ramon E. Henkel}}
24. ^{{cite web|last=Cumming|first=Geoff|title=Dance of the p values|url=https://www.youtube.com/watch?v=ez4DgdurRPg}}
25. ^{{cite journal|last=Beyth-Marom|first=R|author2=Fidler, F. |author3=Cumming, G. |title=Statistical cognition: Towards evidence-based practice in statistics and statistics education|journal=Statistics Education Research Journal|year=2008|volume=7|pages=20–39|accessdate=}}
{{Statistics}}

