Chi-squared test
A chi-squared test, also written as {{math|χ2}} test, is any statistical hypothesis test in which the sampling distribution of the test statistic is a chi-squared distribution when the null hypothesis is true. Without other qualification, 'chi-squared test' is often used as short for Pearson's chi-squared test. The chi-squared test is used to determine whether there is a significant difference between the expected frequencies and the observed frequencies in one or more categories. In the standard applications of this test, the observations are classified into mutually exclusive classes, and there is some theory, or null hypothesis, which gives the probability that any observation falls into the corresponding class. The purpose of the test is to evaluate how likely the observations that are made would be, assuming the null hypothesis is true.

Chi-squared tests are often constructed from a sum of squared errors, or through the sample variance. Test statistics that follow a chi-squared distribution arise from an assumption of independent normally distributed data, which is valid in many cases due to the central limit theorem. A chi-squared test can be used to attempt rejection of the null hypothesis that the data are independent. Also considered a chi-squared test is a test in which this is asymptotically true, meaning that the sampling distribution (if the null hypothesis is true) can be made to approximate a chi-squared distribution as closely as desired by making the sample size large enough.

== History ==
In the 19th century, statistical analytical methods were mainly applied in biological data analysis, and it was customary for researchers to assume that observations followed a normal distribution; notable examples include Sir George Airy and Professor Merriman, whose works were criticized by Karl Pearson in his 1900 paper.[1]

By the end of the 19th century, Pearson had noticed the existence of significant skewness within some biological observations. In order to model the observations regardless of whether they were normal or skewed, Pearson, in a series of articles published from 1893 to 1916,[2][3][4][5] devised the Pearson distribution, a family of continuous probability distributions that includes the normal distribution and many skewed distributions, and proposed a method of statistical analysis consisting of using the Pearson distribution to model the observations and performing a test of goodness of fit to determine how well the model and the observations really fit.

== Pearson's chi-squared test ==
{{see also|Pearson's chi-squared test}}
In 1900, Pearson published a paper[1] on the {{math|χ2}} test which is considered to be one of the foundations of modern statistics.[6] In this paper, Pearson investigated a test of goodness of fit.

Suppose that {{mvar|n}} observations in a random sample from a population are classified into {{mvar|k}} mutually exclusive classes with respective observed numbers {{mvar|xi}} (for {{math|i {{=}} 1,2,…,k}}), and a null hypothesis gives the probability {{mvar|pi}} that an observation falls into the {{mvar|i}}th class. So we have the expected numbers {{math|mi {{=}} npi}} for all {{mvar|i}}, where

<math>\sum_{i=1}^k m_i = n\sum_{i=1}^k p_i = n.</math>

Pearson proposed that, under the circumstance of the null hypothesis being correct, as {{math|n → ∞}} the limiting distribution of the quantity given below is the {{math|χ2}} distribution:

<math>X^2 = \sum_{i=1}^k \frac{(x_i - m_i)^2}{m_i} = \sum_{i=1}^k \frac{x_i^2}{m_i} - n.</math>
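As a minimal illustration of this statistic, the Python sketch below tests a hypothetical die for fairness; the counts are invented, and scipy.stats.chisquare is one standard implementation of the same computation:

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import chisquare

# Hypothetical example: 100 rolls of a die, testing the null hypothesis
# that all six faces are equally likely (p_i = 1/6).
observed = np.array([16, 18, 16, 14, 12, 24])  # x_i for k = 6 classes
n = observed.sum()
p = np.full(6, 1 / 6)      # p_i under the null hypothesis
expected = n * p           # m_i = n * p_i

# Pearson's statistic: X^2 = sum over i of (x_i - m_i)^2 / m_i
x2 = ((observed - expected) ** 2 / expected).sum()

# SciPy computes the same statistic and a p-value from the chi-squared
# distribution with k - 1 = 5 degrees of freedom.
stat, pvalue = chisquare(observed, f_exp=expected)
assert np.isclose(x2, stat)
</syntaxhighlight>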
Pearson dealt first with the case in which the expected numbers {{mvar|mi}} are large enough known numbers in all cells, assuming every {{mvar|xi}} may be taken as normally distributed, and reached the result that, in the limit as {{mvar|n}} becomes large, {{math|X{{isup|2}}}} follows the {{math|χ2}} distribution with {{math|k − 1}} degrees of freedom.

However, Pearson next considered the case in which the expected numbers depended on parameters that had to be estimated from the sample, and suggested that, with the notation of {{mvar|mi}} being the true expected numbers and {{math|m′i}} being the estimated expected numbers, the difference

<math>X^2 - X'^2 = \sum_{i=1}^k \frac{x_i^2}{m_i} - \sum_{i=1}^k \frac{x_i^2}{m'_i}</math>

will usually be positive and small enough to be omitted. In conclusion, Pearson argued that if we regarded {{math|X′{{isup|2}}}} as also distributed as the {{math|χ2}} distribution with {{math|k − 1}} degrees of freedom, the error in this approximation would not affect practical decisions. This conclusion caused some controversy in practical applications and was not settled for 20 years, until Fisher's 1922 and 1924 papers.[7][8]

== Other examples of chi-squared tests ==
One test statistic that follows a chi-squared distribution exactly is the test that the variance of a normally distributed population has a given value based on a sample variance. Such tests are uncommon in practice because the true variance of the population is usually unknown. However, there are several statistical tests where the chi-squared distribution is approximately valid:

=== Fisher's exact test ===
For an exact test used in place of the {{nowrap|2 × 2}} chi-squared test for independence, see Fisher's exact test.

=== Binomial test ===
For an exact test used in place of the {{nowrap|2 × 1}} chi-squared test for goodness of fit, see Binomial test.
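A brief sketch of both exact alternatives using SciPy; the counts here are invented for illustration, and binomtest requires SciPy ≥ 1.7, where it replaced the older binom_test:

<syntaxhighlight lang="python">
from scipy.stats import fisher_exact, binomtest

# Fisher's exact test on a 2 × 2 contingency table (hypothetical counts).
table = [[8, 2],
         [1, 5]]
odds_ratio, p_fisher = fisher_exact(table, alternative="two-sided")

# Binomial test: 7 successes in 20 trials against a null probability of 0.5,
# i.e. an exact version of the 2 × 1 goodness-of-fit problem.
result = binomtest(k=7, n=20, p=0.5, alternative="two-sided")
p_binom = result.pvalue
</syntaxhighlight>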
== Yates's correction for continuity ==
{{Main|Yates's correction for continuity}}
Using the chi-squared distribution to interpret Pearson's chi-squared statistic requires one to assume that the discrete probability of observed binomial frequencies in the table can be approximated by the continuous chi-squared distribution. This assumption is not quite correct and introduces some error. To reduce the error in approximation, Frank Yates suggested a correction for continuity that adjusts the formula for Pearson's chi-squared test by subtracting 0.5 from the absolute difference between each observed value and its expected value in a {{nowrap|2 × 2}} contingency table.[9] This reduces the chi-squared value obtained and thus increases its p-value.

== Chi-squared test for variance in a normal population ==
If a sample of size {{math|n}} is taken from a population having a normal distribution, then there is a result (see distribution of the sample variance) which allows a test to be made of whether the variance of the population has a pre-determined value. For example, a manufacturing process might have been in stable condition for a long period, allowing a value for the variance to be determined essentially without error. Suppose that a variant of the process is being tested, giving rise to a small sample of {{math|n}} product items whose variation is to be tested. The test statistic {{math|T}} in this instance could be set to be the sum of squares about the sample mean, divided by the nominal value for the variance (i.e. the value to be tested as holding). Then {{math|T}} has a chi-squared distribution with {{math|n − 1}} degrees of freedom. For example, if the sample size is 21, the acceptance region for {{math|T}} with a significance level of 5% is between 9.59 and 34.17.
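A minimal sketch, assuming SciPy and NumPy, that reproduces this acceptance region and evaluates {{math|T}} on simulated data; the nominal variance and sample values are invented for illustration:

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import chi2

n = 21
df = n - 1  # degrees of freedom

# Two-sided acceptance region at the 5% significance level.
lower = chi2.ppf(0.025, df)   # approximately 9.59
upper = chi2.ppf(0.975, df)   # approximately 34.17

# T = (sum of squares about the sample mean) / (nominal variance).
sigma0_sq = 4.0  # hypothetical nominal variance to be tested
rng = np.random.default_rng(0)
sample = rng.normal(loc=10.0, scale=2.0, size=n)  # hypothetical measurements
T = ((sample - sample.mean()) ** 2).sum() / sigma0_sq

reject = not (lower <= T <= upper)  # reject if T falls outside the region
</syntaxhighlight>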
== Example chi-squared test for categorical data ==
Suppose there is a city of 1,000,000 residents with four neighborhoods: {{math|A}}, {{math|B}}, {{math|C}}, and {{math|D}}. A random sample of 650 residents of the city is taken and their occupation is recorded as "white collar", "blue collar", or "no collar". The null hypothesis is that each person's neighborhood of residence is independent of the person's occupational classification. The data are tabulated as:

{| class="wikitable"
|-
!  !! {{math|A}} !! {{math|B}} !! {{math|C}} !! {{math|D}} !! Total
|-
! White collar
| 90 || 60 || 104 || 95 || 349
|-
! Blue collar
| 30 || 50 || 51 || 20 || 151
|-
! No collar
| 30 || 40 || 45 || 35 || 150
|-
! Total
| 150 || 150 || 200 || 150 || 650
|}

Let us take the sample living in neighborhood {{math|A}}, 150, to estimate what proportion of the whole 1,000,000 live in neighborhood {{math|A}}. Similarly we take {{sfrac|349|650}} to estimate what proportion of the 1,000,000 are white-collar workers. By the assumption of independence under the hypothesis we should "expect" the number of white-collar workers in neighborhood {{math|A}} to be

<math>150 \times \frac{349}{650} \approx 80.54.</math>

Then in that "cell" of the table, we have

<math>\frac{(\text{observed} - \text{expected})^2}{\text{expected}} = \frac{(90 - 80.54)^2}{80.54} \approx 1.11.</math>

The sum of these quantities over all of the cells is the test statistic; in this example it is approximately 24.57. Under the null hypothesis, it has approximately a chi-squared distribution whose number of degrees of freedom is

<math>(\text{number of rows} - 1)(\text{number of columns} - 1) = (3 - 1)(4 - 1) = 6.</math>

If the test statistic is improbably large according to that chi-squared distribution, then one rejects the null hypothesis of independence.

A related issue is a test of homogeneity. Suppose that instead of giving every resident of each of the four neighborhoods an equal chance of inclusion in the sample, we decide in advance how many residents of each neighborhood to include. Then each resident has the same chance of being chosen as all residents of the same neighborhood, but residents of different neighborhoods would have different probabilities of being chosen if the four sample sizes are not proportional to the populations of the four neighborhoods. In such a case, we would be testing "homogeneity" rather than "independence": the question is whether the proportions of blue-collar, white-collar, and no-collar workers in the four neighborhoods are the same. However, the test is carried out in the same way.
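The whole calculation can be checked numerically; the sketch below, assuming SciPy, feeds the table above to scipy.stats.chi2_contingency, which returns the statistic, p-value, degrees of freedom, and expected counts:

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import chi2_contingency

# Observed counts: rows = white collar, blue collar, no collar;
# columns = neighborhoods A, B, C, D (table above).
observed = np.array([[90, 60, 104, 95],
                     [30, 50,  51, 20],
                     [30, 40,  45, 35]])

# correction=False gives the plain Pearson statistic; Yates's correction
# is only applied to 2 × 2 tables in any case.
stat, pvalue, dof, expected = chi2_contingency(observed, correction=False)
# stat ≈ 24.57, dof = 6, expected[0, 0] ≈ 80.54, matching the text.
</syntaxhighlight>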
== Applications ==
In cryptanalysis, the chi-squared test is used to compare the distribution of plaintext and (possibly) decrypted ciphertext. The lowest value of the test means that the decryption was successful with high probability.[10][11] This method can be generalized for solving modern cryptographic problems.[12]

In bioinformatics, the chi-squared test is used to compare the distribution of certain properties of genes (e.g., genomic content, mutation rate, interaction network clustering, etc.) belonging to different categories (e.g., disease genes, essential genes, genes on a certain chromosome, etc.).[13][14]

== See also ==
{{Portal|Statistics}}

== References ==
1. {{cite journal | last = Pearson | first = Karl | author-link = Karl Pearson | title = On the criterion that a given system of deviations from the probable in the case of a correlated system of variables is such that it can be reasonably supposed to have arisen from random sampling | journal = Philosophical Magazine | series = Series 5 | volume = 50 | year = 1900 | pages = 157–175 | url = http://www.economics.soton.ac.uk/staff/aldrich/1900.pdf | doi = 10.1080/14786440009463897}}
2. {{cite journal | last = Pearson | first = Karl | author-link = Karl Pearson | title = Contributions to the mathematical theory of evolution [abstract] | journal = Proceedings of the Royal Society | volume = 54 | year = 1893 | pages = 329–333 | jstor = 115538 | doi = 10.1098/rspl.1893.0079}}
3. {{cite journal | last = Pearson | first = Karl | author-link = Karl Pearson | title = Contributions to the mathematical theory of evolution, II: Skew variation in homogeneous material | journal = Philosophical Transactions of the Royal Society | volume = 186 | year = 1895 | pages = 343–414 | bibcode = 1895RSPTA.186..343P | jstor = 90649 | doi = 10.1098/rsta.1895.0010}}
4. {{cite journal | last = Pearson | first = Karl | author-link = Karl Pearson | title = Mathematical contributions to the theory of evolution, X: Supplement to a memoir on skew variation | journal = Philosophical Transactions of the Royal Society A | volume = 197 | year = 1901 | pages = 443–459 | bibcode = 1901RSPTA.197..443P | jstor = 90841 | doi = 10.1098/rsta.1901.0023}}
5. {{cite journal | last = Pearson | first = Karl | author-link = Karl Pearson | title = Mathematical contributions to the theory of evolution, XIX: Second supplement to a memoir on skew variation | journal = Philosophical Transactions of the Royal Society A | volume = 216 | year = 1916 | pages = 429–457 | bibcode = 1916RSPTA.216..429P | jstor = 91092 | doi = 10.1098/rsta.1916.0009}}
6. {{cite journal | last = Cochran | first = William G. | author-link = William G. Cochran | title = The Chi-square Test of Goodness of Fit | journal = The Annals of Mathematical Statistics | volume = 23 | year = 1952 | pages = 315–345 | jstor = 2236678 | doi = 10.1214/aoms/1177729380}}
7. {{cite journal | last = Fisher | first = Ronald A. | author-link = Ronald A. Fisher | title = On the Interpretation of chi-squared from Contingency Tables, and the Calculation of P | journal = Journal of the Royal Statistical Society | volume = 85 | year = 1922 | pages = 87–94 | jstor = 2340521 | doi = 10.2307/2340521}}
8. {{cite journal | last = Fisher | first = Ronald A. | author-link = Ronald A. Fisher | title = The Conditions Under Which chi-squared Measures the Discrepancy Between Observation and Hypothesis | journal = Journal of the Royal Statistical Society | volume = 87 | year = 1924 | pages = 442–450 | jstor = 2341149}}
9. {{cite journal | last = Yates | first = Frank | author-link = Frank Yates | date = 1934 | title = Contingency table involving small numbers and the {{math|χ2}} test | journal = Supplement to the Journal of the Royal Statistical Society | volume = 1 | issue = 2 | pages = 217–235 | jstor = 2983604}}
10. {{cite web | title = Chi-squared Statistic | url = http://practicalcryptography.com/cryptanalysis/text-characterisation/chi-squared-statistic/ | website = Practical Cryptography | accessdate = 18 February 2015}}
11. {{cite web | title = Using Chi Squared to Crack Codes | url = http://ibmathsresources.com/2014/06/15/using-chi-squared-to-crack-codes/ | website = IB Maths Resources | publisher = British International School Phuket}}
12. {{cite journal | last1 = Ryabko | first1 = B. Ya. | last2 = Stognienko | first2 = V. S. | last3 = Shokin | first3 = Yu. I. | title = A new test for randomness and its application to some cryptographic problems | journal = Journal of Statistical Planning and Inference | date = 2004 | volume = 123 | pages = 365–376 | url = http://boris.ryabko.net/jspi.pdf | accessdate = 18 February 2015 | doi = 10.1016/s0378-3758(03)00149-6}}
13. {{cite journal | last1 = Feldman | first1 = I. | last2 = Rzhetsky | first2 = A. | last3 = Vitkup | first3 = D. | title = Network properties of genes harboring inherited disease mutations | journal = PNAS | date = 2008 | volume = 105 | issue = 11 | pages = 4323–4328 | url = http://www.pnas.org/content/105/11/4323 | accessdate = 29 June 2018 | doi = 10.1073/pnas.0701722105 | bibcode = 2008PNAS..105.4323F | pmc = 2393821}}
14. {{cite web | title = chi-square-tests | url = https://visa.pharmacy.wsu.edu/bioinformatics/documents/chi-square-tests.pdf | accessdate = 29 June 2018}}