Bayesian information criterion

  1. Definition

  2. Properties

  3. Limitations

  4. Gaussian special case

  5. BIC for high-dimensional model

  6. See also

  7. Notes

  8. References

  9. External links


In statistics, the Bayesian information criterion (BIC) or Schwarz information criterion (also SIC, SBC, SBIC) is a criterion for model selection among a finite set of models; the model with the lowest BIC is preferred. It is based, in part, on the likelihood function and it is closely related to the Akaike information criterion (AIC).

When fitting models, it is possible to increase the likelihood by adding parameters, but doing so may result in overfitting. Both BIC and AIC attempt to resolve this problem by introducing a penalty term for the number of parameters in the model; the penalty term is larger in BIC than in AIC.

The BIC was developed by Gideon E. Schwarz and published in a 1978 paper,[1] where he gave a Bayesian argument for adopting it.

Definition

The BIC is formally defined as[2][3]

    BIC = k ln(n) − 2 ln(L̂),

where

  • L̂ = the maximized value of the likelihood function of the model M, i.e. L̂ = p(x | θ̂, M), where θ̂ are the parameter values that maximize the likelihood function;
  • x = the observed data;
  • n = the number of data points in x, the number of observations, or equivalently, the sample size;
  • k = the number of parameters estimated by the model. For example, in multiple linear regression, the estimated parameters are the intercept, the q slope parameters, and the constant variance of the errors; thus, k = q + 2.
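
As a small illustration of the definition above (a minimal sketch, not part of the source; the synthetic data and the choice of a normal model are assumptions of this example), the BIC of a normal model with mean and variance both estimated by maximum likelihood can be computed as:

    import numpy as np

    rng = np.random.default_rng(42)
    x = rng.normal(loc=5.0, scale=2.0, size=200)   # synthetic observations
    n = x.size

    # MLE for a normal model: sample mean and the biased (divide-by-n) variance
    mu_hat = x.mean()
    var_hat = x.var()
    k = 2                                          # parameters estimated: mu and sigma^2

    # maximized log-likelihood of the normal model
    loglik = -0.5 * n * (np.log(2 * np.pi * var_hat) + 1)

    bic = k * np.log(n) - 2 * loglik
    print(f"n={n}, k={k}, max log-likelihood={loglik:.2f}, BIC={bic:.2f}")

A lower value of this quantity, compared across candidate models fitted to the same data, indicates the preferred model.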

Konishi and Kitagawa (2008, p. 217) derive the BIC to approximate the distribution of the data, integrating out the parameters using Laplace's method, starting with the following model evidence:

    p(x | M) = ∫ p(x | θ, M) π(θ | M) dθ,

where π(θ | M) is the prior for θ under model M.

The log-likelihood, ln(p(x | θ, M)), is then expanded to a second-order Taylor series about the MLE, θ̂, assuming it is twice differentiable, as follows:

    ln(p(x | θ, M)) = ln(L̂) − (n/2) (θ − θ̂)′ I(θ̂) (θ − θ̂) + R(x, θ),

where I(θ̂) is the average observed information per observation and the prime (′) denotes the transpose of the vector (θ − θ̂). To the extent that R(x, θ) is negligible and π(θ | M) is relatively linear near θ̂, we can integrate out θ to get the following:

    p(x | M) ≈ L̂ (2π/n)^(k/2) |I(θ̂)|^(−1/2) π(θ̂ | M)

As n increases, we can ignore |I(θ̂)| and π(θ̂ | M) as they are O(1). Thus,

    p(x | M) ≈ exp(ln(L̂) − (k/2) ln(n)) = exp(−BIC/2),

where BIC is defined as above, and θ̂ either (a) is the Bayesian posterior mode or (b) uses the MLE and the prior π(θ | M) has nonzero slope at the MLE. Then the posterior satisfies

    p(M | x) ∝ p(x | M) p(M) ≈ exp(−BIC/2) p(M).
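
Because the argument above is asymptotic, a numerical check can make it concrete. The following minimal sketch (an illustration only; the conjugate normal model, the prior, and the scipy calls are assumptions of this example, not part of the source derivation) compares the exact log evidence ln p(x | M) with −BIC/2 for data drawn from N(μ, 1) under a N(0, 4) prior on μ:

    import numpy as np
    from scipy.stats import multivariate_normal, norm

    rng = np.random.default_rng(0)
    tau2 = 4.0                                  # prior variance for mu
    for n in (10, 100, 1000):
        x = rng.normal(loc=1.0, scale=1.0, size=n)
        # exact log evidence: under this model, x is marginally N(0, I + tau2 * J)
        cov = np.eye(n) + tau2 * np.ones((n, n))
        log_evidence = multivariate_normal.logpdf(x, mean=np.zeros(n), cov=cov)
        # BIC with k = 1 (only mu is estimated; the unit variance is known)
        loglik_hat = norm.logpdf(x, loc=x.mean(), scale=1.0).sum()
        bic = 1 * np.log(n) - 2 * loglik_hat
        print(f"n={n:5d}  log evidence={log_evidence:10.2f}  -BIC/2={-bic / 2:10.2f}")

The gap between the two printed columns should remain roughly constant as n grows, reflecting the O(1) terms dropped in the approximation, while both quantities themselves grow in magnitude.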

Properties

  • It is independent of the prior.
  • It can measure the efficiency of the parameterized model in terms of predicting the data.
  • It penalizes the complexity of the model where complexity refers to the number of parameters in the model.
  • It is approximately equal to the minimum description length criterion but with negative sign.
  • It can be used to choose the number of clusters according to the intrinsic complexity present in a particular dataset.
  • It is closely related to other penalized likelihood criteria such as the deviance information criterion and the Akaike information criterion.

Limitations

The BIC suffers from two main limitations:[4]

  1. the above approximation is valid only for sample size n much larger than the number k of parameters in the model;
  2. the BIC cannot handle complex collections of models, as in the variable selection (or feature selection) problem in high dimensions.[4]

Gaussian special case

Under the assumption that the model errors or disturbances are independent and identically distributed according to a normal distribution, and under the boundary condition that the derivative of the log likelihood with respect to the true variance is zero, the BIC becomes (up to an additive constant, which depends only on n and not on the model):[5]

    BIC = n ln(σ̂²_e) + k ln(n),

where σ̂²_e is the error variance. The error variance in this case is defined as

    σ̂²_e = (1/n) Σ_{i=1}^{n} (x_i − x̂_i)²,

which is a biased estimator for the true variance.

In terms of the residual sum of squares (RSS), the BIC is

    BIC = n ln(RSS/n) + k ln(n).
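
As a brief check (a minimal sketch with synthetic data; the variable names are this example's assumptions), the RSS form above differs from the general definition k ln(n) − 2 ln(L̂) only by the model-independent constant n(ln(2π) + 1):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100
    x = rng.normal(size=(n, 2))                        # two candidate predictors
    y = 1.5 * x[:, 0] + rng.normal(scale=0.8, size=n)  # only the first predictor matters

    # ordinary least squares with an intercept and both predictors
    X = np.column_stack([np.ones(n), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    k = X.shape[1] + 1                                 # coefficients plus the error variance

    bic_rss = n * np.log(rss / n) + k * np.log(n)      # RSS form above

    # general definition with Gaussian errors and sigma^2 estimated as RSS/n
    loglik = -0.5 * n * (np.log(2 * np.pi * rss / n) + 1)
    bic_full = k * np.log(n) - 2 * loglik

    print(bic_full - bic_rss, n * (np.log(2 * np.pi) + 1))  # the two printed values agree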

When testing multiple linear models against a saturated model, the BIC can be rewritten in terms of the deviance χ² as:[6]

    BIC = χ² + k ln(n),

where k is the number of model parameters in the test.

When picking from several models, the one with the lowest BIC is preferred. The BIC is an increasing function of the error variance and an increasing function of k. That is, unexplained variation in the dependent variable and the number of explanatory variables increase the value of BIC. Hence, lower BIC implies either fewer explanatory variables, better fit, or both. The strength of the evidence against the model with the higher BIC value can be summarized as follows:[6]
  ΔBIC      Evidence against the model with the higher BIC
  0 to 2    Not worth more than a bare mention
  2 to 6    Positive
  6 to 10   Strong
  > 10      Very strong
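
For instance, the following sketch (synthetic data; the model choices are assumptions of this example) compares a smaller and a larger nested regression model fitted to the same dependent variable, so that the resulting ΔBIC can be read against the table above:

    import numpy as np

    def bic_from_rss(n, k, rss):
        # Gaussian-errors BIC, up to the model-independent additive constant
        return n * np.log(rss / n) + k * np.log(n)

    rng = np.random.default_rng(1)
    n = 200
    x1 = rng.normal(size=n)
    x2 = rng.normal(size=n)                      # an irrelevant extra predictor
    y = 2.0 + 1.0 * x1 + rng.normal(size=n)

    def fit_rss(columns):
        X = np.column_stack(columns)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return np.sum((y - X @ beta) ** 2)

    ones = np.ones(n)
    rss_small = fit_rss([ones, x1])              # intercept + x1
    rss_big = fit_rss([ones, x1, x2])            # intercept + x1 + x2

    bic_small = bic_from_rss(n, k=3, rss=rss_small)   # 2 coefficients + error variance
    bic_big = bic_from_rss(n, k=4, rss=rss_big)       # 3 coefficients + error variance

    print(f"BIC(small)={bic_small:.1f}  BIC(big)={bic_big:.1f}  dBIC={bic_big - bic_small:.1f}")

Here a positive ΔBIC counts as evidence against the larger model, which carries the higher BIC.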

The BIC generally penalizes free parameters more strongly than the Akaike information criterion, though it depends on the size of n and the relative magnitudes of n and k.

It is important to keep in mind that the BIC can be used to compare estimated models only when the numerical values of the dependent variable are identical for all estimates being compared. The models being compared need not be nested, unlike the case when models are being compared using an F-test or a likelihood ratio test.

BIC for high-dimensional model

For a high-dimensional model with a large number of potential variables, where the true model size is bounded by a constant, modified BICs have been proposed in Chen and Chen (2008) and Gao and Song (2010). For a high-dimensional model with a large number of potential variables, where the true model size is unbounded, a high-dimensional BIC has been proposed in Gao and Carroll (2017). The high-dimensional BIC takes the form of a penalized likelihood whose penalty is scaled by a multiplicative factor that can be any number greater than zero.

Gao and Carroll (2017) also proposed a pseudo-likelihood BIC, for which the pseudo log-likelihood is used instead of the true log-likelihood; its penalty involves an estimated degrees of freedom and an unknown constant.

To achieve theoretical model-selection consistency when the number of potential variables diverges, the two high-dimensional BICs above require an appropriate choice of the multiplicative factor. However, in practical use, the high-dimensional BIC can take a simpler form in which various choices of the multiplicative factor can be used; in empirical studies, particular fixed choices have been shown to have good empirical performance.
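
As a concrete, published member of this family of criteria, the extended BIC of Chen and Chen (2008) augments the usual k ln(n) penalty with an extra term 2γ ln(C(p, k)) that depends on the number p of candidate variables, with a tuning constant γ in [0, 1]. The following minimal Python sketch implements it for Gaussian linear regression; it is offered as an illustration of that specific criterion, not of the Gao and Carroll (2017) criteria described above, and the function name and the simplified parameter count are this example's assumptions:

    import numpy as np
    from math import comb, log

    def ebic_gaussian(y, X_subset, p_total, gamma=0.5):
        # Extended BIC (Chen and Chen, 2008) for an OLS fit of y on X_subset,
        # where p_total is the number of candidate predictors and gamma is in [0, 1].
        n, k = X_subset.shape
        beta, *_ = np.linalg.lstsq(X_subset, y, rcond=None)
        rss = np.sum((y - X_subset @ beta) ** 2)
        # Gaussian log-likelihood term as in the special case above, the usual
        # k*ln(n) penalty, and the extra 2*gamma*ln(p_total choose k) penalty
        return n * log(rss / n) + k * log(n) + 2 * gamma * log(comb(p_total, k))

    rng = np.random.default_rng(2)
    n, p_total = 100, 50
    X = rng.normal(size=(n, p_total))
    y = X[:, 0] - X[:, 1] + rng.normal(size=n)
    print(ebic_gaussian(y, X[:, :2], p_total), ebic_gaussian(y, X[:, :5], p_total))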

See also

  • Akaike information criterion
  • Bayesian model comparison
  • Deviance information criterion
  • Hannan–Quinn information criterion
  • Jensen–Shannon divergence
  • Kullback–Leibler divergence
  • Minimum message length

Notes

1. ^ Schwarz, Gideon E. (1978). "Estimating the dimension of a model". Annals of Statistics. 6 (2): 461–464. doi:10.1214/aos/1176344136. MR 468014.
2. ^ Wit, Ernst; van den Heuvel, Edwin; Romeyn, Jan-Willem (2012). "'All models are wrong...': an introduction to model uncertainty". Statistica Neerlandica. 66 (3): 217–236. doi:10.1111/j.1467-9574.2012.00530.x.
3. ^ NOTE: The AIC, AICc and BIC defined by Claeskens and Hjort (2008) are the negative of those defined in this article and in most other standard references.
4. ^ Giraud, C. (2015). Introduction to High-Dimensional Statistics. Chapman & Hall/CRC. ISBN 9781482237948.
5. ^ Priestley, M. B. (1981). Spectral Analysis and Time Series. Academic Press. ISBN 978-0-12-564922-3. (p. 375).
6. ^ Kass, Robert E.; Raftery, Adrian E. (1995). "Bayes Factors". Journal of the American Statistical Association. 90 (430): 773–795. doi:10.2307/2291091. JSTOR 2291091.

References

  • Bhat, H. S.; Kumar, N. (2010). "On the derivation of the Bayesian Information Criterion". http://nscs00.ucmerced.edu/~nkumar4/BhatKumarBIC.pdf (archived 28 March 2012 at https://web.archive.org/web/20120328065032/http://nscs00.ucmerced.edu/~nkumar4/BhatKumarBIC.pdf).
  • Claeskens, G.; Hjort, N. L. (2008). Model Selection and Model Averaging. Cambridge University Press. NOTE: The AIC and AICc defined by Claeskens and Hjort are the negative of that defined by most other authors.
  • Findley, D. F. (1991). "Counterexamples to parsimony and BIC". Annals of the Institute of Statistical Mathematics. 43 (3): 505–514. doi:10.1007/BF00053369.
  • Kass, R. E.; Wasserman, L. (1995). "A reference Bayesian test for nested hypotheses and its relationship to the Schwarz criterion". Journal of the American Statistical Association. 90 (431): 928–934. doi:10.2307/2291327. JSTOR 2291327.
  • Konishi, Sadanori; Kitagawa, Genshiro (2008). Information Criteria and Statistical Modeling. Springer. ISBN 978-0-387-71886-6.
  • Liddle, A. R. (2007). "Information criteria for astrophysical model selection". Monthly Notices of the Royal Astronomical Society. 377: L74–L78. doi:10.1111/j.1745-3933.2007.00306.x. arXiv:astro-ph/0701113.
  • McQuarrie, A. D. R.; Tsai, C.-L. (1998). Regression and Time Series Model Selection. World Scientific.
  • Chen, J.; Chen, Z. (2008). "Extended Bayesian information criteria for model selection with large model spaces". Biometrika. 95 (3): 759–771. doi:10.1093/biomet/asn034.
  • Gao, X.; Song, P. (2010). "Composite likelihood Bayesian information criteria for model selection in high-dimensional data". Journal of the American Statistical Association. 105 (492): 1531–1540. doi:10.1198/jasa.2010.tm09414.
  • Gao, X.; Carroll, R. J. (2017). "Data integration with high dimensionality". Biometrika. 104: 251–272. doi:10.1093/biomet/asx023.

External links

  • Information Criteria and Model Selection
  • Sparse Vector Autoregressive Modeling (https://arxiv.org/pdf/1207.0520.pdf)

Categories: Model selection | Bayesian inference | Regression variable selection
