Chow–Liu tree

  1. The Chow–Liu representation

  2. The Chow–Liu algorithm

  3. Variations on Chow–Liu trees

  4. See also

  5. Notes

  6. References

In probability theory and statistics, a Chow–Liu tree is an efficient method for constructing a second-order product approximation of a joint probability distribution, first described in a paper by {{Harvtxt|Chow|Liu|1968}}. The goals of such a decomposition, as with Bayesian networks in general, may be either data compression or inference.

The Chow–Liu representation

The Chow–Liu method describes a joint probability distribution as a product of second-order conditional and marginal distributions. For example, the six-dimensional distribution <math>P(X_1, X_2, \ldots, X_6)</math> might be approximated as

<math>P'(X_1, X_2, \ldots, X_6) = P(X_6 \mid X_5)\, P(X_5 \mid X_2)\, P(X_4 \mid X_2)\, P(X_3 \mid X_2)\, P(X_2 \mid X_1)\, P(X_1)</math>

where each new term in the product introduces just one new variable, and the product can be represented as a first-order dependency tree, as shown in the figure. The Chow–Liu algorithm (below) determines which conditional probabilities are to be used in the product approximation. In general, unless there are no third-order or higher-order interactions, the Chow–Liu approximation is indeed an approximation, and cannot capture the complete structure of the original distribution. {{Harvtxt|Pearl|1988}} provides a modern analysis of the Chow–Liu tree as a Bayesian network.
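A product approximation of this form can be sketched in code. The following is a minimal illustration with three binary variables; the tree structure and probability tables are made-up numbers for illustration, not taken from the paper. Evaluating the approximation is simply a product of one root marginal and one conditional factor per remaining variable.

```python
# Illustrative first-order dependency tree over variables x1, x2, x3:
#   x1 -> x2 -> x3, so P'(x1, x2, x3) = P(x1) P(x2 | x1) P(x3 | x2).
# All numbers below are invented for the sketch.
root_marginal = {0: 0.6, 1: 0.4}                         # P(x1)
cond = {
    # child: (parent, table[parent_value][child_value])
    2: (1, {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}),  # P(x2 | x1)
    3: (2, {0: {0: 0.9, 1: 0.1}, 1: {0: 0.5, 1: 0.5}}),  # P(x3 | x2)
}

def approx_joint(x):
    """Evaluate P'(x) = P(x1) * prod over children of P(child | parent)."""
    p = root_marginal[x[1]]
    for child, (parent, table) in cond.items():
        p *= table[x[parent]][x[child]]
    return p

# The product approximation is itself a valid distribution: it sums to 1.
total = sum(approx_joint({1: a, 2: b, 3: c})
            for a in (0, 1) for b in (0, 1) for c in (0, 1))
```

Because each factor conditions one new variable on a single earlier variable, the product telescopes by the chain rule and always normalizes to 1.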

The Chow–Liu algorithm

Chow and Liu show how to select second-order terms for the product approximation so that, among all such second-order approximations (first-order dependency trees), the constructed approximation <math>P'</math> has the minimum Kullback–Leibler distance to the actual distribution <math>P</math>, and is thus the closest approximation in the classical information-theoretic sense. The Kullback–Leibler distance between a second-order product approximation and the actual distribution is shown to be

<math>D(P \parallel P') = -\sum I(X_i; X_{j(i)}) + \sum H(X_i) - H(X_1, X_2, \ldots, X_n)</math>

where <math>I(X_i; X_{j(i)})</math> is the mutual information between variable <math>X_i</math> and its parent <math>X_{j(i)}</math>, and <math>H(X_1, X_2, \ldots, X_n)</math> is the joint entropy of variable set <math>\{X_1, X_2, \ldots, X_n\}</math>. Since the terms <math>\sum H(X_i)</math> and <math>H(X_1, X_2, \ldots, X_n)</math> are independent of the dependency ordering in the tree, only the sum of the pairwise mutual informations, <math>\sum I(X_i; X_{j(i)})</math>, determines the quality of the approximation. Thus, if every branch (edge) on the tree is given a weight corresponding to the mutual information between the variables at its vertices, then the tree which provides the optimal second-order approximation to the target distribution is just the maximum-weight tree. The equation above also highlights the role of the dependencies in the approximation: when no dependencies exist and the first term in the equation is absent, we have only an approximation based on first-order marginals, and the distance between the approximation and the true distribution is due to the redundancies that are not accounted for when the variables are treated as independent. As we specify second-order dependencies, we begin to capture some of that structure and reduce the distance between the two distributions.

Chow and Liu provide a simple algorithm for constructing the optimal tree; at each stage of the procedure the algorithm simply adds the maximum mutual information pair to the tree. See the original paper, {{Harvtxt|Chow|Liu|1968}}, for full details. A more efficient tree construction algorithm for the common case of sparse data was outlined in {{Harvtxt|Meilă|1999}}.
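The procedure above can be sketched as two steps: estimate the pairwise mutual informations from data, then extract a maximum-weight spanning tree over those weights. The following is an illustrative reimplementation (using Kruskal's algorithm with union-find for the spanning tree), not the authors' original formulation; function and variable names are invented for the sketch.

```python
# Chow-Liu sketch: empirical pairwise mutual information + max-weight
# spanning tree (Kruskal). Weights come from the MI decomposition of the
# Kullback-Leibler distance discussed in the text.
from collections import Counter
from itertools import combinations
from math import log

def mutual_information(xs, ys):
    """Empirical mutual information I(X; Y), in nats, from paired samples."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    # Only observed (x, y) pairs contribute, so log(0) never arises.
    return sum(c / n * log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def chow_liu_edges(data):
    """data: list of samples, each a tuple of discrete values.
    Returns the edges of a maximum-weight dependency tree."""
    n_vars = len(data[0])
    cols = list(zip(*data))                       # one column per variable
    weights = {(i, j): mutual_information(cols[i], cols[j])
               for i, j in combinations(range(n_vars), 2)}
    parent = list(range(n_vars))                  # union-find for Kruskal
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]         # path halving
            v = parent[v]
        return v
    edges = []
    for (i, j), w in sorted(weights.items(), key=lambda kv: -kv[1]):
        ri, rj = find(i), find(j)
        if ri != rj:                              # keep only tree edges
            parent[ri] = rj
            edges.append((i, j))
    return edges
```

For example, on samples where variable 0 and variable 1 are identical and variable 2 is independent of both, the edge (0, 1) carries the largest mutual information and is selected first. Computing all pairwise mutual informations dominates the cost; the sparse-data speedup of {{Harvtxt|Meilă|1999}} avoids materializing every pairwise table.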

In a later paper {{Harvtxt|Chow|Wagner|1973}}, Chow and Wagner proved that learning of the Chow–Liu tree is consistent given samples (or observations) drawn i.i.d. from a tree-structured distribution. In other words, the probability of learning an incorrect tree decays to zero as the number of samples tends to infinity. The main idea in the proof is the continuity of the mutual information in the pairwise marginal distribution. The exponential rate at which this error probability converges to zero has since been determined.[1]

Variations on Chow–Liu trees

The obvious problem that arises when the actual distribution is not in fact a second-order dependency tree can still, in some cases, be addressed by fusing or aggregating densely connected subsets of variables to obtain a "large-node" Chow–Liu tree {{Harv|Huang|King|2002}}, or by extending the idea of greedy maximum-branch-weight selection to non-tree (multiple-parent) structures {{Harv|Williamson|2000}}. (Similar techniques of variable substitution and construction are common in the Bayes network literature, e.g., for dealing with loops; see {{Harvtxt|Pearl|1988}}.)

Generalizations of the Chow–Liu tree are the so-called t-cherry junction trees. A t-cherry junction tree has been proved to provide an approximation of a discrete multivariate probability distribution that is at least as good as, and often better than, the one given by the Chow–Liu tree.

For the third-order t-cherry junction tree see {{Harv|Kovács|Szántai|2010}}; for the kth-order t-cherry junction tree see {{Harv|Szántai|Kovács|2010}}. The second-order t-cherry junction tree is in fact the Chow–Liu tree.

See also

  • Bayesian network
  • Knowledge representation

Notes

1. ^A Large-Deviation Analysis for the Maximum-Likelihood Learning of Tree Structures. V. Y. F. Tan, A. Anandkumar, L. Tong and A. Willsky. In the International symposium on information theory (ISIT), July 2009.

References

{{refbegin|2}}
  • {{Citation

| last=Chow | first=C. K. | last2=Liu | first2=C.N.
| title=Approximating discrete probability distributions with dependence trees
| journal=IEEE Transactions on Information Theory
| volume=IT-14 | issue=3 | year=1968 | pages=462–467 | url= | doi=10.1109/tit.1968.1054142| citeseerx=10.1.1.133.9772 }}.
  • {{Citation

| last=Huang | first=Kaizhu | last2=King | first2=Irwin
| last3=Lyu | first3=Michael R. | year= 2002
| chapter=Constructing a large node Chow–Liu tree based on frequent itemsets
|editor1=Wang, Lipo |editor2=Rajapakse, Jagath C. |editor3=Fukushima, Kunihiko |editor4=Lee, Soo-Young |editor5=Yao, Xin | title=Proceedings of the 9th International Conference on Neural Information Processing (ICONIP'02)
| place=Singapore | accessdate= | pages=498–502}}.
  • {{Citation

| last=Pearl | first=Judea
| title=Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference
| publisher=Morgan Kaufmann | place=San Mateo, CA | year=1988}}
  • {{Citation

| last=Williamson | first=Jon | year= 2000
| chapter=Approximating discrete probability distributions with Bayesian networks
| title=Proceedings of the International Conference on Artificial Intelligence in Science and Technology
| place=Tasmania | accessdate= | pages=16–20}}.
  • {{Citation

| last=Meilă | first=Marina | year= 1999
| chapter=An Accelerated Chow and Liu Algorithm: Fitting Tree Distributions to High-Dimensional Sparse Data
| title=Proceedings of the Sixteenth International Conference on Machine Learning
| publisher=Morgan Kaufmann | accessdate= | pages=249–257}}.
  • {{Citation

| last=Chow | first=C. K. | last2=Wagner | first2=T.
| title=Consistency of an estimate of tree-dependent probability distribution
| journal=IEEE Transactions on Information Theory
| volume=IT-19 | issue=3 | year=1973 | pages=369–371 | doi=10.1109/tit.1973.1055013}}.
  • {{Citation

| last=Kovács | first=E. | last2=Szántai | first2=T.
| title=On the approximation of a discrete multivariate probability distribution using the new concept of t-cherry junction tree
| journal=Lecture Notes in Economics and Mathematical Systems
| volume=633, Part 1 | year=2010 | pages= 39–56 | doi=10.1007/978-3-642-03735-1_3| isbn=978-3-642-03734-4 }}.
  • {{Citation

| last=Szántai | first=T. | last2=Kovács | first2=E.
| title=Hypergraphs as a mean of discovering the dependence structure of a discrete multivariate probability distribution
| journal=Annals of Operations Research | year=2010 | pages= }}.{{refend}}{{DEFAULTSORT:Chow-Liu tree}}

