Graphoid

  1. History

  2. Definition

  3. Types of graphoids

     3.1 Probabilistic graphoids
     3.2 Correlational graphoids
     3.3 Relational graphoids
     3.4 Graph-induced graphoids
     3.5 DAG-induced graphoids

  4. Inclusion and construction

  5. References

This article is about the use in logic and probability. For the use in matroid theory, see Matroid.

A graphoid is a set of statements of the form "X is irrelevant to Y given that we know Z", where X, Y and Z are sets of variables. The notions of "irrelevance" and "given that we know" may be given different interpretations, including probabilistic, relational and correlational ones, depending on the application. These interpretations share common properties that can be captured by paths in graphs (hence the name "graphoid"). The theory of graphoids characterizes these properties by a finite set of axioms that are common to informational irrelevance and its graphical representations.

History

Judea Pearl and Azaria Paz[1] coined the term "graphoids" after discovering that a set of axioms that govern conditional independence in probability theory is shared by undirected graphs. Variables are represented as nodes in a graph in such a way that variable sets X and Y are independent conditioned on Z in the distribution whenever node set Z separates X from Y in the graph. Axioms for conditional independence in probability were derived earlier by A. Philip Dawid[2] and Wolfgang Spohn.[3]

The correspondence between dependence and graphs was later extended to directed acyclic graphs (DAGs)[4][5][6] and to other models of dependency.[1][7]

Definition

A dependency model M is a set of triplets (X,Z,Y) for which the predicate I(X,Z,Y), read "X is independent of Y given Z", is true. A graphoid is defined as a dependency model that is closed under the following five axioms:

  1. Symmetry: $I(X,Z,Y) \Leftrightarrow I(Y,Z,X)$
  2. Decomposition: $I(X,Z,Y \cup W) \Rightarrow I(X,Z,Y) \;\&\; I(X,Z,W)$
  3. Weak Union: $I(X,Z,Y \cup W) \Rightarrow I(X,Z \cup W,Y)$
  4. Contraction: $I(X,Z,Y) \;\&\; I(X,Z \cup Y,W) \Rightarrow I(X,Z,Y \cup W)$
  5. Intersection: $I(X,Z \cup W,Y) \;\&\; I(X,Z \cup Y,W) \Rightarrow I(X,Z,Y \cup W)$

A semi-graphoid is a dependency model closed under axioms (1)–(4). These five axioms together are known as the graphoid axioms.[8] Intuitively, the weak union and contraction properties mean that irrelevant information should not alter the relevance status of other propositions in the system; what was relevant remains relevant and what was irrelevant remains irrelevant.[8]
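For a finite dependency model given as an explicit list of triplets, closure under the semi-graphoid axioms can be checked mechanically. The following Python sketch illustrates this; the representation (triples of frozensets with pairwise-disjoint components) and the function names are illustrative choices of this example, not notation from the references, and trivial statements involving empty sets are skipped rather than required.

```python
from itertools import combinations

def subsets(s):
    """All subsets of the frozenset s, including the empty set."""
    items = list(s)
    return [frozenset(c) for r in range(len(items) + 1)
            for c in combinations(items, r)]

def is_semi_graphoid(M):
    """Check closure of a finite dependency model M under axioms (1)-(4).

    M is a set of triples (X, Z, Y) of frozensets, with pairwise-disjoint
    components, for which I(X, Z, Y) is asserted to hold.
    """
    for (X, Z, Y) in M:
        if (Y, Z, X) not in M:                      # (1) symmetry
            return False
        for W in subsets(Y):
            rest = Y - W
            if not W or not rest:
                continue                            # skip trivial splits
            if (X, Z, rest) not in M:               # (2) decomposition
                return False
            if (X, Z | W, rest) not in M:           # (3) weak union
                return False
    for (X, Z, Y) in M:                             # (4) contraction
        for (X2, ZY, W) in M:
            if X2 == X and ZY == Z | Y and (X, Z, Y | W) not in M:
                return False
    return True
```

Adding an analogous check for the intersection axiom would test whether the model is a full graphoid.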

Types of graphoids

Probabilistic graphoids[1][7]

Conditional independence, defined as

$I(X,Z,Y) \Leftrightarrow P(X \mid Z,Y) = P(X \mid Z)$ whenever $P(Y,Z) > 0$,

is a semi-graphoid which becomes a full graphoid when P is strictly positive.
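For a discrete distribution this condition can be verified numerically by comparing marginals, since it is equivalent to $P(x,y,z)\,P(z) = P(x,z)\,P(y,z)$ for all value combinations. The sketch below assumes the joint distribution is given as a dictionary from assignment tuples to probabilities; the helper names are hypothetical.

```python
def marginal(P, names, vars_):
    """Marginalize a joint table P (assignment tuple -> probability) onto vars_."""
    idx = [names.index(v) for v in vars_]
    out = {}
    for assignment, p in P.items():
        key = tuple(assignment[i] for i in idx)
        out[key] = out.get(key, 0.0) + p
    return out

def prob_indep(P, names, X, Z, Y, tol=1e-12):
    """Test I(X,Z,Y): P(x,y,z) * P(z) == P(x,z) * P(y,z) for all values."""
    Pxyz = marginal(P, names, X + Y + Z)
    Pxz = marginal(P, names, X + Z)
    Pyz = marginal(P, names, Y + Z)
    Pz = marginal(P, names, Z)
    for xz, pxz in Pxz.items():
        x, z = xz[:len(X)], xz[len(X):]
        for yz, pyz in Pyz.items():
            y, z2 = yz[:len(Y)], yz[len(Y):]
            if z2 != z:
                continue
            if abs(Pxyz.get(x + y + z, 0.0) * Pz[z] - pxz * pyz) > tol:
                return False
    return True

# Example: two independent fair coins a and b.
# P = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
# prob_indep(P, ['a', 'b'], ['a'], [], ['b'])  ->  True
```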

Correlational graphoids[1][7]

A dependency model is a correlational graphoid if for some probability function we have

$I_c(X,Z,Y) \Leftrightarrow \rho_{xy \cdot Z} = 0$ for every $x \in X$ and $y \in Y$,

where $\rho_{xy \cdot Z}$ is the partial correlation between x and y given the set Z.

In other words, the linear estimation error of the variables in X using measurements on Z would not be reduced by adding measurements of the variables in Y, thus making Y irrelevant to the estimation of X. Correlational and probabilistic dependency models coincide for normal distributions.
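To make the criterion concrete, the partial correlation can be read off the precision (inverse covariance) matrix. A minimal NumPy sketch follows; the function names are illustrative, the covariance is estimated from a data matrix, and with finite samples one would test whether the partial correlation is statistically indistinguishable from zero rather than compare it to a fixed tolerance.

```python
import numpy as np

def partial_correlation(data, x, y, Z):
    """Partial correlation rho_{xy.Z} from a data matrix (rows = samples).

    x and y are column indices; Z is a list of column indices to condition on.
    Computed from the precision (inverse covariance) matrix of [x, y] + Z.
    """
    cols = [x, y] + list(Z)
    cov = np.cov(data[:, cols], rowvar=False)
    prec = np.linalg.inv(cov)
    return -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])

def correlational_indep(data, X, Z, Y, tol=1e-8):
    """I_c(X,Z,Y) holds iff rho_{xy.Z} vanishes for every x in X, y in Y."""
    return all(abs(partial_correlation(data, x, y, Z)) < tol
               for x in X for y in Y)
```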

Relational graphoids[1][7]

A dependency model is a relational graphoid if it satisfies

$P(x,z) > 0 \;\&\; P(y,z) > 0 \;\Rightarrow\; P(x,y,z) > 0.$

In words, the range of values permitted for X is not restricted by the choice of Y, once Z is fixed. Independence statements belonging to this model are similar to embedded multi-valued dependencies (EMVDs) in databases.
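For a finite relation stored as a set of tuples, the condition can be checked directly: every X-value and every Y-value that each co-occur with a given Z-value must also co-occur jointly with that Z-value. The sketch below is a minimal illustration; the representation and names are assumptions of this example.

```python
def relational_indep(relation, names, X, Z, Y):
    """I(X,Z,Y) in a relation (a set of tuples over the variables `names`):

    whenever an X-value and a Y-value each co-occur with the same Z-value,
    they must also co-occur together with that Z-value.
    """
    ix = [names.index(v) for v in X]
    iy = [names.index(v) for v in Y]
    iz = [names.index(v) for v in Z]
    proj = lambda t, idx: tuple(t[i] for i in idx)
    xz = {(proj(t, ix), proj(t, iz)) for t in relation}
    yz = {(proj(t, iy), proj(t, iz)) for t in relation}
    xyz = {(proj(t, ix), proj(t, iy), proj(t, iz)) for t in relation}
    return all((x, y, z) in xyz
               for (x, z) in xz
               for (y, z2) in yz if z2 == z)
```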

Graph-induced graphoids

If there exists an undirected graph G such that

$I(X,Z,Y) \Leftrightarrow \langle X,Z,Y \rangle_G,$

where $\langle X,Z,Y \rangle_G$ denotes vertex separation of X from Y by Z in G, then the graphoid is called graph-induced.

In other words, there exists an undirected graph G such that every independence statement in M is reflected as a vertex separation in G and vice versa. A necessary and sufficient condition for a dependency model to be a graph-induced graphoid is that it satisfies the following axioms: symmetry, decomposition, intersection, strong union and transitivity.

Strong union states that

$I(X,Z,Y) \Rightarrow I(X,Z \cup W,Y).$

Transitivity states that

$I(X,Z,Y) \;\&\; \gamma \notin X \cup Y \cup Z \;\Rightarrow\; I(X,Z,\gamma) \text{ or } I(\gamma,Z,Y),$

where $\gamma$ is a single variable. The axioms symmetry, decomposition, intersection, strong union and transitivity constitute a complete characterization of undirected graphs.[9]
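Deciding whether a particular statement I(X,Z,Y) holds in a given undirected graph G amounts to a vertex-separation test: remove Z and ask whether any path still connects X to Y. A short breadth-first-search sketch (the adjacency-dict representation and function name are illustrative):

```python
from collections import deque

def separates(adj, X, Z, Y):
    """True iff Z intercepts every path between X and Y in the undirected
    graph given as an adjacency dict {node: set of neighbours}."""
    X, Y, Z = set(X), set(Y), set(Z)
    seen = set(X - Z)
    queue = deque(seen)
    while queue:
        u = queue.popleft()
        if u in Y:
            return False              # found a path from X to Y avoiding Z
        for v in adj.get(u, ()):
            if v not in seen and v not in Z:
                seen.add(v)
                queue.append(v)
    return True

# Example: chain a - b - c; b separates a from c.
# adj = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}
# separates(adj, {'a'}, {'b'}, {'c'})  ->  True
```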

DAG-induced graphoids

A graphoid is termed DAG-induced if there exists a directed acyclic graph D such that $I(X,Z,Y) \Leftrightarrow \langle X,Z,Y \rangle_D$, where $\langle X,Z,Y \rangle_D$ stands for d-separation in D. d-separation (the "d" connotes "directional") extends the notion of vertex separation from undirected graphs to directed acyclic graphs. It permits the reading of conditional independencies from the structure of Bayesian networks. However, conditional independencies in a DAG cannot be completely characterized by a finite set of axioms.[10]
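Since d-separation is purely graphical, it can be tested without reference to any distribution. The sketch below follows the standard reachability ("Bayes ball" style) procedure; the parents-dict representation and the function name are choices of this example, not notation from the references.

```python
from collections import deque

def d_separated(parents, X, Z, Y):
    """True iff Z d-separates X from Y in the DAG given by `parents`
    (a dict mapping each node to the set of its parents).

    Walks over (node, direction) states: chains and forks pass through
    nodes outside Z, colliders pass only through Z or ancestors of Z.
    """
    X, Y, Z = set(X), set(Y), set(Z)
    children = {u: set() for u in parents}
    for u, ps in parents.items():
        for p in ps:
            children.setdefault(p, set()).add(u)

    # Ancestors of Z, including Z itself: colliders are open only here.
    anc = set(Z)
    stack = list(Z)
    while stack:
        u = stack.pop()
        for p in parents.get(u, ()):
            if p not in anc:
                anc.add(p)
                stack.append(p)

    visited = set()
    queue = deque((x, "up") for x in X)   # "up" = arrived from a child
    while queue:
        u, d = queue.popleft()
        if (u, d) in visited:
            continue
        visited.add((u, d))
        if u not in Z and u in Y:
            return False                  # an active trail reaches Y
        if d == "up" and u not in Z:
            queue.extend((p, "up") for p in parents.get(u, ()))
            queue.extend((c, "down") for c in children.get(u, ()))
        elif d == "down":
            if u not in Z:
                queue.extend((c, "down") for c in children.get(u, ()))
            if u in anc:                  # open collider: u or a descendant is in Z
                queue.extend((p, "up") for p in parents.get(u, ()))
    return True
```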

Inclusion and construction

Graph-induced and DAG-induced graphoids are both contained in probabilistic graphoids.[11] This means that for every graph G there exists a probability distribution P such that every conditional independence in P is represented in G, and vice versa. The same is true for DAGs.

However, there are probability distributions whose conditional independencies do not form graphoids and, moreover, there is no finite axiomatization of probabilistic conditional independence.[12]

Thomas Verma showed that every semi-graphoid admits a recursive construction of a DAG in which every d-separation corresponds to a valid independence statement.[13]

The construction is similar to the one used in Bayesian networks and goes as follows:

  1. Arrange the variables in some arbitrary order 1, 2,...,i,...,N and, starting with i = 1,
  2. choose for each node i a set of predecessor nodes PAi such that i is independent of all its other predecessors among 1, 2,...,i − 1, conditioned on PAi.
  3. Draw arrows from PAi to i and continue.

The DAG created by this construction will represent all the conditional independencies that follow from those used in the construction. Furthermore, every d-separation shown in the DAG will be a valid conditional independence in the graphoid used in the construction.
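A minimal sketch of this construction, written against an independence oracle indep(X, Z, Y) for the given semi-graphoid; the oracle interface, the greedy choice of a smallest qualifying parent set, and the function name are assumptions of this example.

```python
from itertools import combinations

def construct_dag(order, indep):
    """Build parent sets PA_i along a variable ordering.

    `order` is a list of variables; `indep(X, Z, Y)` returns True when the
    statement I(X, Z, Y) holds in the semi-graphoid.  For each node we pick
    a smallest set of predecessors that renders it independent of its
    remaining predecessors, then draw arrows PA_i -> i.
    """
    parents = {}
    for k, node in enumerate(order):
        preds = order[:k]
        # Try candidate parent sets by increasing size; preds itself always works.
        done = False
        for size in range(len(preds) + 1):
            for pa in combinations(preds, size):
                rest = [v for v in preds if v not in pa]
                if not rest or indep({node}, set(pa), set(rest)):
                    parents[node] = set(pa)
                    done = True
                    break
            if done:
                break
    return parents   # edges are pa -> node for each pa in parents[node]
```

The returned parents dictionary defines the arrows PAi → i, and its d-separations can then be read with a test such as the d-separation sketch given earlier.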

References

1. Pearl, Judea; Paz, Azaria (1985). "Graphoids: A Graph-Based Logic for Reasoning About Relevance Relations". http://ftp.cs.ucla.edu/pub/stat_ser/r53-L.pdf
2. Dawid, A. Philip (1979). "Conditional independence in statistical theory". Journal of the Royal Statistical Society, Series B: 1–31.
3. Spohn, Wolfgang (1980). "Stochastic independence, causal independence, and shieldability". Journal of Philosophical Logic 9: 73–99. doi:10.1007/bf00258078.
4. Pearl, Judea (1986). "Fusion, propagation and structuring in belief networks". Artificial Intelligence 29 (3): 241–288. doi:10.1016/0004-3702(86)90072-x.
5. Verma, Thomas; Pearl, Judea (1988). "Causal networks: Semantics and expressiveness". Proceedings of the 4th Workshop on Uncertainty in Artificial Intelligence: 352–359.
6. Lauritzen, S. L. (1996). Graphical Models. Oxford: Clarendon Press.
7. Geiger, Dan (1990). Graphoids: A Qualitative Framework for Probabilistic Inference. PhD dissertation, Technical Report R-142, Computer Science Department, University of California, Los Angeles.
8. Pearl, Judea (1988). Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann.
9. Paz, A.; Pearl, J.; Ur, S. (1996). "A New Characterization of Graphs Based on Interception Relations". Journal of Graph Theory 22 (2): 125–136.
10. Geiger, D. (1987). "The non-axiomatizability of dependencies in directed acyclic graphs". UCLA Computer Science Tech Report R-83. http://ftp.cs.ucla.edu/tech-report/198_-reports/870048.pdf
11. Geiger, D.; Pearl, J. (1993). "Logical and algorithmic properties of conditional independence and graphical models". The Annals of Statistics 21 (4): 2001–2021. doi:10.1214/aos/1176349407.
12. Studeny, M. (1992). "Conditional independence relations have no finite complete characterization". In Kubik, S.; Visek, J. A. (eds.), Information Theory, Statistical Decision Functions and Random Processes: Transactions of the 11th Prague Conference, Vol. B. Dordrecht: Kluwer: 377–396.
13. Verma, T.; Pearl, J. (1990). "Causal Networks: Semantics and Expressiveness". In Shachter, R.; Levitt, T. S.; Kanal, L. N. (eds.), Uncertainty in AI 4. Elsevier Science Publishers: 69–76.

Categories: Logic | Probability theory
