Conceptual clustering
Conceptual clustering is a machine learning paradigm for unsupervised classification, developed mainly during the 1980s. It is distinguished from ordinary data clustering by generating a concept description for each generated class. Most conceptual clustering methods are capable of generating hierarchical category structures; see Categorization for more information on hierarchy. Conceptual clustering is closely related to formal concept analysis, decision tree learning, and mixture model learning.

Conceptual clustering vs. data clustering

Conceptual clustering is closely related to data clustering; however, in conceptual clustering it is not only the inherent structure of the data that drives cluster formation, but also the description language available to the learner. Thus, a statistically strong grouping in the data may fail to be extracted by the learner if the prevailing concept description language is incapable of describing that particular regularity. In most implementations, the description language has been limited to feature conjunction, although in COBWEB (see the example below) the feature language is probabilistic.

List of published algorithms

A fair number of algorithms have been proposed for conceptual clustering; examples, along with more general discussions and reviews of conceptual clustering, can be found in the references below.
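To make the role of the description language concrete, the following is a minimal sketch of a conjunctive concept description, the restricted language most early implementations used. The feature names and data are invented for illustration:

```python
# A concept described by feature conjunction is a set of required
# feature values; an object belongs to the concept only if it agrees
# on every one of them. Names and data here are hypothetical.

def matches(concept, obj):
    """concept: dict feature -> required value; obj: dict feature -> value."""
    return all(obj[f] == v for f, v in concept.items())

# The conjunctive concept "has_wings AND lays_eggs":
concept = {"has_wings": 1, "lays_eggs": 1}
print(matches(concept, {"has_wings": 1, "lays_eggs": 1, "nocturnal": 0}))  # True
print(matches(concept, {"has_wings": 1, "lays_eggs": 0, "nocturnal": 1}))  # False
```

A regularity such as "exactly one of two features is present" cannot be written as a single conjunction of feature values, so a learner restricted to this language could not extract such a grouping even if it were statistically strong in the data.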
Example: A basic conceptual clustering algorithm

This section discusses the rudiments of the conceptual clustering algorithm COBWEB. There are many other algorithms using different heuristics and "category goodness" or category evaluation criteria, but COBWEB is one of the best known. The reader is referred to the bibliography for other methods.

Knowledge representation

The COBWEB data structure is a hierarchy (tree) wherein each node represents a given concept. Each concept represents a set (actually, a multiset or bag) of objects, each object being represented as a binary-valued property list. The data associated with each tree node (i.e., concept) are the integer property counts for the objects in that concept. For example (see figure), let a concept contain four objects, repeated objects being permitted, each described by three binary properties.
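The node contents described above can be sketched as follows. The class name and the four example objects are hypothetical, but the storage scheme (total counts only, with probabilities derived on demand) follows Fisher's description:

```python
# Sketch of a COBWEB-style tree node (hypothetical class and data).
# Each object is a list of binary property values; the node stores only
# how many objects it covers and a per-property count of how many of
# them possess that property, as in Fisher (1987).

class ConceptNode:
    def __init__(self, n_properties):
        self.size = 0                      # number of objects in this concept
        self.counts = [0] * n_properties   # per-property presence counts
        self.children = []                 # subordinate concepts

    def add_object(self, properties):
        """Incorporate one object (a list of 0/1 property values)."""
        self.size += 1
        for i, value in enumerate(properties):
            self.counts[i] += value

    def probability(self, i):
        """P(property i is present | this concept), computed from counts."""
        return self.counts[i] / self.size

# A concept covering four objects described by three binary properties:
node = ConceptNode(3)
for obj in [[1, 1, 0], [1, 1, 0], [1, 0, 1], [0, 1, 1]]:
    node.add_object(obj)

print(node.size)            # 4
print(node.counts)          # [3, 3, 2]
print(node.probability(0))  # 0.75
```

Note that, consistent with the text below, no object lists or conditional probabilities are stored; only the integer counts are kept, and any probability is recomputed when needed.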
The three properties might correspond, for example, to observable binary attributes of the objects. The figure accompanying the original text shows a concept tree with five concepts. The root concept contains all ten objects in the data set; its two children contain four and six objects respectively, and the six-object concept is in turn the parent of three concepts, which contain three, two, and one object, respectively. Note that each parent node (relative superordinate concept) contains all the objects contained by its child nodes (relative subordinate concepts). In Fisher's (1987) description of COBWEB, he indicates that only the total attribute counts (not conditional probabilities, and not object lists) are stored at the nodes. Any probabilities are computed from the attribute counts as needed.

The COBWEB language

The description language of COBWEB is a "language" only in a loose sense, because, being fully probabilistic, it is capable of describing any concept. However, if constraints are placed on the probability ranges which concepts may represent, then a stronger language is obtained. For example, we might permit only concepts in which at least one property probability differs from 0.5 by more than some fixed margin; under such a constraint, a concept all of whose property probabilities lie close to 0.5 could not be represented by the learner.

Evaluation criterion

In Fisher's (1987) description of COBWEB, the measure he uses to evaluate the quality of the hierarchy is Gluck and Corter's (1985) category utility (CU) measure, which he re-derives in his paper. The motivation for the measure is highly similar to the "information gain" measure introduced by Quinlan for decision tree learning. It has previously been shown that the CU for feature-based classification is the same as the mutual information between the feature variables and the class variable (Gluck & Corter, 1985; Corter & Gluck, 1992), and since this measure is much better known, we proceed here with mutual information as the measure of category "goodness".
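As a concrete illustration, category utility can be computed for a single-level partition of binary-feature objects. The data and helper names below are invented for illustration, while the formula follows Fisher (1987):

```python
# Illustrative computation of Gluck and Corter's category utility (CU)
# for a partition of objects with binary features, per Fisher (1987):
#   CU = (1/K) * sum_k P(C_k) * [ sum_f sum_v P(f=v|C_k)^2 - sum_f sum_v P(f=v)^2 ]
# The partition and data are hypothetical.

def category_utility(partition):
    """partition: list of clusters; each cluster is a list of 0/1 feature tuples."""
    all_objects = [obj for cluster in partition for obj in cluster]
    n = len(all_objects)
    n_features = len(all_objects[0])

    def sq_sum(objects):
        # sum over features and values of P(feature = value | objects)^2
        total = 0.0
        for f in range(n_features):
            p = sum(obj[f] for obj in objects) / len(objects)
            total += p ** 2 + (1 - p) ** 2
        return total

    baseline = sq_sum(all_objects)      # expected score with no class information
    score = 0.0
    for cluster in partition:
        weight = len(cluster) / n       # P(C_k)
        score += weight * (sq_sum(cluster) - baseline)
    return score / len(partition)

# Comparing two candidate partitions of the same six objects:
data = [(1, 1), (1, 1), (1, 0), (0, 0), (0, 0), (0, 1)]
coherent = [data[:3], data[3:]]        # groups objects with similar features
arbitrary = [data[0::2], data[1::2]]   # interleaved, feature-blind split
print(category_utility(coherent) > category_utility(arbitrary))  # True
```

The feature-respecting partition scores higher, reflecting the intuition that a good categorization makes feature values predictable from class membership. Fisher applies this measure within a hierarchy rather than to a single flat partition, using it to guide incremental placement of each new object.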
What we wish to evaluate is the overall utility of grouping the objects into a particular hierarchical categorization structure. Given a set of possible classification structures, we need to determine whether one is better than another.

References
Biswas, G., Weinberg, J. B., & Fisher, D. H. (1998). "Iterate: A conceptual clustering algorithm for data mining". IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews 28(2): 100–111. doi:10.1109/5326.669556

Carpineto, C., & Romano, G. (1993). "Galois: An order-theoretic approach to conceptual clustering". In Proceedings of the 10th International Conference on Machine Learning, Amherst, pp. 33–40.

Fisher, D. H. (1987). "Knowledge acquisition via incremental conceptual clustering". Machine Learning 2(2): 139–172. doi:10.1007/BF00114265

Fisher, D. H. (1996). "Iterative optimization and simplification of hierarchical clusterings". Journal of Artificial Intelligence Research 4: 147–178. doi:10.1613/jair.276

Fisher, D. H., & Langley, P. W. (1986). "Conceptual clustering and its relation to numerical taxonomy". In W. A. Gale (ed.), Artificial Intelligence and Statistics. Reading, MA: Addison-Wesley, pp. 77–116.

Fisher, D. H., & Pazzani, M. J. (1991). "Computational models of concept learning". In D. H. Fisher, M. J. Pazzani, & P. Langley (eds.), Concept Formation: Knowledge and Experience in Unsupervised Learning. San Mateo, CA: Morgan Kaufmann, ch. 1, pp. 3–43.

Gennari, J. H., Langley, P. W., & Fisher, D. H. (1989). "Models of incremental concept formation". Artificial Intelligence 40(1–3): 11–61. doi:10.1016/0004-3702(89)90046-5

Hanson, S. J., & Bauer, M. (1989). "Conceptual clustering, categorization, and polymorphy". Machine Learning 3(4): 343–372. doi:10.1007/BF00116838

Jonyer, I., Cook, D. J., & Holder, L. B. (2001). "Graph-based hierarchical conceptual clustering". Journal of Machine Learning Research 2: 19–43. doi:10.1162/153244302760185234

Lebowitz, M. (1987). "Experiments with incremental concept formation". Machine Learning 2(2): 103–138. doi:10.1007/BF00114264

Michalski, R. S. (1980). "Knowledge acquisition through conceptual clustering: A theoretical framework and an algorithm for partitioning data into conjunctive concepts". International Journal of Policy Analysis and Information Systems 4: 219–244.

Michalski, R. S., & Stepp, R. E. (1983). "Learning from observation: Conceptual clustering". In R. S. Michalski, J. G. Carbonell, & T. M. Mitchell (eds.), Machine Learning: An Artificial Intelligence Approach. Palo Alto, CA: Tioga, pp. 331–363.

Stepp, R. E., & Michalski, R. S. (1986). "Conceptual clustering: Inventing goal-oriented classifications of structured objects". In R. S. Michalski, J. G. Carbonell, & T. M. Mitchell (eds.), Machine Learning: An Artificial Intelligence Approach. Los Altos, CA: Morgan Kaufmann, pp. 471–498.

Talavera, L., & Béjar, J. (2001). "Generality-based conceptual clustering with probabilistic concepts". IEEE Transactions on Pattern Analysis and Machine Intelligence 23(2): 196–206. doi:10.1109/34.908969