Decision stump

A decision stump is a machine learning model consisting of a one-level decision tree.[1] That is, it is a decision tree with one internal node (the root) immediately connected to the terminal nodes (its leaves). A decision stump makes a prediction based on the value of just a single input feature. Decision stumps are sometimes also called 1-rules.[1]
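
To make the definition concrete, the following is a minimal Python sketch of a stump over one continuous feature; the class and parameter names are illustrative, not taken from any particular library.

    class DecisionStump:
        """A one-level decision tree: a single test at the root,
        two leaf predictions. All names here are illustrative."""

        def __init__(self, feature_index, threshold, left_label, right_label):
            self.feature_index = feature_index  # the single feature examined
            self.threshold = threshold          # the split point at the root
            self.left_label = left_label        # prediction when feature <= threshold
            self.right_label = right_label      # prediction when feature > threshold

        def predict(self, x):
            # The entire "tree" is one comparison at the root node.
            if x[self.feature_index] <= self.threshold:
                return self.left_label
            return self.right_label

    # Example: classify using feature 0 alone.
    stump = DecisionStump(feature_index=0, threshold=2.5, left_label=0, right_label=1)
    print(stump.predict([1.7, 9.9]))  # -> 0
    print(stump.predict([3.2, 0.1]))  # -> 1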

Depending on the type of the input feature, several variations are possible. For nominal features, one may build a stump that contains a leaf for each possible feature value[2][3] or a stump with two leaves, one corresponding to some chosen category and the other to all the remaining categories.[4] For binary features these two schemes are identical. A missing value may be treated as yet another category.[4]
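
Both nominal-feature schemes are easy to state in code. The following Python sketch uses hypothetical helper names; note that treating a missing value (here None) as yet another category falls out naturally in the multiway scheme.

    from collections import Counter, defaultdict

    def multiway_stump(categories, labels):
        """One leaf per observed category: each category predicts the
        majority label seen for it in training. A missing value (None)
        simply acts as one more category."""
        votes = defaultdict(Counter)
        for c, y in zip(categories, labels):
            votes[c][y] += 1
        return {c: counts.most_common(1)[0][0] for c, counts in votes.items()}

    def one_vs_rest_stump(chosen_category, label_if_chosen, label_otherwise):
        """Two leaves: one for the chosen category, one for all others."""
        def predict(c):
            return label_if_chosen if c == chosen_category else label_otherwise
        return predict

    leaves = multiway_stump(["red", "red", "blue", None], [1, 1, 0, 0])
    print(leaves)  # -> {'red': 1, 'blue': 0, None: 0}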

For continuous features, a threshold value is usually selected, and the stump contains two leaves, one for values below the threshold and one for values above it. More rarely, multiple thresholds may be chosen, in which case the stump contains three or more leaves.
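
A common way to choose the threshold is to minimize 0/1 training error over candidate split points, such as the midpoints between consecutive distinct sorted feature values. Below is a minimal Python sketch under that assumption (the function name is illustrative):

    def fit_stump_threshold(values, labels):
        """Find the threshold on one continuous feature (with at least two
        distinct values) that minimizes 0/1 training error for binary labels.
        Returns (threshold, left_label, right_label)."""
        pairs = sorted(zip(values, labels))
        xs = [v for v, _ in pairs]
        best = None  # (errors, threshold, left_label, right_label)
        for i in range(len(xs) - 1):
            if xs[i] == xs[i + 1]:
                continue  # no usable split point between equal values
            t = (xs[i] + xs[i + 1]) / 2.0
            for left, right in ((0, 1), (1, 0)):
                errors = sum((left if v <= t else right) != y for v, y in pairs)
                if best is None or errors < best[0]:
                    best = (errors, t, left, right)
        _, t, left, right = best
        return t, left, right

    print(fit_stump_threshold([1.0, 2.0, 3.0, 4.0], [0, 0, 1, 1]))  # -> (2.5, 0, 1)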

Decision stumps are often[5] used as components (called "weak learners" or "base learners") in machine learning ensemble techniques such as bagging and boosting. For example, the Viola–Jones face detection algorithm employs AdaBoost with decision stumps as weak learners.[6]
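
To illustrate the ensemble use, here is a compact, textbook-style AdaBoost sketch with decision stumps as the weak learners, assuming labels in {-1, +1}. This is a generic formulation written for illustration, not the Viola–Jones implementation.

    import math

    def fit_weighted_stump(X, y, w):
        """Exhaustively pick the stump (feature, threshold, polarity)
        with the lowest weighted 0/1 error under sample weights w."""
        best = (float("inf"), None)
        for j in range(len(X[0])):
            for t in sorted({x[j] for x in X}):
                for p in (1, -1):  # polarity: which side predicts +1
                    err = sum(wi for xi, yi, wi in zip(X, y, w)
                              if (p if xi[j] <= t else -p) != yi)
                    if err < best[0]:
                        best = (err, (j, t, p))
        return best

    def adaboost(X, y, rounds=10):
        n = len(X)
        w = [1.0 / n] * n  # start with uniform sample weights
        ensemble = []
        for _ in range(rounds):
            err, (j, t, p) = fit_weighted_stump(X, y, w)
            err = max(err, 1e-12)  # guard against division by zero
            alpha = 0.5 * math.log((1 - err) / err)
            ensemble.append((alpha, j, t, p))
            # Re-weight: misclassified points get heavier, correct ones lighter.
            w = [wi * math.exp(-alpha * yi * (p if xi[j] <= t else -p))
                 for xi, yi, wi in zip(X, y, w)]
            z = sum(w)
            w = [wi / z for wi in w]
        return ensemble

    def predict(ensemble, x):
        score = sum(a * (p if x[j] <= t else -p) for a, j, t, p in ensemble)
        return 1 if score >= 0 else -1

    # Toy usage: the ensemble separates the two classes on feature 0.
    X = [[1.0, 5.0], [2.0, 4.0], [3.0, 1.0], [4.0, 0.5]]
    y = [-1, -1, 1, 1]
    model = adaboost(X, y, rounds=5)
    print([predict(model, x) for x in X])  # -> [-1, -1, 1, 1]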

The term "decision stump" was coined in a 1992 ICML paper by Wayne Iba and Pat Langley.[7][8]

References

1. ^Holte, Robert C. (1993); Very Simple Classification Rules Perform Well on Most Commonly Used Datasets. CiteSeerX 10.1.1.67.2711
2. ^Loper, Edward L.; Bird, Steven; and Klein, Ewan (2009); Natural Language Processing with Python, Sebastopol, CA: O'Reilly, ISBN 0-596-51649-5. http://nltk.googlecode.com/svn/trunk/doc/book/ch06.html
3. ^This classifier is implemented in Weka under the name OneR (for "1-rule").
4. ^This is what has been implemented in Weka's DecisionStump classifier.
5. ^Reyzin, Lev; and Schapire, Robert E. (2006); How Boosting the Margin Can Also Boost Classifier Complexity, in ICML′06: Proceedings of the 23rd international conference on Machine Learning, pp. 753–760
6. ^Viola, Paul; and Jones, Michael J. (2004); Robust Real-Time Face Detection, International Journal of Computer Vision, 57(2), 137–154
7. ^Iba, Wayne; and Langley, Pat (1992); Induction of One-Level Decision Trees, in ML92: Proceedings of the Ninth International Conference on Machine Learning, Aberdeen, Scotland, 1–3 July 1992, San Francisco, CA: Morgan Kaufmann, pp. 233–240
8. ^Oliver, Jonathan J.; and Hand, David (1994); Averaging Over Decision Stumps, in Machine Learning: ECML-94, European Conference on Machine Learning, Catania, Italy, April 6–8, 1994, Proceedings, Lecture Notes in Computer Science (LNCS) 784, Springer, pp. 231–241, ISBN 3-540-57868-4, doi:10.1007/3-540-57868-4_61.
Quote: "These simple rules are in effect severely pruned decision trees and have been termed decision stumps [cites Iba and Langley]".