Fusion adaptive resonance theory
Fusion adaptive resonance theory (fusion ART)[1][2][3] is a generalization of the self-organizing neural networks known as adaptive resonance theory[4] for learning recognition categories (or cognitive codes) across multiple pattern channels. It unifies a number of neural network models, supports several learning paradigms, notably unsupervised learning, supervised learning, and reinforcement learning, and can be applied to domain knowledge integration, memory representation,[5] and the modelling of high-level cognition.

Overview

Fusion ART is a natural extension of the original adaptive resonance theory (ART)[4][6] models, developed by Stephen Grossberg and Gail A. Carpenter, from a single pattern field to multiple pattern channels. Whereas the original ART models perform unsupervised learning of recognition nodes in response to incoming input patterns, fusion ART learns multi-channel mappings simultaneously across multi-modal pattern channels in an online and incremental manner.

The learning model

Fusion ART employs a multi-channel architecture comprising a category field connected to a fixed number (K) of pattern channels or input fields through bidirectional conditionable pathways. The model unifies a number of network designs, most notably adaptive resonance theory (ART), the adaptive resonance associative map (ARAM),[7] and the fusion architecture for learning and cognition (FALCON),[8] developed over the past decades for a wide range of functions and applications. Given a set of multimodal patterns, each presented at a pattern channel, the fusion ART pattern encoding cycle comprises five key stages, namely code activation, code competition, activity readout, template matching, and template learning.
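The five-stage encoding cycle can be sketched in code. The following is a minimal illustration assuming fuzzy ART operations, a shared vigilance value across channels, and the usual choice, learning-rate, and vigilance parameters (alpha, beta, rho); it is a simplified sketch under those assumptions, not a reference implementation.

```python
import numpy as np

def fuzzy_and(x, w):
    # Element-wise minimum, the fuzzy AND operation used by fuzzy ART
    return np.minimum(x, w)

class FusionART:
    def __init__(self, alpha=0.01, beta=1.0, rho=0.7):
        self.alpha, self.beta, self.rho = alpha, beta, rho
        self.weights = []  # one list of per-channel templates per category node

    def _choice(self, xs, w):
        # Code activation: sum the channel-wise choice values
        return sum(fuzzy_and(x, wk).sum() / (self.alpha + wk.sum())
                   for x, wk in zip(xs, w))

    def _match(self, xs, w):
        # Template matching: every channel must satisfy the vigilance rho
        return all(fuzzy_and(x, wk).sum() / max(x.sum(), 1e-9) >= self.rho
                   for x, wk in zip(xs, w))

    def readout(self, j, channel):
        # Activity readout: read the winner's template back to a channel
        return self.weights[j][channel]

    def learn(self, xs):
        # Code competition: rank committed nodes by their choice values
        order = sorted(range(len(self.weights)),
                       key=lambda j: self._choice(xs, self.weights[j]),
                       reverse=True)
        for j in order:
            if self._match(xs, self.weights[j]):
                # Template learning: move the template toward the input
                self.weights[j] = [(1 - self.beta) * wk
                                   + self.beta * fuzzy_and(x, wk)
                                   for x, wk in zip(xs, self.weights[j])]
                return j
        # No node reaches resonance: commit a new node encoding the input
        self.weights.append([x.copy() for x in xs])
        return len(self.weights) - 1
```

With a high vigilance, dissimilar multi-channel inputs are assigned to separate category nodes, while repeated inputs resonate with an existing node.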
Types of fusion ART

The network dynamics described above can support numerous learning operations. The subsequent sections show how fusion ART can be used for a variety of traditionally distinct learning tasks.

Original ART models

With a single pattern channel, the fusion ART architecture reduces to the original ART model. Using a selected vigilance value $\rho$, an ART model learns a set of recognition nodes in response to an incoming stream of input patterns in a continuous manner. Each recognition node in the category field learns to encode a template pattern representing the key characteristics of a set of patterns. ART has been widely used in the context of unsupervised learning for discovering pattern groupings. Please refer to the selected ART literature[4][6] for a review of ART's functionalities, interpretations, and applications.

Adaptive resonance associative map

By synchronizing pattern coding across multiple pattern channels, fusion ART learns to encode associative mappings across distinct pattern spaces. A specific instance of fusion ART with two pattern channels, known as the adaptive resonance associative map (ARAM), learns multi-dimensional supervised mappings from one pattern space to another. An ARAM system consists of an input field, an output field, and a category field. Given a set of feature vectors presented at the input field with their corresponding class vectors presented at the output field, ARAM learns a predictive model (encoded by the recognition nodes in the category field) that associates combinations of key features with their respective classes.
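The supervised use of the two-channel configuration can be illustrated with a small ARAM-style sketch. It assumes fuzzy ART operations, fast learning, and one-hot class vectors; prediction applies the choice function on the feature channel only and reads the class label back from the winning category node. This is a simplified sketch under those assumptions, not the published system.

```python
import numpy as np

class ARAM:
    def __init__(self, rho_a=0.6, alpha=0.01):
        self.rho_a, self.alpha = rho_a, alpha
        self.nodes = []  # list of (feature template, label template) pairs

    def _choice(self, a, w_a):
        # Choice function on the feature channel
        return np.minimum(a, w_a).sum() / (self.alpha + w_a.sum())

    def train(self, a, b):
        # Search committed nodes in order of choice value
        for w_a, w_b in sorted(self.nodes,
                               key=lambda n: self._choice(a, n[0]),
                               reverse=True):
            feat_ok = np.minimum(a, w_a).sum() / a.sum() >= self.rho_a
            label_ok = np.minimum(b, w_b).sum() / b.sum() >= 1.0  # rho_b = 1
            if feat_ok and label_ok:
                w_a[:] = np.minimum(a, w_a)  # fast learning on features
                return
        self.nodes.append((a.copy(), b.copy()))  # commit a new category

    def predict(self, a):
        # Readout: winner by feature-channel choice value only
        w_a, w_b = max(self.nodes, key=lambda n: self._choice(a, n[0]))
        return w_b
```

Setting the label-channel vigilance to 1 prevents patterns with conflicting classes from being merged into one category node.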
Fuzzy ARAM, based on fuzzy ART operations, has been successfully applied to numerous machine learning tasks, including personal profiling,[9] document classification,[10] personalized content management,[11] and DNA gene expression analysis.[12] In many benchmark experiments, ARAM has demonstrated predictive performance superior to that of many state-of-the-art machine learning systems, including C4.5, backpropagation neural networks, k-nearest neighbour, and support vector machines.

Fusion ART with domain knowledge

During learning, fusion ART formulates recognition categories of input patterns across multiple channels. The knowledge that fusion ART discovers during learning is compatible with symbolic rule-based representation. Specifically, the recognition categories learned by the category nodes are compatible with a class of IF-THEN rules that map a set of input attributes (antecedents) in one pattern channel to a disjoint set of output attributes (consequents) in another channel. Owing to this compatibility, at any point of the incremental learning process, instructions in the form of IF-THEN rules can be readily translated into the recognition categories of a fusion ART system. The rules are conjunctive in the sense that the attributes in the IF clause and in the THEN clause have an AND relationship. Augmenting a fusion ART network with domain knowledge through explicit instructions serves to improve learning efficiency and predictive accuracy. The fusion ART rule insertion strategy is similar to that used in Cascade ARTMAP, a generalization of ARTMAP that performs domain knowledge insertion, refinement, and extraction.[13] For direct knowledge insertion, the IF and THEN clauses of each instruction (rule) are translated into a pair of vectors A and B respectively. The vector pairs derived are then used as training patterns for insertion into a fusion ART network.
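The translation of an IF-THEN rule into a vector pair (A, B) can be sketched as follows. The attribute vocabularies and the example rule here are purely illustrative, not taken from the cited systems: each antecedent and consequent attribute is mapped to one position in a fixed binary vector.

```python
# Illustrative attribute vocabularies (hypothetical, for demonstration only)
IF_ATTRS = ["fever", "cough", "fatigue"]   # antecedent attributes
THEN_ATTRS = ["flu", "cold"]               # consequent attributes

def rule_to_vectors(antecedents, consequents):
    """Encode a conjunctive IF-THEN rule as a pair of binary vectors (A, B)."""
    a = [1.0 if name in antecedents else 0.0 for name in IF_ATTRS]
    b = [1.0 if name in consequents else 0.0 for name in THEN_ATTRS]
    return a, b

# "IF fever AND cough THEN flu" becomes:
A, B = rule_to_vectors({"fever", "cough"}, {"flu"})
# A == [1.0, 1.0, 0.0], B == [1.0, 0.0]
```

Each (A, B) pair is then presented as a training pattern, so the rule is absorbed into the network in the same way as any other input.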
During rule insertion, the vigilance parameters are set to 1 to ensure that each distinct rule is encoded by one category node. For details on integrating domain knowledge into fusion ART, please refer to the paper by Teng et al.[14]

Fusion architecture for learning and cognition (FALCON)

Reinforcement learning is a paradigm wherein an autonomous system learns to adjust its behaviour based on reinforcement signals received from the environment. An instance of fusion ART, known as FALCON (fusion architecture for learning and cognition), learns mappings simultaneously across multi-modal input patterns, involving states, actions, and rewards, in an online and incremental manner. Compared with other ART-based reinforcement learning systems, FALCON presents a truly integrated solution in the sense that there is no implementation of a separate reinforcement learning module or Q-value table. Using competitive coding as the underlying principle of computation, the network dynamics encompasses several learning paradigms, including unsupervised learning, supervised learning, and reinforcement learning. FALCON employs a three-channel architecture, comprising a category field and three pattern fields, namely a sensory field for representing current states, a motor field for representing actions, and a feedback field for representing reward values. A class of FALCON networks, known as TD-FALCON,[8] incorporates temporal difference (TD) methods to estimate and learn the value function Q(s,a), which indicates the goodness of taking a certain action a in a given state s. The general sense-act-learn algorithm for TD-FALCON is summarized as follows. Given the current state s, the FALCON network is used to predict the value of performing each available action a in the action set A based on the corresponding state and action vectors. The value functions are then processed by an action selection strategy (also known as a policy) to select an action.
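One step of the sense-act-learn cycle, including the temporal difference update of the value estimate, can be sketched as below. For brevity the value estimates are held in a plain dictionary here; in TD-FALCON itself they are stored by the fusion ART network rather than a separate Q-table. The environment interface (`env_step`) and the epsilon-greedy policy are illustrative stand-ins, and the update shown is a standard Q-learning-style TD formula.

```python
import random

def td_falcon_step(Q, state, actions, env_step,
                   epsilon=0.1, alpha=0.5, gamma=0.9):
    # Action selection policy: epsilon-greedy over the predicted values
    if random.random() < epsilon:
        action = random.choice(actions)
    else:
        action = max(actions, key=lambda a: Q.get((state, a), 0.0))
    # Act in the environment and observe the reward and next state
    reward, next_state = env_step(state, action)
    # TD formula: compute a new estimate of the Q-value for the chosen
    # action; this estimate serves as the teaching signal (reward vector R)
    best_next = max((Q.get((next_state, a), 0.0) for a in actions),
                    default=0.0)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return next_state
```

Repeating this step lets the value estimates converge toward the discounted returns observed from the environment.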
Upon receiving feedback (if any) from the environment after performing the action, a TD formula is used to compute a new estimate of the Q-value of performing the chosen action in the current state. The new Q-value is then used as the teaching signal (represented as the reward vector R) for FALCON to learn the association of the current state and the chosen action with the estimated value.

References

1. Asfour, Y. R., Carpenter, G. A., Grossberg, S. & Lesher, G. W. (1993). Fusion ARTMAP: an adaptive fuzzy network for multi-channel classification. In Proceedings of the Third International Conference on Industrial Fuzzy Control and Intelligent Systems (IFIS).
2. Harrison, R. F. & Borges, J. M. (1995). Fusion ARTMAP: Clarification, Implementation and Developments. Research Report No. 589, Department of Automatic Control and Systems Engineering, The University of Sheffield.
3. Tan, A.-H., Carpenter, G. A. & Grossberg, S. (2007). Intelligence Through Interaction: Towards a Unified Theory for Learning. In D. Liu et al. (Eds.), Proceedings of the International Symposium on Neural Networks (ISNN'07), LNCS 4491, Part I, pp. 1098–1107.
4. Carpenter, G. A. & Grossberg, S. (2003). Adaptive Resonance Theory. In Michael A. Arbib (Ed.), The Handbook of Brain Theory and Neural Networks, Second Edition (pp. 87–90). Cambridge, MA: MIT Press.
5. Wang, W.-W. & Tan, A.-H. (2016). Semantic Memory Modelling and Memory Interaction in Learning Agents. IEEE Transactions on Systems, Man and Cybernetics: Systems, in press.
6. Grossberg, S. (1987). Competitive learning: From interactive activation to adaptive resonance. Cognitive Science, 11, 23–63.
7. Tan, A.-H. (1995). Adaptive Resonance Associative Map. Neural Networks, 8(3), 437–446. doi:10.1016/0893-6080(94)00092-z. http://www3.ntu.edu.sg/home/ASAHTan/Papers/ARAM%20NN95.pdf
8. Tan, A.-H., Lu, N. & Xiao, D. (2008). Integrating Temporal Difference Methods and Self-Organizing Neural Networks for Reinforcement Learning with Delayed Evaluative Feedback. IEEE Transactions on Neural Networks, 19(2), 230–244. http://www3.ntu.edu.sg/home/ASAHTan/Papers/2008/TD-FALCON%20TNN%2004359212.pdf
9. Tan, A.-H. & Soon, H.-S. (2000). In Proceedings of the Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD'00), LNAI 1805, pp. 173–176.
10. He, J., Tan, A.-H. & Tan, C.-L. (2003). On Machine Learning Methods for Chinese Document Classification. Applied Intelligence, 18(3), 311–322. http://www3.ntu.edu.sg/home/ASAHTan/Papers/TC%20APIN03.pdf
11. Tan, A.-H., Ong, H.-L., Pan, H., Ng, J. & Li, Q.-X. (2004). Towards Personalized Web Intelligence. Knowledge and Information Systems, 6(5), 595–616. doi:10.1007/s10115-003-0130-9. http://www3.ntu.edu.sg/home/ASAHTan/Papers/TC%20APIN03.pdf
12. Tan, A.-H. & Pan (2005). Predictive Neural Networks for Gene Expression Data Analysis. Neural Networks, 18(3), 297–306. doi:10.1016/j.neunet.2005.01.003. http://www3.ntu.edu.sg/home/ASAHTan/Papers/Predictive%20NN%20for%20Gene%20Expression%20Analysis.pdf
13. Tan, A.-H. (1997). Cascade ARTMAP: Integrating Neural Computation and Symbolic Knowledge Processing. IEEE Transactions on Neural Networks. http://www3.ntu.edu.sg/home/ASAHTan/Papers/Cascade%20ARTMAP-TNN97.pdf
14. Teng, T.-H., Tan, A.-H. & Zurada, J. (2015). Self-Organizing Neural Networks Integrating Domain Knowledge and Reinforcement Learning. IEEE Transactions on Neural Networks and Learning Systems, 26(5), 889–902. doi:10.1109/tnnls.2014.2327636. http://www.ntu.edu.sg/home/asahtan/Papers/2014/Self-Organizing%20Neural%20Network%20Integrating%20Domain%20Knowledge%20and%20Reinforcement%20Learning%20-%20TNNLS%202014.pdf