Entry | Recurrent neural network |
Definition |
A recurrent neural network (RNN) is a class of artificial neural network where connections between nodes form a directed graph along a temporal sequence. This allows it to exhibit temporal dynamic behavior. Unlike feedforward neural networks, RNNs can use their internal state (memory) to process sequences of inputs. This makes them applicable to tasks such as unsegmented, connected handwriting recognition[1] or speech recognition.[2][3]

The term "recurrent neural network" is used indiscriminately to refer to two broad classes of networks with a similar general structure, one with finite impulse response and the other with infinite impulse response. Both classes of networks exhibit temporal dynamic behavior.[4] A finite impulse recurrent network is a directed acyclic graph that can be unrolled and replaced with a strictly feedforward neural network, while an infinite impulse recurrent network is a directed cyclic graph that cannot be unrolled.

Both finite impulse and infinite impulse recurrent networks can have additional stored state, and the storage can be under direct control of the neural network. The storage can also be replaced by another network or graph if that incorporates time delays or has feedback loops. Such controlled states are referred to as gated state or gated memory, and are part of long short-term memory networks (LSTMs) and gated recurrent units.

History

Recurrent neural networks were based on David Rumelhart's work in 1986.[5] Hopfield networks were introduced by John Hopfield in 1982. In 1993, a neural history compressor system solved a "Very Deep Learning" task that required more than 1000 subsequent layers in an RNN unfolded in time.[6]

LSTM

Long short-term memory (LSTM) networks were invented by Hochreiter and Schmidhuber in 1997 and set accuracy records in multiple application domains.[7]

Around 2007, LSTM started to revolutionize speech recognition, outperforming traditional models in certain speech applications.[8] In 2009, a Connectionist Temporal Classification (CTC)-trained LSTM network was the first RNN to win pattern recognition contests when it won several competitions in connected handwriting recognition.[9] In 2014, the Chinese search company Baidu used CTC-trained RNNs to break the Switchboard Hub5'00 speech recognition benchmark without using any traditional speech processing methods.[10]

LSTM also improved large-vocabulary speech recognition[2][3] and text-to-speech synthesis[11] and was used in Google Android.[9][12] In 2015, Google's speech recognition reportedly experienced a dramatic performance jump of 49% through CTC-trained LSTM, which was used by Google voice search.[13]

LSTM broke records for improved machine translation,[14] language modeling[15] and multilingual language processing.[16] LSTM combined with convolutional neural networks (CNNs) improved automatic image captioning.[17]

Architectures

RNNs come in many variants.

Fully recurrent

Basic RNNs are a network of neuron-like nodes organized into successive "layers": each node in a given layer is connected with a directed (one-way) connection to every node in the next successive layer. Each node (neuron) has a time-varying real-valued activation. Each connection (synapse) has a modifiable real-valued weight. Nodes are either input nodes (receiving data from outside the network), output nodes (yielding results), or hidden nodes (that modify the data en route from input to output).
For supervised learning in discrete-time settings, sequences of real-valued input vectors arrive at the input nodes, one vector at a time. At any given time step, each non-input unit computes its current activation (result) as a nonlinear function of the weighted sum of the activations of all units that connect to it. Supervisor-given target activations can be supplied for some output units at certain time steps. For example, if the input sequence is a speech signal corresponding to a spoken digit, the final target output at the end of the sequence may be a label classifying the digit.

In reinforcement learning settings, no teacher provides target signals. Instead, a fitness function or reward function is occasionally used to evaluate the RNN's performance, which influences its input stream through output units connected to actuators that affect the environment. This might be used to play a game in which progress is measured by the number of points won.

Each sequence produces an error as the sum of the deviations of all target signals from the corresponding activations computed by the network. For a training set of numerous sequences, the total error is the sum of the errors of all individual sequences.

Elman networks and Jordan networks

An Elman network is a three-layer network with the addition of a set of "context units". The middle (hidden) layer is connected to these context units with a fixed weight of one.[18] At each time step, the input is fed forward and a learning rule is applied. The fixed back-connections save a copy of the previous values of the hidden units in the context units (since they propagate over the connections before the learning rule is applied). Thus the network can maintain a sort of state, allowing it to perform tasks such as sequence prediction that are beyond the power of a standard multilayer perceptron.

Jordan networks are similar to Elman networks. The context units are fed from the output layer instead of the hidden layer. The context units in a Jordan network are also referred to as the state layer. They have a recurrent connection to themselves.[18]

Elman and Jordan networks are also known as "simple recurrent networks" (SRN).
Variables and functions

In the common formulation, an Elman network[19] computes, at each time step t,

$h_t = \sigma_h(W_h x_t + U_h h_{t-1} + b_h)$
$y_t = \sigma_y(W_y h_t + b_y)$

while a Jordan network[20] feeds back the previous output rather than the previous hidden state:

$h_t = \sigma_h(W_h x_t + U_h y_{t-1} + b_h)$
$y_t = \sigma_y(W_y h_t + b_y)$

where $x_t$ is the input vector, $h_t$ the hidden layer vector, $y_t$ the output vector, $W$, $U$ and $b$ are the parameter matrices and vector, and $\sigma_h$ and $\sigma_y$ are activation functions.
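The update above can be made concrete with a short sketch. The following NumPy code (function and variable names are illustrative, not taken from any particular library) runs one Elman-style forward pass over a sequence, carrying the previous hidden state forward as the context:

```python
import numpy as np

def elman_forward(xs, Wh, Uh, bh, Wy, by):
    """Run an Elman (simple recurrent) network over a sequence of input vectors.

    xs : list of input vectors x_t
    Wh, Uh, bh : input-to-hidden weights, context (hidden-to-hidden) weights, hidden bias
    Wy, by : hidden-to-output weights and bias
    """
    h = np.zeros(bh.shape)                  # context units start at zero
    ys = []
    for x in xs:
        # new hidden state depends on the current input and the saved context (previous h)
        h = np.tanh(Wh @ x + Uh @ h + bh)
        # output is read off the hidden state
        ys.append(np.tanh(Wy @ h + by))
    return ys, h

# toy usage: 3-dimensional inputs, 5 hidden units, 2 outputs
rng = np.random.default_rng(0)
Wh, Uh, bh = rng.standard_normal((5, 3)), rng.standard_normal((5, 5)), np.zeros(5)
Wy, by = rng.standard_normal((2, 5)), np.zeros(2)
ys, h_final = elman_forward([rng.standard_normal(3) for _ in range(4)], Wh, Uh, bh, Wy, by)
```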
Hopfield

Main article: Hopfield network

The Hopfield network is an RNN in which all connections are symmetric. It requires stationary inputs and is thus not a general RNN, as it does not process sequences of patterns. It is guaranteed to converge. If the connections are trained using Hebbian learning, then the Hopfield network can perform as robust content-addressable memory, resistant to connection alteration.

Bidirectional associative memory

Main article: Bidirectional associative memory

Introduced by Bart Kosko,[21] a bidirectional associative memory (BAM) network is a variant of a Hopfield network that stores associative data as a vector. The bi-directionality comes from passing information through a matrix and its transpose. Typically, bipolar encoding is preferred to binary encoding of the associative pairs. Recently, stochastic BAM models using Markov stepping were optimized for increased network stability and relevance to real-world applications.[22] A BAM network has two layers, either of which can be driven as an input to recall an association and produce an output on the other layer.[23]

Echo state

Main article: Echo state network

The echo state network (ESN) has a sparsely connected random hidden layer. The weights of output neurons are the only part of the network that can change (be trained). ESNs are good at reproducing certain time series.[24] A variant for spiking neurons is known as a liquid state machine.[25]

Independent RNN (IndRNN)

The independently recurrent neural network (IndRNN)[26] addresses the gradient vanishing and exploding problems in the traditional fully connected RNN. Each neuron in one layer receives only its own past state as context information (instead of full connectivity to all other neurons in the layer), so neurons are independent of each other's history. The gradient backpropagation can be regulated to avoid gradient vanishing and exploding in order to keep long- or short-term memory. The cross-neuron information is explored in the following layers. IndRNNs can be robustly trained with non-saturated nonlinear functions such as ReLU, and deep networks can be trained using skip connections.

Recursive

Main article: Recursive neural network

A recursive neural network[27] is created by applying the same set of weights recursively over a differentiable graph-like structure by traversing the structure in topological order. Such networks are typically also trained by the reverse mode of automatic differentiation.[28][29] They can process distributed representations of structure, such as logical terms. A special case of recursive neural networks is the RNN whose structure corresponds to a linear chain. Recursive neural networks have been applied to natural language processing.[30] The Recursive Neural Tensor Network uses a tensor-based composition function for all nodes in the tree.[31]

Neural history compressor

The neural history compressor is an unsupervised stack of RNNs.[32] At the input level, it learns to predict its next input from the previous inputs. Only unpredictable inputs of some RNN in the hierarchy become inputs to the next higher-level RNN, which therefore recomputes its internal state only rarely. Each higher-level RNN thus studies a compressed representation of the information in the RNN below. This is done such that the input sequence can be precisely reconstructed from the representation at the highest level.
The system effectively minimizes the description length or the negative logarithm of the probability of the data.[33] Given much learnable predictability in the incoming data sequence, the highest-level RNN can use supervised learning to easily classify even deep sequences with long intervals between important events.

It is possible to distill the RNN hierarchy into two RNNs: the "conscious" chunker (higher level) and the "subconscious" automatizer (lower level).[32] Once the chunker has learned to predict and compress inputs that are unpredictable by the automatizer, the automatizer can be forced in the next learning phase to predict or imitate, through additional units, the hidden units of the more slowly changing chunker. This makes it easy for the automatizer to learn appropriate, rarely changing memories across long intervals. In turn this helps the automatizer to make many of its once unpredictable inputs predictable, such that the chunker can focus on the remaining unpredictable events.[32]

A generative model partially overcame the vanishing gradient problem[34] of automatic differentiation or backpropagation in neural networks in 1992. In 1993, such a system solved a "Very Deep Learning" task that required more than 1000 subsequent layers in an RNN unfolded in time.[6]

Second order RNNs

Second order RNNs use higher-order weights instead of the standard weights, and states can be a product. This allows a direct mapping to a finite state machine in training, stability, and representation.[35][36] Long short-term memory is an example of this but has no such formal mappings or proof of stability.

Long short-term memory

Main article: Long short-term memory

Long short-term memory (LSTM) is a deep learning system that avoids the vanishing gradient problem. LSTM is normally augmented by recurrent gates called "forget gates".[37] LSTM prevents backpropagated errors from vanishing or exploding.[34] Instead, errors can flow backwards through unlimited numbers of virtual layers unfolded in space. That is, LSTM can learn tasks[38] that require memories of events that happened thousands or even millions of discrete time steps earlier. Problem-specific LSTM-like topologies can be evolved.[39] LSTM works even given long delays between significant events and can handle signals that mix low- and high-frequency components.

Many applications use stacks of LSTM RNNs[40] and train them by Connectionist Temporal Classification (CTC)[41] to find an RNN weight matrix that maximizes the probability of the label sequences in a training set, given the corresponding input sequences. CTC achieves both alignment and recognition. Unlike previous models based on hidden Markov models (HMMs) and similar concepts, LSTM can learn to recognize context-sensitive languages.[42]

Gated recurrent unit

Main article: Gated recurrent unit

Gated recurrent units (GRUs) are a gating mechanism in recurrent neural networks introduced in 2014. They are used in the full form and in several simplified variants.[43][44] Their performance on polyphonic music modeling and speech signal modeling was found to be similar to that of long short-term memory.[45] They have fewer parameters than LSTM, as they lack an output gate.[46]

Bi-directional

Bi-directional RNNs use a finite sequence to predict or label each element of the sequence based on the element's past and future contexts. This is done by concatenating the outputs of two RNNs, one processing the sequence from left to right, the other from right to left.
The combined outputs are the predictions of the teacher-given target signals. This technique has proved especially useful when combined with LSTM RNNs.[47][48]

Continuous-time

A continuous-time recurrent neural network (CTRNN) uses a system of ordinary differential equations to model the effects on a neuron of the incoming spike train. For a neuron $i$ in the network with action potential $y_i$, the rate of change of activation is given by:

$$\tau_i \dot{y}_i = -y_i + \sum_{j=1}^{n} w_{ji}\,\sigma(y_j - \Theta_j) + I_i(t)$$

Where:
- $\tau_i$: time constant of the postsynaptic node
- $y_i$: activation of the postsynaptic node
- $\dot{y}_i$: rate of change of activation of the postsynaptic node
- $w_{ji}$: weight of the connection from presynaptic node $j$ to postsynaptic node $i$
- $\sigma(x)$: sigmoid of $x$, e.g. $\sigma(x) = 1/(1 + e^{-x})$
- $y_j$: activation of the presynaptic node
- $\Theta_j$: bias of the presynaptic node
- $I_i(t)$: input (if any) to the node
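In practice the differential equation above is simulated by discretizing it. The sketch below (a forward-Euler step with an illustrative step size and hypothetical names, not a definitive implementation) shows one such discretization, which is also how the discrete-time view described next arises:

```python
import numpy as np

def ctrnn_step(y, tau, W, theta, I, dt=0.01):
    """One forward-Euler step of tau_i * dy_i/dt = -y_i + sum_j w_ji * sigma(y_j - theta_j) + I_i."""
    sigma = 1.0 / (1.0 + np.exp(-(y - theta)))    # presynaptic firing rates
    dydt = (-y + W.T @ sigma + I) / tau           # W[j, i] = weight from node j to node i
    return y + dt * dydt

# toy usage: 3 fully connected nodes, constant external input to node 0
rng = np.random.default_rng(1)
y = np.zeros(3)
tau, theta = np.ones(3), np.zeros(3)
W = rng.standard_normal((3, 3))
I = np.array([0.5, 0.0, 0.0])
for _ in range(1000):
    y = ctrnn_step(y, tau, W, theta, I)
```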
CTRNNs have been applied to evolutionary robotics, where they have been used to address vision,[49] co-operation,[50] and minimal cognitive behaviour.[51]

Note that, by the Shannon sampling theorem, discrete-time recurrent neural networks can be viewed as continuous-time recurrent neural networks in which the differential equations have been transformed into equivalent difference equations. This transformation can be thought of as occurring after the post-synaptic node activation functions have been low-pass filtered but prior to sampling.

Hierarchical

Hierarchical RNNs connect their neurons in various ways to decompose hierarchical behavior into useful subprograms.[32][52]

Recurrent multilayer perceptron network

Generally, a recurrent multilayer perceptron (RMLP) network consists of cascaded subnetworks, each of which contains multiple layers of nodes. Each of these subnetworks is feed-forward except for the last layer, which can have feedback connections. Each of these subnets is connected only by feed-forward connections.[53]

Multiple timescales model

A multiple timescales recurrent neural network (MTRNN) is a neural-based computational model that can simulate the functional hierarchy of the brain through self-organization that depends on the spatial connections between neurons and on distinct types of neuron activities, each with distinct time properties.[54][55] With such varied neuronal activities, continuous sequences of any set of behaviors are segmented into reusable primitives, which in turn are flexibly integrated into diverse sequential behaviors. The biological plausibility of such a hierarchy was discussed in the memory-prediction theory of brain function by Hawkins in his book On Intelligence.

Neural Turing machines

Main article: Neural Turing machine

Neural Turing machines (NTMs) are a method of extending recurrent neural networks by coupling them to external memory resources with which they can interact through attentional processes. The combined system is analogous to a Turing machine or von Neumann architecture but is differentiable end-to-end, allowing it to be efficiently trained with gradient descent.[56]

Differentiable neural computer

Main article: Differentiable neural computer

Differentiable neural computers (DNCs) are an extension of Neural Turing machines, allowing for the usage of fuzzy amounts of each memory address and a record of chronology.

Neural network pushdown automata

Neural network pushdown automata (NNPDA) are similar to NTMs, but tapes are replaced by analogue stacks that are differentiable and that are trained. In this way, they are similar in complexity to recognizers of context-free grammars (CFGs).[57]

Training

Gradient descent

Main article: Gradient descent

Gradient descent is a first-order iterative optimization algorithm for finding the minimum of a function. In neural networks, it can be used to minimize the error term by changing each weight in proportion to the derivative of the error with respect to that weight, provided the nonlinear activation functions are differentiable. Various methods for doing so were developed in the 1980s and early 1990s by Werbos, Williams, Robinson, Schmidhuber, Hochreiter, Pearlmutter and others.

The standard method is called "backpropagation through time" or BPTT, and is a generalization of back-propagation for feed-forward networks.[58][59] Like that method, it is an instance of automatic differentiation in the reverse accumulation mode of Pontryagin's minimum principle.
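As a rough illustration of BPTT (a hand-rolled sketch for a small tanh RNN with a squared-error loss on the final output, not any library's implementation), the forward pass unrolls the network over the sequence and stores the activations, and the backward pass accumulates gradients through the stored states:

```python
import numpy as np

def bptt_grads(xs, target, Wx, Wh, Wy):
    """Unroll a tanh RNN over xs, compute squared error on the final output,
    and return gradients for Wx, Wh, Wy via backpropagation through time."""
    hs = [np.zeros(Wh.shape[0])]                 # stored hidden states (h_0 = 0)
    for x in xs:                                 # forward pass: unroll in time
        hs.append(np.tanh(Wx @ x + Wh @ hs[-1]))
    y = Wy @ hs[-1]
    dy = y - target                              # gradient of 0.5 * ||y - target||^2

    dWx, dWh, dWy = np.zeros_like(Wx), np.zeros_like(Wh), np.zeros_like(Wy)
    dWy += np.outer(dy, hs[-1])
    dh = Wy.T @ dy                               # gradient flowing into the last hidden state
    for t in reversed(range(len(xs))):           # backward pass through the unrolled steps
        dpre = dh * (1.0 - hs[t + 1] ** 2)       # backprop through tanh
        dWx += np.outer(dpre, xs[t])
        dWh += np.outer(dpre, hs[t])
        dh = Wh.T @ dpre                         # pass the gradient to the previous time step
    return dWx, dWh, dWy
```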
A more computationally expensive online variant is called "Real-Time Recurrent Learning" or RTRL,[60][61] which is an instance of automatic differentiation in the forward accumulation mode with stacked tangent vectors. Unlike BPTT, this algorithm is local in time but not local in space.

In this context, local in space means that a unit's weight vector can be updated using only information stored in the connected units and the unit itself, such that the update complexity of a single unit is linear in the dimensionality of the weight vector. Local in time means that the updates take place continually (on-line) and depend only on the most recent time step rather than on multiple time steps within a given time horizon as in BPTT. Biological neural networks appear to be local with respect to both time and space.[62][63]

For recursively computing the partial derivatives, RTRL has a time complexity of O(number of hidden units × number of weights) per time step for computing the Jacobian matrices, while BPTT only takes O(number of weights) per time step, at the cost of storing all forward activations within the given time horizon.[64] An online hybrid between BPTT and RTRL with intermediate complexity exists,[65][66] along with variants for continuous time.[67]

A major problem with gradient descent for standard RNN architectures is that error gradients vanish exponentially quickly with the size of the time lag between important events.[34][68] LSTM combined with a BPTT/RTRL hybrid learning method attempts to overcome these problems.[7] This problem is also solved in the independently recurrent neural network (IndRNN)[26] by reducing the context of a neuron to its own past state; the cross-neuron information can then be explored in the following layers. Memories of different ranges, including long-term memory, can be learned without the gradient vanishing and exploding problem.

The online algorithm called causal recursive backpropagation (CRBP) implements and combines the BPTT and RTRL paradigms for locally recurrent networks.[69] It works with the most general locally recurrent networks. The CRBP algorithm can minimize the global error term. This fact improves the stability of the algorithm, providing a unifying view of gradient calculation techniques for recurrent networks with local feedback.

One approach to the computation of gradient information in RNNs with arbitrary architectures is based on signal-flow graphs diagrammatic derivation.[70] It uses the BPTT batch algorithm, based on Lee's theorem for network sensitivity calculations.[71] It was proposed by Wan and Beaufays, while its fast online version was proposed by Campolucci, Uncini and Piazza.[71]

Global optimization methods

Training the weights in a neural network can be modeled as a non-linear global optimization problem. A target function can be formed to evaluate the fitness or error of a particular weight vector as follows: first, the weights in the network are set according to the weight vector; next, the network is evaluated against the training sequence. Typically, the sum-squared difference between the predictions and the target values specified in the training sequence is used to represent the error of the current weight vector. Arbitrary global optimization techniques may then be used to minimize this target function.
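A minimal sketch of such a target function (all names hypothetical, assuming a simple tanh RNN whose weights are packed into a single vector): the candidate weight vector is unpacked into the network, the network is run over the training sequence, and the sum of squared differences to the targets is returned.

```python
import numpy as np

def target_function(weight_vector, xs, targets, shapes):
    """Error of a candidate weight vector on one training sequence
    (sum of squared differences between predictions and targets)."""
    # 1. set the network weights according to the weight vector
    mats, idx = [], 0
    for shape in shapes:                           # e.g. [(H, D), (H, H), (O, H)]
        size = int(np.prod(shape))
        mats.append(weight_vector[idx:idx + size].reshape(shape))
        idx += size
    Wx, Wh, Wy = mats

    # 2. evaluate the network against the training sequence
    h = np.zeros(Wh.shape[0])
    error = 0.0
    for x, t in zip(xs, targets):
        h = np.tanh(Wx @ x + Wh @ h)
        error += float(np.sum((Wy @ h - t) ** 2))  # sum-squared difference
    return error
```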
The most common global optimization method for training RNNs is genetic algorithms, especially in unstructured networks.[72][73][74]

Initially, the genetic algorithm is encoded with the neural network weights in a predefined manner, where one gene in the chromosome represents one weight link. The whole network is represented as a single chromosome. The fitness function is evaluated as follows: each weight encoded in the chromosome is assigned to the corresponding weight link of the network, the network is presented with the training set and propagates the input signals forward, and the resulting mean-squared error is returned to the fitness function, which drives the genetic selection process.
Many chromosomes make up the population; therefore, many different neural networks are evolved until a stopping criterion is satisfied. Common stopping schemes halt the evolution when the neural network has learnt a certain percentage of the training data, when the minimum value of the mean-squared error is satisfied, or when the maximum number of training generations has been reached.
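The loop described above might look like the following toy sketch (a deliberately simple genetic algorithm, not any particular neuroevolution package; the fitness callback is assumed to return the reciprocal of the mean-squared error, matching the criterion discussed next):

```python
import numpy as np

def evolve_rnn_weights(fitness, n_weights, pop_size=50, max_generations=200,
                       target_fitness=100.0, mutation_scale=0.1):
    """Toy genetic algorithm: each chromosome is a full weight vector of the network."""
    rng = np.random.default_rng(0)
    population = rng.standard_normal((pop_size, n_weights))
    for generation in range(max_generations):             # stopping: maximum generations reached
        scores = np.array([fitness(c) for c in population])
        best = population[np.argmax(scores)]
        if scores.max() >= target_fitness:                 # stopping: error is small enough
            break
        # selection: keep the better half, then refill by mutating randomly chosen parents
        parents = population[np.argsort(scores)][pop_size // 2:]
        children = parents[rng.integers(0, len(parents), pop_size - len(parents))]
        children = children + mutation_scale * rng.standard_normal(children.shape)
        population = np.vstack([parents, children])
    return best

# fitness = reciprocal of the mean-squared error of the decoded network, e.g. (hypothetical):
# best = evolve_rnn_weights(
#     lambda w: 1.0 / (1e-9 + target_function(w, xs, targets, shapes) / len(xs)), n_weights)
```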
The stopping criterion is evaluated by the fitness function as it gets the reciprocal of the mean-squared error from each network during training. Therefore, the goal of the genetic algorithm is to maximize the fitness function, reducing the mean-squared error.

Other global (and/or evolutionary) optimization techniques may be used to seek a good set of weights, such as simulated annealing or particle swarm optimization.

Related fields and models

RNNs may behave chaotically. In such cases, dynamical systems theory may be used for analysis.

Recurrent neural networks are in fact recursive neural networks with a particular structure: that of a linear chain. Whereas recursive neural networks operate on any hierarchical structure, combining child representations into parent representations, recurrent neural networks operate on the linear progression of time, combining the previous time step and a hidden representation into the representation for the current time step.

In particular, RNNs can appear as nonlinear versions of finite impulse response and infinite impulse response filters and also as a nonlinear autoregressive exogenous model (NARX).[75]

Libraries

RNN implementations are available in most modern deep-learning frameworks, including TensorFlow, Keras, PyTorch, MXNet, Theano, and Torch.
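For example, with the Keras API bundled in TensorFlow 2.x, a small LSTM-based sequence classifier can be assembled in a few lines (a representative usage sketch; the layer sizes and the binary task are arbitrary choices, not prescribed by any source):

```python
import tensorflow as tf

# Sequences of 10-dimensional feature vectors of arbitrary length -> one score per sequence.
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(None, 10)),   # recurrent layer with 64 units
    tf.keras.layers.Dense(1, activation="sigmoid"),     # binary prediction from the final state
])
model.compile(optimizer="adam", loss="binary_crossentropy")
```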
Applications

Applications of recurrent neural networks include:
- Robot control[77]
- Time series prediction[78]
- Speech recognition[79][80][81]
- Anomaly detection in time series[82]
- Rhythm learning[83]
- Music composition[84]
- Grammar learning[85][86][87]
- Handwriting recognition[88][89]
- Human action recognition[90]
- Protein homology detection[91]
- Predicting the subcellular localization of proteins[92]
- Prediction in business process management[93]
- Prediction of clinical events in medical care pathways[94]
References1. ^{{cite journal | last1 = Graves | first1 = A. | authorlink6 = Jürgen Schmidhuber | last2 = Liwicki | first2 = M. | last3 = Fernandez | first3 = S. | last4 = Bertolami | first4 = R. | last5 = Bunke | first5 = H. | last6 = Schmidhuber | first6 = J. | title = A Novel Connectionist System for Improved Unconstrained Handwriting Recognition | url = http://www.idsia.ch/~juergen/tpami_2008.pdf | journal = IEEE Transactions on Pattern Analysis and Machine Intelligence | volume = 31 | issue = 5| pages = 855–868 | year = 2009 | doi=10.1109/tpami.2008.137| pmid = 19299860 | citeseerx = 10.1.1.139.4502 }} 2. ^1 {{Cite web|url=https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/43905.pdf|title=Long Short-Term Memory recurrent neural network architectures for large scale acoustic modeling|last=Sak|first=Hasim|last2=Senior|first2=Andrew|date=2014|website=|archive-url=|archive-date=|dead-url=|access-date=|last3=Beaufays|first3=Francoise}} 3. ^1 {{cite arxiv|last=Li|first=Xiangang|last2=Wu|first2=Xihong|date=2014-10-15|title=Constructing Long Short-Term Memory based Deep Recurrent Neural Networks for Large Vocabulary Speech Recognition|eprint=1410.4281|class=cs.CL}} 4. ^{{Cite journal|last=Miljanovic|first=Milos|date=Feb-Mar 2012|title=Comparative analysis of Recurrent and Finite Impulse Response Neural Networks in Time Series Prediction|url=http://www.ijcse.com/docs/INDJCSE12-03-01-028.pdf|journal=Indian Journal of Computer and Engineering|volume=3|issue=1|pages=|via=}} 5. ^{{Cite journal|last=Williams|first=Ronald J.|last2=Hinton|first2=Geoffrey E.|last3=Rumelhart|first3=David E.|date=October 1986|title=Learning representations by back-propagating errors|url=https://www.nature.com/articles/323533a0|journal=Nature|volume=323|issue=6088|pages=533–536|doi=10.1038/323533a0|issn=1476-4687}} 6. ^1 {{Cite book |url=ftp://ftp.idsia.ch/pub/juergen/habilitation.pdf |title=Habilitation thesis: System modeling and optimization |last=Schmidhuber |first=Jürgen |year=1993 |authorlink=Jürgen Schmidhuber}} Page 150 ff demonstrates credit assignment across the equivalent of 1,200 layers in an unfolded RNN. 7. ^1 {{Cite journal |last=Hochreiter |first=Sepp |author-link=Sepp Hochreiter |last2=Schmidhuber |first2=Jürgen |author-link2=Jürgen Schmidhuber |date=1997-11-01 |title=Long Short-Term Memory |journal=Neural Computation |volume=9 |issue=8 |pages=1735–1780 |doi=10.1162/neco.1997.9.8.1735}} 8. ^{{Cite book |last=Fernández |first=Santiago |last2=Graves |first2=Alex |last3=Schmidhuber |first3=Jürgen |date=2007 |title=An Application of Recurrent Neural Networks to Discriminative Keyword Spotting |url=http://dl.acm.org/citation.cfm?id=1778066.1778092 |journal=Proceedings of the 17th International Conference on Artificial Neural Networks |series=ICANN'07 |location=Berlin, Heidelberg |publisher=Springer-Verlag |pages=220–229 |isbn=978-3-540-74693-5 }} 9. ^{{Cite journal |last2=Schmidhuber |first2=Jürgen |date=2009 |editor-last=Bengio |editor-first=Yoshua |title=Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks |url=https://papers.nips.cc/paper/3449-offline-handwriting-recognition-with-multidimensional-recurrent-neural-networks |journal=Neural Information Processing Systems (NIPS) Foundation |pages=545–552 |editor-last2=Schuurmans |editor-first2=Dale |editor-last3=Lafferty |editor-first3=John |editor-last4=Williams |editor-first4=Chris editor-K. I. |editor-last5=Culotta |editor-first5=Aron |last1=Graves |first1=Alex}} 10. 
^{{cite arxiv |last=Hannun |first=Awni |last2=Case |first2=Carl |last3=Casper |first3=Jared |last4=Catanzaro |first4=Bryan |last5=Diamos |first5=Greg |last6=Elsen |first6=Erich |last7=Prenger |first7=Ryan |last8=Satheesh |first8=Sanjeev |last9=Sengupta |first9=Shubho |date=2014-12-17 |title=Deep Speech: Scaling up end-to-end speech recognition |eprint=1412.5567 |class=cs.CL}} 11. ^Bo Fan, Lijuan Wang, Frank K. Soong, and Lei Xie (2015). Photo-Real Talking Head with Deep Bidirectional LSTM. In Proceedings of ICASSP 2015. 12. ^{{Cite web |url=https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/43266.pdf |title=Unidirectional Long Short-Term Memory Recurrent Neural Network with Recurrent Output Layer for Low-Latency Speech Synthesis |last=Zen |first=Heiga |last2=Sak |first2=Hasim |date=2015 |website=Google.com |publisher=ICASSP |pages=4470–4474 |archive-url= |archive-date= |dead-url= |access-date=}} 13. ^{{Cite web |url=http://googleresearch.blogspot.ch/2015/09/google-voice-search-faster-and-more.html |title=Google voice search: faster and more accurate |last=Sak |first=Haşim |last2=Senior |first2=Andrew |date=September 2015 |last3=Rao |first3=Kanishka |last4=Beaufays |first4=Françoise |last5=Schalkwyk |first5=Johan}} 14. ^{{Cite journal |last=Sutskever |first=L. |last2=Vinyals |first2=O. |last3=Le |first3=Q. |date=2014 |title=Sequence to Sequence Learning with Neural Networks |url=https://papers.nips.cc/paper/5346-sequence-to-sequence-learning-with-neural-networks.pdf |journal=Electronic Proceedings of the Neural Information Processing Systems Conference |volume=27 |pages=5346 |arxiv=1409.3215 |bibcode=2014arXiv1409.3215S }} 15. ^{{cite arxiv |last=Jozefowicz |first=Rafal |last2=Vinyals |first2=Oriol |last3=Schuster |first3=Mike |last4=Shazeer |first4=Noam |last5=Wu |first5=Yonghui |date=2016-02-07 |title=Exploring the Limits of Language Modeling |eprint=1602.02410 |class=cs.CL}} 16. ^{{cite arxiv |last=Gillick |first=Dan |last2=Brunk |first2=Cliff |last3=Vinyals |first3=Oriol |last4=Subramanya |first4=Amarnag |date=2015-11-30 |title=Multilingual Language Processing From Bytes |eprint=1512.00103 |class=cs.CL}} 17. ^{{cite arxiv |last=Vinyals |first=Oriol |last2=Toshev |first2=Alexander |last3=Bengio |first3=Samy |last4=Erhan |first4=Dumitru |date=2014-11-17 |title=Show and Tell: A Neural Image Caption Generator |eprint=1411.4555 |class=cs.CV }} 18. ^1 Cruse, Holk; Neural Networks as Cybernetic Systems, 2nd and revised edition 19. ^{{cite journal | last=Elman | first=Jeffrey L. | title=Finding Structure in Time | journal=Cognitive Science | year=1990 | volume=14 | issue=2 | pages=179–211 | doi=10.1016/0364-0213(90)90002-E}} 20. ^{{Cite book |last=Jordan |first=Michael I. |date=1997-01-01 |title=Serial Order: A Parallel Distributed Processing Approach |journal=Advances in Psychology |series=Neural-Network Models of Cognition |volume=121 |pages=471–495 |doi=10.1016/s0166-4115(97)80111-2 |isbn=9780444819314}} 21. ^{{cite journal |date=1988 |title=Bidirectional associative memories |journal=IEEE Transactions on Systems, Man, and Cybernetics |volume=18 |issue=1 |pages=49–60 |doi=10.1109/21.87054 |last1=Kosko |first1=B.}} 22. ^{{cite journal |last2=Chandrasekar |first2=A. |last3=Lakshmanan |first3=S. |last4=Park |first4=Ju H. 
|date=2 January 2015 |title=Exponential stability for markovian jumping stochastic BAM neural networks with mode-dependent probabilistic time-varying delays and impulse control |journal=Complexity |volume=20 |issue=3 |pages=39–65 |doi=10.1002/cplx.21503 |last1=Rakkiyappan |first1=R.|bibcode=2015Cmplx..20c..39R }} 23. ^{{cite book | url = {{google books |plainurl=y |id=txsjjYzFJS4C|page=336}} | page = 336 | title = Neural networks: a systematic introduction | author = Rául Rojas | publisher = Springer | isbn = 978-3-540-60505-8 | year = 1996 }} 24. ^{{Cite journal |last=Jaeger |first=Herbert |last2=Haas |first2=Harald |date=2004-04-02 |title=Harnessing Nonlinearity: Predicting Chaotic Systems and Saving Energy in Wireless Communication |journal=Science |volume=304 |issue=5667 |pages=78–80 |doi=10.1126/science.1091277 |pmid=15064413 |bibcode=2004Sci...304...78J|citeseerx=10.1.1.719.2301 }} 25. ^W. Maass, T. Natschläger, and H. Markram. A fresh look at real-time computation in generic recurrent neural circuits. Technical report, Institute for Theoretical Computer Science, TU Graz, 2002. 26. ^1 {{cite journal |title=Independently Recurrent Neural Network (IndRNN): Building A Longer and Deeper RNN |last1=Li |first1=Shuai |last2=Li |first2=Wanqing |last3=Cook |first3=Chris |last4=Zhu |first4=Ce |last5=Yanbo |first5=Gao |journal=IEEE Conference on Computer Vision and Pattern Recognition |year=2018|arxiv=1803.04831 }} 27. ^{{cite book |doi=10.1109/ICNN.1996.548916 |title=Learning task-dependent distributed representations by backpropagation through structure |url=https://pdfs.semanticscholar.org/794e/6ed81d21f1bf32a0fd3be05c44c1fa362688.pdf |last1=Goller |first1=C. |last2=Küchler |first2=A. |journal=IEEE International Conference on Neural Networks, 1996 |volume=1 |pages=347 |year=1996 |isbn=978-0-7803-3210-2|citeseerx=10.1.1.52.4759 }} 28. ^Seppo Linnainmaa (1970). The representation of the cumulative rounding error of an algorithm as a Taylor expansion of the local rounding errors. Master's Thesis (in Finnish), Univ. Helsinki, 6-7. 29. ^{{cite book |first1=Andreas |last1=Griewank |first2=Andrea |last2= Walther |title=Evaluating Derivatives: Principles and Techniques of Algorithmic Differentiation |edition=Second |url={{google books |plainurl=y |id=xoiiLaRxcbEC}} |year=2008 |publisher=SIAM |isbn=978-0-89871-776-1}} 30. ^{{citation|last1=Socher|first1=Richard|last2=Lin|first2=Cliff|last3=Ng|first3=Andrew Y.|last4=Manning|first4=Christopher D.|date=|contribution=Parsing Natural Scenes and Natural Language with Recursive Neural Networks|contribution-url=http://ai.stanford.edu/~ang/papers/icml11-ParsingWithRecursiveNeuralNetworks.pdf|title=28th International Conference on Machine Learning (ICML 2011)|pages=|via=}} 31. ^{{cite journal |last1=Socher |first1=Richard |last2=Perelygin |first2=Alex |last3=Y. Wu |first3=Jean |last4=Chuang |first4=Jason |last5=D. Manning |first5=Christopher |last6=Y. Ng |first6=Andrew |last7=Potts |first7=Christopher |title=Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank |journal=Emnlp 2013 |url=http://nlp.stanford.edu/~socherr/EMNLP2013_RNTN.pdf}} 32. ^1 2 3 {{cite journal | last1 = Schmidhuber | first1 = Jürgen | authorlink = Jürgen Schmidhuber | year = 1992| title = Learning complex, extended sequences using the principle of history compression | url = ftp://ftp.idsia.ch/pub/juergen/chunker.pdf | journal = Neural Computation | volume = 4 | issue = 2| pages = 234–242 | doi=10.1162/neco.1992.4.2.234}} 33. 
^{{cite journal | last1 = Schmidhuber | first1 = Jürgen | authorlink = Jürgen Schmidhuber | year = 2015 | title = Deep Learning | url = http://www.scholarpedia.org/article/Deep_Learning#Fundamental_Deep_Learning_Problem_and_Unsupervised_Pre-Training_of_RNNs_and_FNNs | journal = Scholarpedia | volume = 10 | issue = 11| page = 32832 | doi=10.4249/scholarpedia.32832| bibcode = 2015SchpJ..1032832S }} 34. ^1 2 Sepp Hochreiter (1991), Untersuchungen zu dynamischen neuronalen Netzen, Diploma thesis. Institut f. Informatik, Technische Univ. Munich. Advisor: J. Schmidhuber. 35. ^C.L. Giles, C.B. Miller, D. Chen, H.H. Chen, G.Z. Sun, Y.C. Lee, [https://clgiles.ist.psu.edu/pubs/NC1992-recurrent-NN.pdf "Learning and Extracting Finite State Automata with Second-Order Recurrent Neural Networks"], Neural Computation, 4(3), p. 393, 1992. 36. ^C.W. Omlin, C.L. Giles, "Constructing Deterministic Finite-State Automata in Recurrent Neural Networks" Journal of the ACM, 45(6), 937-972, 1996. 37. ^{{Cite journal|url=https://www.researchgate.net/publication/220320057|title=Learning Precise Timing with LSTM Recurrent Networks (PDF Download Available)|last=Gers|first=Felix|last2=Schraudolph|first2=Nicol N.|journal=Crossref Listing of Deleted Dois|volume=1|archive-url=|archive-date=|dead-url=|access-date=2017-06-13|last3=Schmidhuber|first3=Jürgen|pp=115–143|doi=10.1162/153244303768966139|year=2000}} 38. ^1 2 {{Cite journal |last=Schmidhuber |first=Jürgen |authorlink=Jürgen Schmidhuber |date=January 2015 |title=Deep Learning in Neural Networks: An Overview |journal=Neural Networks |volume=61 |pages=85–117 |doi=10.1016/j.neunet.2014.09.003 |pmid=25462637 |arxiv=1404.7828 }} 39. ^{{Cite book |last=Bayer |first=Justin |last2=Wierstra |first2=Daan |last3=Togelius |first3=Julian |last4=Schmidhuber |first4=Jürgen |date=2009-09-14 |title=Evolving Memory Cell Structures for Sequence Learning |journal=Artificial Neural Networks – ICANN 2009 |volume=5769 |publisher=Springer, Berlin, Heidelberg |pages=755–764 |doi=10.1007/978-3-642-04277-5_76 |series=Lecture Notes in Computer Science |isbn=978-3-642-04276-8|url=http://mediatum.ub.tum.de/doc/1289041/document.pdf }} 40. ^{{Cite journal |last=Fernández |first=Santiago |last2=Graves |first2=Alex |last3=Schmidhuber |first3=Jürgen |date=2007 |title=Sequence labelling in structured domains with hierarchical recurrent neural networks |citeseerx=10.1.1.79.1887 |journal=Proc. 20th Int. Joint Conf. On Artificial In℡ligence, Ijcai 2007 |pages=774–779}} 41. ^{{Cite journal |last=Graves |first=Alex |last2=Fernández |first2=Santiago |last3=Gomez |first3=Faustino |date=2006 |title=Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks |citeseerx=10.1.1.75.6306 |journal=In Proceedings of the International Conference on Machine Learning, ICML 2006 |pages=369–376}} 42. ^{{Cite journal|last=Gers|first=F. A.|last2=Schmidhuber|first2=E.|date=November 2001|title=LSTM recurrent networks learn simple context-free and context-sensitive languages|url=http://ieeexplore.ieee.org/document/963769/|journal=IEEE Transactions on Neural Networks|volume=12|issue=6|pages=1333–1340|doi=10.1109/72.963769|pmid=18249962|issn=1045-9227}} 43. ^{{cite arxiv|last=Heck|first=Joel|last2=Salem|first2=Fathi M.|date=2017-01-12|title=Simplified Minimal Gated Unit Variations for Recurrent Neural Networks|eprint=1701.03452 |class=cs.NE}} 44. 
^{{cite arxiv|last=Dey|first=Rahul|last2=Salem|first2=Fathi M.|date=2017-01-20|title=Gate-Variants of Gated Recurrent Unit (GRU) Neural Networks|eprint=1701.05923 |class=cs.NE}} 45. ^{{cite arXiv |class=cs.NE |first2=Caglar |last2=Gulcehre |title=Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling |eprint=1412.3555 |last1=Chung |first1=Junyoung |last3=Cho |first3=KyungHyun |last4=Bengio |first4=Yoshua |year=2014}} 46. ^{{cite web |url=http://www.wildml.com/2015/10/recurrent-neural-network-tutorial-part-4-implementing-a-grulstm-rnn-with-python-and-theano/ |title=Recurrent Neural Network Tutorial, Part 4 – Implementing a GRU/LSTM RNN with Python and Theano – WildML |date= 2015-10-27|newspaper=Wildml.com |accessdate=May 18, 2016}} 47. ^{{Cite journal |last=Graves |first=Alex |last2=Schmidhuber |first2=Jürgen |date=2005-07-01 |title=Framewise phoneme classification with bidirectional LSTM and other neural network architectures |journal=Neural Networks |series=IJCNN 2005 |volume=18 |issue=5 |pages=602–610 |doi=10.1016/j.neunet.2005.06.042|pmid=16112549 |citeseerx=10.1.1.331.5800 }} 48. ^{{Cite journal |last=Thireou |first=T. |last2=Reczko |first2=M. |date=July 2007 |title=Bidirectional Long Short-Term Memory Networks for Predicting the Subcellular Localization of Eukaryotic Proteins |journal=IEEE/ACM Transactions on Computational Biology and Bioinformatics |volume=4 |issue=3 |pages=441–446 |doi=10.1109/tcbb.2007.1015|pmid=17666763 }} 49. ^{{citation|last=Harvey|first=Inman|title=3rd international conference on Simulation of adaptive behavior: from animals to animats 3|year=1994|author2=Husbands, P.|author3=Cliff, D.|pages=392–401|contribution=Seeing the light: Artificial evolution, real vision|contribution-url=https://www.researchgate.net/publication/229091538_Seeing_the_Light_Artificial_Evolution_Real_Vision}} 50. ^{{cite book |last=Quinn |first=Matthew |title=Evolving communication without dedicated communication channels |journal=Advances in Artificial Life |year=2001 |pages=357–366 |doi=10.1007/3-540-44811-X_38 |series=Lecture Notes in Computer Science |isbn=978-3-540-42567-0 |volume=2159|citeseerx=10.1.1.28.5890 }} 51. ^{{cite journal |last=Beer |title=The dynamics of adaptive behavior: A research program |first=R.D. |journal=Robotics and Autonomous Systems |year=1997 |pages=257–289 |doi=10.1016/S0921-8890(96)00063-2 |volume=20 |issue=2–4}} 52. ^{{Cite journal |last=Paine |first=Rainer W. |last2=Tani |first2=Jun |date=2005-09-01 |title=How Hierarchical Control Self-organizes in Artificial Adaptive Systems |journal=Adaptive Behavior |volume=13 |issue=3 |pages=211–225 |doi=10.1177/105971230501300303}} 53. ^{{cite book |citeseerx=10.1.1.45.3527 |title=Recurrent Multilayer Perceptrons for Identification and Control: The Road to Applications|year=1995}} 54. ^{{Cite journal |last=Yamashita |first=Yuichi |last2=Tani |first2=Jun |date=2008-11-07 |title=Emergence of Functional Hierarchy in a Multiple Timescale Neural Network Model: A Humanoid Robot Experiment |journal=PLOS Computational Biology |volume=4 |issue=11 |pages=e1000220 |doi=10.1371/journal.pcbi.1000220 |pmc=2570613 |pmid=18989398 |bibcode=2008PLSCB...4E0220Y}} 55. 
^{{Cite journal |last=Shibata Alnajjar |first=Fady |last2=Yamashita |first2=Yuichi |last3=Tani |first3=Jun |date=2013 |title=The hierarchical and functional connectivity of higher-order cognitive mechanisms: neurorobotic model to investigate the stability and flexibility of working memory |journal=Frontiers in Neurorobotics |volume=7 |pages=2 |doi=10.3389/fnbot.2013.00002 |pmc=3575058 |pmid=23423881}} 56. ^{{cite arxiv |eprint=1410.5401 |title= Neural Turing Machines |last1= Graves |first1= Alex |last2= Wayne |first2= Greg |last3= Danihelka |first3= Ivo |year= 2014|class= cs.NE }} 57. ^{{Cite book |title=Adaptive Processing of Sequences and Data Structures |chapter=The Neural Network Pushdown Automaton: Architecture, Dynamics and Training |last=Sun |first=Guo-Zheng |last2=Giles |first2=C. Lee |last3=Chen |first3=Hsing-Hen |date=1998 |publisher=Springer Berlin Heidelberg |isbn=9783540643418 |editor-last=Giles |editor-first=C. Lee |series=Lecture Notes in Computer Science |pages=296–345 |doi=10.1007/bfb0054003 |editor-last2=Gori |editor-first2=Marco|citeseerx=10.1.1.56.8723 }} 58. ^{{Cite journal|last=Werbos|first=Paul J.|title=Generalization of backpropagation with application to a recurrent gas market model|url=http://linkinghub.elsevier.com/retrieve/pii/089360808890007X|journal=Neural Networks|volume=1|issue=4|pages=339–356|doi=10.1016/0893-6080(88)90007-x|year=1988}} 59. ^{{cite book|url={{google books |plainurl=y |id=Ff9iHAAACAAJ}}|title=Learning Internal Representations by Error Propagation|last=Rumelhart|first=David E.|publisher=Institute for Cognitive Science, University of California, San Diego|year=1985}} 60. ^{{cite book|url={{google books |plainurl=y |id=6JYYMwEACAAJ}}|title=The Utility Driven Dynamic Error Propagation Network. Technical Report CUED/F-INFENG/TR.1|last=Robinson|first=A. J.|publisher=University of Cambridge Department of Engineering|year=1987|isbn=|location=|pages=}} 61. ^{{cite book|url={{google books |plainurl=y |id=B71nu3LDpREC}}|title=Backpropagation: Theory, Architectures, and Applications|editor-last1=Chauvin|editor-first1=Yves|editor-last2=Rumelhart|editor-first2=David E.|first1=R. J. |last1=Williams |first2=D. |last2=Zipser. Gradient-based learning algorithms for recurrent networks and their computational complexity |date=1 February 2013|publisher=Psychology Press|isbn=978-1-134-77581-1}} 62. ^{{Cite journal|last=SCHMIDHUBER|first=JURGEN|date=1989-01-01|title=A Local Learning Algorithm for Dynamic Feedforward and Recurrent Networks|journal=Connection Science|volume=1|issue=4|pages=403–412|doi=10.1080/09540098908915650}} 63. ^{{cite book|first1=José C. |last1=Príncipe|first2=Neil R.|last2= Euliano|first3=W. Curt |last3=Lefebvre|title=Neural and adaptive systems: fundamentals through simulations|url={{google books |plainurl=y |id=jgMZAQAAIAAJ}}|year=2000|publisher=Wiley|isbn=978-0-471-35167-2}} 64. ^{{Cite arxiv|last=Yann|first=Ollivier|last2=Corentin|first2=Tallec|last3=Guillaume|first3=Charpiat|date=2015-07-28|title=Training recurrent networks online without backtracking|eprint=1507.07680|class=cs.NE}} 65. ^{{Cite journal|last=Schmidhuber|first=Jürgen|date=1992-03-01|title=A Fixed Size Storage O(n3) Time Complexity Learning Algorithm for Fully Recurrent Continually Running Networks|journal=Neural Computation|volume=4|issue=2|pages=243–248|doi=10.1162/neco.1992.4.2.243}} 66. ^{{cite journal|first=R. J. |last=Williams |title=Complexity of exact gradient computation algorithms for recurrent neural networks. 
Technical Report Technical Report NU-CCS-89-27 |location=Boston |publisher=Northeastern University, College of Computer Science |year=1989|url=http://citeseerx.ist.psu.edu/showciting?cid=128036}} 67. ^{{Cite journal|last=Pearlmutter|first=Barak A.|date=1989-06-01|title=Learning State Space Trajectories in Recurrent Neural Networks|journal=Neural Computation|volume=1|issue=2|pages=263–269|doi=10.1162/neco.1989.1.2.263|url=http://repository.cmu.edu/cgi/viewcontent.cgi?article=2865&context=compsci}} 68. ^{{cite book|chapter-url={{google books |plainurl=y |id=NWOcMVA64aAC}}|title=A Field Guide to Dynamical Recurrent Networks|last=Hochreiter|first=S.|display-authors=etal|date=15 January 2001|publisher=John Wiley & Sons|isbn=978-0-7803-5369-5|location=|pages=|chapter=Gradient flow in recurrent nets: the difficulty of learning long-term dependencies|editor-last2=Kremer|editor-first2=Stefan C.|editor-first1=John F.|editor-last1=Kolen}} 69. ^{{Cite journal|last=Campolucci|last2=Uncini|first2=A.|last3=Piazza|first3=F.|last4=Rao|first4=B. D.|year=1999|title=On-Line Learning Algorithms for Locally Recurrent Neural Networks|journal=IEEE Transactions on Neural Networks|volume=10|issue=2|pages=253–271|doi=10.1109/72.750549|pmid=18252525|citeseerx=10.1.1.33.7550}} 70. ^{{Cite journal|last1=Wan|first1=E. A.|last2=Beaufays|first2=F.|year=1996|title=Diagrammatic derivation of gradient algorithms for neural networks|url=|journal=Neural Computation|volume=8|pages=182–201|doi=10.1162/neco.1996.8.1.182}} 71. ^1 {{Cite journal|last1=Campolucci|first1=P.|last2=Uncini|first2=A.|last3=Piazza|first3=F.|year=2000|title=A Signal-Flow-Graph Approach to On-line Gradient Calculation|url=|journal=Neural Computation|volume=12|issue=8|pages=1901–1927|doi=10.1162/089976600300015196|citeseerx=10.1.1.212.5406}} 72. ^{{citation|title=IJCAI 99|year=1999|last1=Gomez|last2=Miikkulainen|first1=F. J.|first2=R.|contribution=Solving non-Markovian control tasks with neuroevolution|contribution-url=http://www.cs.utexas.edu/users/nn/downloads/papers/gomez.ijcai99.pdf|publisher=Morgan Kaufmann|accessdate=5 August 2017}} 73. ^{{cite web|url=http://arimaa.com/arimaa/about/Thesis/|title=Applying Genetic Algorithms to Recurrent Neural Networks for Learning Network Parameters and Architecture}} 74. ^{{Cite journal|last=Gomez|first=Faustino|last2=Schmidhuber|first2=Jürgen|last3=Miikkulainen|first3=Risto|date=June 2008|title=Accelerated Neural Evolution Through Cooperatively Coevolved Synapses|url=http://dl.acm.org/citation.cfm?id=1390681.1390712|journal=J. Mach. Learn. Res.|volume=9|pages=937–965}} 75. ^{{cite book|url={{google books |plainurl=y |id=830-HAAACAAJ|page=208}}|title=Computational Capabilities of Recurrent NARX Neural Networks|last=Siegelmann|first=Hava T.|last2=Horne|first2=Bill G.|last3=Giles|first3=C. Lee|publisher=University of Maryland|year=1995|pp=208–215}} 76. ^{{cite news |url=https://www.wired.com/2016/05/google-tpu-custom-chips/ |author=Cade Metz |newspaper=Wired |date=May 18, 2016 |title=Google Built Its Very Own Chips to Power Its AI Bots}} 77. 
^{{Cite book|last=Mayer|first=H.|last2=Gomez|first2=F.|last3=Wierstra|first3=D.|last4=Nagy|first4=I.|last5=Knoll|first5=A.|last6=Schmidhuber|first6=J.|date=October 2006|title=A System for Robotic Heart Surgery that Learns to Tie Knots Using Recurrent Neural Networks|url=http://ieeexplore.ieee.org/document/4059310/|journal=2006 IEEE/RSJ International Conference on Intelligent Robots and Systems|pages=543–548|doi=10.1109/IROS.2006.282190|isbn=978-1-4244-0258-8|citeseerx=10.1.1.218.3399}} 78. ^{{Cite journal|last=Wierstra|first=Daan|last2=Schmidhuber|first2=J.|last3=Gomez|first3=F. J.|date=2005|title=Evolino: Hybrid Neuroevolution/Optimal Linear Search for Sequence Learning|url=https://www.academia.edu/5830256|journal=Proceedings of the 19th International Joint Conference on Artificial Intelligence (IJCAI), Edinburgh|volume=|pages=853–858|via=}} 79. ^{{cite journal | last1 = Graves | first1 = A. | last2 = Schmidhuber | first2 = J. | year = 2005 | title = Framewise phoneme classification with bidirectional LSTM and other neural network architectures | url = | journal = Neural Networks | volume = 18 | issue = 5–6| pages = 602–610 | doi=10.1016/j.neunet.2005.06.042| pmid = 16112549 | citeseerx = 10.1.1.331.5800 }} 80. ^{{Cite book|last=Fernández|first=Santiago|last2=Graves|first2=Alex|last3=Schmidhuber|first3=Jürgen|date=2007|title=An Application of Recurrent Neural Networks to Discriminative Keyword Spotting|url=http://dl.acm.org/citation.cfm?id=1778066.1778092|journal=Proceedings of the 17th International Conference on Artificial Neural Networks|series=ICANN'07|location=Berlin, Heidelberg|publisher=Springer-Verlag|pages=220–229|isbn=978-3540746935}} 81. ^{{cite journal|last2=Mohamed|first2=Abdel-rahman|last3=Hinton|first3=Geoffrey|date=2013|title=Speech Recognition with Deep Recurrent Neural Networks|journal=Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on|pages=6645–6649|last1=Graves|first1=Alex}} 82. ^{{Cite journal|last=Malhotra|first=Pankaj|last2=Vig|first2=Lovekesh|last3=Shroff|first3=Gautam|last4=Agarwal|first4=Puneet|date=April 2015|title=Long Short Term Memory Networks for Anomaly Detection in Time Series|url=https://www.elen.ucl.ac.be/Proceedings/esann/esannpdf/es2015-56.pdf|journal=European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning — ESANN 2015}} 83. ^{{cite journal | last1 = Gers | first1 = F. | last2 = Schraudolph | first2 = N. | last3 = Schmidhuber | first3 = J. | year = 2002 | title = Learning precise timing with LSTM recurrent networks | url = http://www.jmlr.org/papers/volume3/gers02a/gers02a.pdf | journal = Journal of Machine Learning Research | volume = 3 | issue = | pages = 115–143 }} 84. ^{{Cite book|last=Eck|first=Douglas|last2=Schmidhuber|first2=Jürgen|date=2002-08-28|title=Learning the Long-Term Structure of the Blues|journal=Artificial Neural Networks — ICANN 2002|volume=2415|publisher=Springer, Berlin, Heidelberg|pages=284–289|doi=10.1007/3-540-46084-5_47|isbn=978-3540460848|series=Lecture Notes in Computer Science|citeseerx=10.1.1.116.3620}} 85. ^{{cite journal | last1 = Schmidhuber | first1 = J. | last2 = Gers | first2 = F. | last3 = Eck | first3 = D. | last4 = Schmidhuber | first4 = J. | last5 = Gers | first5 = F. 
| year = 2002 | title = Learning nonregular languages: A comparison of simple recurrent networks and LSTM | url = | journal = Neural Computation | volume = 14 | issue = 9| pages = 2039–2041 | doi=10.1162/089976602320263980| pmid = 12184841 | citeseerx = 10.1.1.11.7369 }} 86. ^{{cite journal | last1 = Gers | first1 = F. A. | last2 = Schmidhuber | first2 = J. | year = 2001 | title = LSTM Recurrent Networks Learn Simple Context Free and Context Sensitive Languages | url = ftp://ftp.idsia.ch/pub/juergen/L-IEEE.pdf | journal = IEEE Transactions on Neural Networks | volume = 12 | issue = 6| pages = 1333–1340 | doi=10.1109/72.963769| pmid = 18249962 }} 87. ^{{cite journal | last1 = Perez-Ortiz | first1 = J. A. | last2 = Gers | first2 = F. A. | last3 = Eck | first3 = D. | last4 = Schmidhuber | first4 = J. | year = 2003 | title = Kalman filters improve LSTM network performance in problems unsolvable by traditional recurrent nets | url = | journal = Neural Networks | volume = 16 | issue = 2| pages = 241–250 | doi=10.1016/s0893-6080(02)00219-8| pmid = 12628609 | citeseerx = 10.1.1.381.1992 }} 88. ^A. Graves, J. Schmidhuber. Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks. Advances in Neural Information Processing Systems 22, NIPS'22, pp 545–552, Vancouver, MIT Press, 2009. 89. ^{{Cite book|last=Graves|first=Alex|last2=Fernández|first2=Santiago|last3=Liwicki|first3=Marcus|last4=Bunke|first4=Horst|last5=Schmidhuber|first5=Jürgen|date=2007|title=Unconstrained Online Handwriting Recognition with Recurrent Neural Networks|url=http://dl.acm.org/citation.cfm?id=2981562.2981635|journal=Proceedings of the 20th International Conference on Neural Information Processing Systems|series=NIPS'07|location=USA|publisher=Curran Associates Inc.|pages=577–584|isbn=9781605603520}} 90. ^M. Baccouche, F. Mamalet, C Wolf, C. Garcia, A. Baskurt. Sequential Deep Learning for Human Action Recognition. 2nd International Workshop on Human Behavior Understanding (HBU), A.A. Salah, B. Lepri ed. Amsterdam, Netherlands. pp. 29–39. Lecture Notes in Computer Science 7065. Springer. 2011 91. ^{{Cite journal | last1 = Hochreiter | first1 = S. | last2 = Heusel | first2 = M. | last3 = Obermayer | first3 = K. | doi = 10.1093/bioinformatics/btm247 | title = Fast model-based protein homology detection without alignment | journal = Bioinformatics | volume = 23 | issue = 14 | pages = 1728–1736 | year = 2007 | pmid = 17488755| pmc = }} 92. ^{{cite journal | last1 = Thireou | first1 = T. | last2 = Reczko | first2 = M. | year = 2007 | title = Bidirectional Long Short-Term Memory Networks for predicting the subcellular localization of eukaryotic proteins | url = | journal = IEEE/ACM Transactions on Computational Biology and Bioinformatics (TCBB) | volume = 4 | issue = 3| pages = 441–446 | doi=10.1109/tcbb.2007.1015| pmid = 17666763 }} 93. ^{{cite book | last1 = Tax| first1 = N. | last2 = Verenich | first2 = I. | last3 = La Rosa | first3 = M. | last4 = Dumas | first4 = M. | year = 2017 | title = Predictive Business Process Monitoring with LSTM neural networks | journal = Proceedings of the International Conference on Advanced Information Systems Engineering (CAiSE) | volume = 10253 | pages = 477–492| doi=10.1007/978-3-319-59536-8_30| series = Lecture Notes in Computer Science | isbn = 978-3-319-59535-1 | arxiv = 1612.02130 }} 94. ^{{cite journal | last1 = Choi| first1 = E. | last2 = Bahadori| first2 = M.T. | last3 = Schuetz | first3 = E. | last4 = Stewart| first4 = W. | last5 = Sun| first5 = J. 
| year = 2016 | title = Doctor AI: Predicting Clinical Events via Recurrent Neural Networks | url = http://proceedings.mlr.press/v56/Choi16.html | journal = Proceedings of the 1st Machine Learning for Healthcare Conference | pages = 301–318| doi=}}
Categories: Artificial intelligence | Artificial neural networks