Entry: Long short-term memory
Definition

  1. History

  2. Idea

  3. Architecture

  4. Variants

      • LSTM with a forget gate
      • Variables
      • Activation functions
      • Peephole LSTM
      • Peephole convolutional LSTM

  5. Training

      • CTC score function
      • Alternatives
      • Success

  6. Applications

  7. See also

  8. References

  9. External links

{{Machine learning bar}}

Long short-term memory (LSTM) is an artificial recurrent neural network (RNN) architecture[1] used in the field of deep learning. Unlike standard feedforward neural networks, LSTM has feedback connections that make it a "general purpose computer" (that is, it can compute anything that a Turing machine can).[1] It can not only process single data points (such as images), but also entire sequences of data (such as speech or video). For example, LSTM is applicable to tasks such as unsegmented, connected handwriting recognition[2] or speech recognition.[3][4] Bloomberg Business Week wrote: "These powers make LSTM arguably the most commercial AI achievement, used for everything from predicting diseases to composing music."[5]

A common LSTM unit is composed of a cell, an input gate, an output gate and a forget gate. The cell remembers values over arbitrary time intervals and the three gates regulate the flow of information into and out of the cell.

LSTM networks are well-suited to classifying, processing and making predictions based on time series data, since there can be lags of unknown duration between important events in a time series. LSTMs were developed to deal with the exploding and vanishing gradient problems that can be encountered when training traditional RNNs. Relative insensitivity to gap length is an advantage of LSTM over RNNs, hidden Markov models and other sequence learning methods in numerous applications {{Citation needed|date=October 2017}}.

History

LSTM was proposed in 1997 by Sepp Hochreiter and Jürgen Schmidhuber.[6] By introducing Constant Error Carousel (CEC) units, LSTM deals with the exploding and vanishing gradient problems. The initial version of the LSTM block included cells, input gates and output gates.[8]

In 1999, Felix Gers, his advisor Jürgen Schmidhuber, and Fred Cummins introduced the forget gate (also called the "keep gate") into the LSTM architecture,[7] enabling the LSTM to reset its own state.[8]

In 2000, Gers, Schmidhuber and Cummins added peephole connections (connections from the cell to the gates) into the architecture.[11] Additionally, the output activation function was omitted.[8]

In 2014, Kyunghyun Cho et al. put forward a simplified variant called the gated recurrent unit (GRU).[8]

Among other successes, LSTM achieved record results in natural language text compression,[9] unsegmented connected handwriting recognition[10] and won the ICDAR handwriting competition (2009). LSTM networks were a major component of a network that achieved a record 17.7% phoneme error rate on the classic TIMIT natural speech dataset (2013).[11]

As of 2016, major technology companies including Google, Apple, and Microsoft were using LSTM as fundamental components in new products.[12] For example, Google used LSTM for speech recognition on smartphones,[13][14] for the smart assistant Allo[15] and for Google Translate.[16][17] Apple uses LSTM for the "QuickType" function on the iPhone[18][19] and for Siri.[20] Amazon uses LSTM for Amazon Alexa.[21]

In 2017, Facebook performed some 4.5 billion automatic translations every day using long short-term memory networks.[22]

In 2017, researchers from Michigan State University, IBM Research, and Cornell University published a study in the Knowledge Discovery and Data Mining (KDD) conference.[23][24][25] Their study describes a novel neural network that performs better on certain data sets than the widely used long short-term memory neural network.

Also in 2017, Microsoft reported reaching 95.1% recognition accuracy on the Switchboard corpus, incorporating a vocabulary of 165,000 words. The approach used "dialog session-based long-short-term memory".[26]

Idea

In theory, classic (or "vanilla") RNNs can keep track of arbitrarily long-term dependencies in the input sequences. The problem with vanilla RNNs is computational (or practical) in nature: when training a vanilla RNN using back-propagation, the gradients which are back-propagated can "vanish" (that is, they can tend to zero) or "explode" (that is, they can tend to infinity), because of the computations involved in the process, which use finite-precision numbers. RNNs using LSTM units partially solve the vanishing gradient problem, because LSTM units allow gradients to also flow unchanged. However, LSTM networks can still suffer from the exploding gradient problem.[27]

Architecture

There are several architectures of LSTM units. A common architecture is composed of a cell (the memory part of the LSTM unit) and three "regulators", usually called gates, of the flow of information inside the LSTM unit: an input gate, an output gate and a forget gate. Some variations of the LSTM unit do not have one or more of these gates or may even have other gates. For example, gated recurrent units (GRUs) do not have an output gate.

Intuitively, the cell is responsible for keeping track of the dependencies between the elements in the input sequence. The input gate controls the extent to which a new value flows into the cell, the forget gate controls the extent to which a value remains in the cell and the output gate controls the extent to which the value in the cell is used to compute the output activation of the LSTM unit. The activation function of the LSTM gates is often the logistic function.

There are connections into and out of the LSTM gates, a few of which are recurrent. The weights of these connections, which need to be learned during training, determine how the gates operate.

Variants

In the equations below, the lowercase variables represent vectors. Matrices $W_q$ and $U_q$ contain, respectively, the weights of the input and recurrent connections, where the subscript $q$ can either be the input gate $i$, the output gate $o$, the forget gate $f$ or the memory cell $c$, depending on the activation being calculated. In this section, we are thus using a "vector notation". So, for example, $c_t \in \mathbb{R}^h$ is not just one cell of one LSTM unit, but contains the cells of $h$ LSTM units.

LSTM with a forget gate

The compact forms of the equations for the forward pass of an LSTM unit with a forget gate are:[6][28]

$$
\begin{aligned}
f_t &= \sigma_g(W_f x_t + U_f h_{t-1} + b_f) \\
i_t &= \sigma_g(W_i x_t + U_i h_{t-1} + b_i) \\
o_t &= \sigma_g(W_o x_t + U_o h_{t-1} + b_o) \\
c_t &= f_t \circ c_{t-1} + i_t \circ \sigma_c(W_c x_t + U_c h_{t-1} + b_c) \\
h_t &= o_t \circ \sigma_h(c_t)
\end{aligned}
$$

where the initial values are $c_0 = 0$ and $h_0 = 0$ and the operator $\circ$ denotes the Hadamard product (element-wise product). The subscript $t$ indexes the time step.

Variables

  • $x_t \in \mathbb{R}^d$: input vector to the LSTM unit
  • $f_t \in (0,1)^h$: forget gate's activation vector
  • $i_t \in (0,1)^h$: input gate's activation vector
  • $o_t \in (0,1)^h$: output gate's activation vector
  • $h_t \in (-1,1)^h$: hidden state vector, also known as the output vector of the LSTM unit
  • $c_t \in \mathbb{R}^h$: cell state vector
  • $W \in \mathbb{R}^{h \times d}$, $U \in \mathbb{R}^{h \times h}$ and $b \in \mathbb{R}^h$: weight matrices and bias vector parameters which need to be learned during training

where the superscripts $d$ and $h$ refer to the number of input features and number of hidden units, respectively.

Activation functions

  • $\sigma_g$: sigmoid function.
  • $\sigma_c$: hyperbolic tangent function.
  • $\sigma_h$: hyperbolic tangent function or, as the peephole LSTM paper[35][36] suggests, $\sigma_h(x) = x$.
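
The following is a minimal NumPy sketch of the forward pass defined by the equations above; the function name `lstm_step` and the dictionary-based parameter layout are illustrative choices, not part of the original formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One forward step of an LSTM unit with a forget gate.

    W, U and b are dicts keyed by 'f', 'i', 'o' and 'c', holding the input
    weight matrices (h x d), recurrent matrices (h x h) and bias vectors (h,).
    """
    f_t = sigmoid(W['f'] @ x_t + U['f'] @ h_prev + b['f'])  # forget gate
    i_t = sigmoid(W['i'] @ x_t + U['i'] @ h_prev + b['i'])  # input gate
    o_t = sigmoid(W['o'] @ x_t + U['o'] @ h_prev + b['o'])  # output gate
    # cell state: keep part of the old state and add the gated new candidate
    c_t = f_t * c_prev + i_t * np.tanh(W['c'] @ x_t + U['c'] @ h_prev + b['c'])
    h_t = o_t * np.tanh(c_t)  # hidden state / output of the unit
    return h_t, c_t

# Tiny usage example with random parameters (d = 3 input features, h = 4 hidden units).
d, h = 3, 4
rng = np.random.default_rng(0)
W = {k: 0.1 * rng.standard_normal((h, d)) for k in "fioc"}
U = {k: 0.1 * rng.standard_normal((h, h)) for k in "fioc"}
b = {k: np.zeros(h) for k in "fioc"}

h_t, c_t = np.zeros(h), np.zeros(h)        # initial values h_0 = 0 and c_0 = 0
for x_t in rng.standard_normal((5, d)):    # a sequence of five input vectors
    h_t, c_t = lstm_step(x_t, h_t, c_t, W, U, b)
print(h_t)
```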

Peephole LSTM

A peephole LSTM is an LSTM unit with peephole connections.[35][36] Peephole connections allow the gates to access the constant error carousel (CEC), whose activation is the cell state.[30] $h_{t-1}$ is not used; $c_{t-1}$ is used instead in most places:

$$
\begin{aligned}
f_t &= \sigma_g(W_f x_t + U_f c_{t-1} + b_f) \\
i_t &= \sigma_g(W_i x_t + U_i c_{t-1} + b_i) \\
o_t &= \sigma_g(W_o x_t + U_o c_{t-1} + b_o) \\
c_t &= f_t \circ c_{t-1} + i_t \circ \sigma_c(W_c x_t + b_c) \\
h_t &= o_t \circ \sigma_h(c_t)
\end{aligned}
$$

Peephole convolutional LSTM

Peephole convolutional LSTM.[31] The $*$ denotes the convolution operator (with $V_f$, $V_i$ and $V_o$ the peephole weight matrices):

$$
\begin{aligned}
f_t &= \sigma_g(W_f * x_t + U_f * h_{t-1} + V_f \circ c_{t-1} + b_f) \\
i_t &= \sigma_g(W_i * x_t + U_i * h_{t-1} + V_i \circ c_{t-1} + b_i) \\
c_t &= f_t \circ c_{t-1} + i_t \circ \sigma_c(W_c * x_t + U_c * h_{t-1} + b_c) \\
o_t &= \sigma_g(W_o * x_t + U_o * h_{t-1} + V_o \circ c_t + b_o) \\
h_t &= o_t \circ \sigma_h(c_t)
\end{aligned}
$$

Training

An RNN using LSTM units can be trained in a supervised fashion on a set of training sequences, using an optimization algorithm such as gradient descent combined with backpropagation through time to compute the gradients needed during the optimization process, in order to change each weight of the LSTM network in proportion to the derivative of the error (at the output layer of the LSTM network) with respect to the corresponding weight.
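
As an illustration, a minimal training loop of this kind can be written with an off-the-shelf framework; the sketch below assumes PyTorch and a toy sequence-regression task, so the data, sizes and hyperparameters are placeholders rather than values from the cited literature.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
lstm = nn.LSTM(input_size=3, hidden_size=16, batch_first=True)
head = nn.Linear(16, 1)                      # maps each hidden state to a prediction
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(list(lstm.parameters()) + list(head.parameters()), lr=0.01)

# Toy data: 8 sequences of length 20 with 3 features; the target is the running
# mean of feature 0, a quantity that requires remembering earlier inputs.
x = torch.randn(8, 20, 3)
y = torch.cumsum(x[:, :, :1], dim=1) / torch.arange(1, 21).view(1, 20, 1)

for epoch in range(100):
    optimizer.zero_grad()
    outputs, _ = lstm(x)                     # outputs: (batch, time, hidden)
    pred = head(outputs)                     # per-time-step prediction
    loss = criterion(pred, y)
    loss.backward()                          # backpropagation through time
    optimizer.step()                         # gradient-descent weight update
```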

A problem with using gradient descent for standard RNNs is that error gradients vanish exponentially quickly with the size of the time lag between important events. This is due to $\lim_{n \to \infty} W^n = 0$ if the spectral radius of $W$ is smaller than 1.[32][33]
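
A small numerical illustration of this effect (not taken from the cited sources): repeatedly multiplying by a matrix whose spectral radius is below 1 drives the product, and with it the back-propagated error signal, toward zero.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()   # rescale so the spectral radius is 0.9

for n in (1, 10, 50, 100):
    print(n, np.linalg.norm(np.linalg.matrix_power(W, n)))
# The norm of W^n shrinks toward zero, so error signals propagated back through
# many time steps vanish; a spectral radius above 1 would make them explode instead.
```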

However, with LSTM units, when error values are back-propagated from the output layer, the error remains in the LSTM unit's cell. This "error carousel" continuously feeds error back to each of the LSTM unit's gates, until they learn to cut off the value.

CTC score function

Many applications use stacks of LSTM RNNs[34] and train them by connectionist temporal classification (CTC)[35] to find an RNN weight matrix that maximizes the probability of the label sequences in a training set, given the corresponding input sequences. CTC achieves both alignment and recognition.
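
As a sketch of how such a setup typically looks in practice, the example below uses PyTorch's `nn.CTCLoss` on the frame-wise outputs of a small stacked LSTM; the feature size, label lengths and other numbers are illustrative assumptions, not values from the cited work.

```python
import torch
import torch.nn as nn

T, N, C = 50, 4, 20                      # time steps, batch size, classes (0 = blank)
lstm = nn.LSTM(input_size=13, hidden_size=64, num_layers=2)   # a small stacked LSTM RNN
proj = nn.Linear(64, C)
ctc = nn.CTCLoss(blank=0)

x = torch.randn(T, N, 13)                # e.g. 13 acoustic features per frame
outputs, _ = lstm(x)
log_probs = proj(outputs).log_softmax(dim=2)      # (T, N, C), as CTCLoss expects

targets = torch.randint(1, C, (N, 10))            # unaligned label sequences
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 10, dtype=torch.long)

loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()   # gradients that increase the probability of the label sequences
```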

Alternatives

Sometimes, it can be advantageous to train (parts of) an LSTM by neuroevolution[46] or by policy gradient methods, especially when there is no "teacher" (that is, training labels).

Success

There have been several success stories of training RNNs with LSTM units in a non-supervised fashion.

In 2018, Bill Gates called it a "huge milestone in advancing artificial intelligence" when bots developed by OpenAI were able to beat humans in the game of Dota 2.[36] OpenAI Five consists of five independent but coordinated neural networks. Each network is trained by a policy gradient method without a supervising teacher and contains a single-layer, 1024-unit long short-term memory that sees the current game state and emits actions through several possible action heads.[36]

In 2018, OpenAI also trained a similar LSTM by policy gradients to control a human-like robot hand that manipulates physical objects with unprecedented dexterity.[37]

In 2019, DeepMind's program AlphaStar used a deep LSTM core to excel at the complex video game StarCraft II.[38] This was viewed as significant progress towards artificial general intelligence.[38]

Applications

Applications of LSTM include:

  • Robot control[39]
  • Time series prediction[40]
  • Speech recognition[41][42][43]
  • Rhythm learning[44]
  • Music composition[45]
  • Grammar learning[46][47][48]
  • Handwriting recognition[49][50]
  • Human action recognition[51]
  • Sign Language Translation[52]
  • Protein Homology Detection[53]
  • Predicting subcellular localization of proteins[54]
  • Time series anomaly detection[55]
  • Several prediction tasks in the area of business process management[56]
  • Prediction in medical care pathways[57]
  • Semantic parsing[58]
  • Object Co-segmentation[59][60]

See also

  • 1 the Road
  • Recurrent neural network
  • Deep learning
  • Gated recurrent unit
  • Differentiable neural computer
  • Long-term potentiation
  • Prefrontal cortex basal ganglia working memory
  • Time series

References

1. ^{{cite book |last1=Siegelmann |first1=Hava T. |last2=Sontag |first2=Eduardo D. |title=On the Computational Power of Neural Nets |journal=ACM |date=1992 |volume=COLT '92 |pages=440–449 |doi=10.1145/130385.130432 |isbn=978-0897914970 }}
2. ^{{cite journal | last1 = Graves | first1 = A. | authorlink6 = Jürgen Schmidhuber | last2 = Liwicki | first2 = M. | last3 = Fernandez | first3 = S. | last4 = Bertolami | first4 = R. | last5 = Bunke | first5 = H. | last6 = Schmidhuber | first6 = J. | title = A Novel Connectionist System for Improved Unconstrained Handwriting Recognition | url = http://www.idsia.ch/~juergen/tpami_2008.pdf | journal = IEEE Transactions on Pattern Analysis and Machine Intelligence | volume = 31 | issue = 5| pages = 855–868 | year = 2009 | doi=10.1109/tpami.2008.137| pmid = 19299860 | citeseerx = 10.1.1.139.4502 }}
3. ^{{Cite web|url=https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/43905.pdf|title=Long Short-Term Memory recurrent neural network architectures for large scale acoustic modeling|last=Sak|first=Hasim|last2=Senior|first2=Andrew|date=2014|website=|archive-url=|archive-date=|dead-url=|access-date=|last3=Beaufays|first3=Francoise}}
4. ^{{cite arxiv|last=Li|first=Xiangang|last2=Wu|first2=Xihong|date=2014-10-15|title=Constructing Long Short-Term Memory based Deep Recurrent Neural Networks for Large Vocabulary Speech Recognition|eprint=1410.4281|class=cs.CL}}
5. ^{{Cite news|url=https://www.bloomberg.com/news/features/2018-05-15/google-amazon-and-facebook-owe-j-rgen-schmidhuber-a-fortune|title=Quote: These powers make LSTM arguably the most commercial AI achievement, used for everything from predicting diseases to composing music.|last=Vance|first=Ashlee|date=May 15, 2018|work=Bloomberg Business Week|access-date=2019-01-16}}
6. ^{{Cite journal | author = Sepp Hochreiter | author2 = Jürgen Schmidhuber | title = Long short-term memory | journal = Neural Computation | volume = 9 | issue = 8 | pages = 1735–1780 | year = 1997 | url = https://www.researchgate.net/publication/13853244 | doi=10.1162/neco.1997.9.8.1735 | pmid=9377276}}
7. ^{{Cite journal | author = Felix Gers | author2 = Jürgen Schmidhuber | author3 = Fred Cummins | title = Learning to Forget: Continual Prediction with LSTM | journal = Proc. ICANN'99, IEE, London | pages = 850–855 | year = 1999 | url = https://ieeexplore.ieee.org/document/818041}}
8. ^{{cite arXiv |last1=Cho |first1=Kyunghyun|last2=van Merrienboer|first2=Bart|last3=Gulcehre|first3=Caglar|last4=Bahdanau|first4=Dzmitry|last5=Bougares|first5=Fethi|last6=Schwenk|first6=Holger|last7=Bengio|first7=Yoshua|date=2014 |title=Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation |eprint=1406.1078|class=cs.CL}}
9. ^{{Cite web|url=http://www.mattmahoney.net/dc/text.html#1218|title=The Large Text Compression Benchmark|language=en-US|access-date=2017-01-13}}
10. ^{{Cite journal|last=Graves|first=A.|last2=Liwicki|first2=M.|last3=Fernández|first3=S.|last4=Bertolami|first4=R.|last5=Bunke|first5=H.|last6=Schmidhuber|first6=J.|date=May 2009|title=A Novel Connectionist System for Unconstrained Handwriting Recognition|journal=IEEE Transactions on Pattern Analysis and Machine Intelligence|volume=31|issue=5|pages=855–868|doi=10.1109/tpami.2008.137|pmid=19299860|issn=0162-8828|citeseerx=10.1.1.139.4502}}
11. ^{{cite arxiv|last=Graves|first=Alex|last2=Mohamed|first2=Abdel-rahman|last3=Hinton|first3=Geoffrey|date=2013-03-22|title=Speech Recognition with Deep Recurrent Neural Networks|eprint=1303.5778|class=cs.NE}}
12. ^{{Cite journal|url=https://www.wired.com/2016/06/apple-bringing-ai-revolution-iphone/|title=With QuickType, Apple wants to do more than guess your next text. It wants to give you an AI.|journal=WIRED|language=en-US|access-date=2016-06-16|date=2016-06-14}}
13. ^{{Cite news|url=http://googleresearch.blogspot.co.at/2015/08/the-neural-networks-behind-google-voice.html|title=The neural networks behind Google Voice transcription|last=Beaufays|first=Françoise|date=August 11, 2015|work=Research Blog|access-date=2017-06-27}}
14. ^{{Cite news|url=http://googleresearch.blogspot.co.uk/2015/09/google-voice-search-faster-and-more.html|title=Google voice search: faster and more accurate|last=Sak|first=Haşim|date=September 24, 2015|work=Research Blog|access-date=2017-06-27|last2=Senior|first2=Andrew|language=en-US|last3=Rao|first3=Kanishka|last4=Beaufays|first4=Françoise|last5=Schalkwyk|first5=Johan}}
15. ^{{Cite news|url=http://googleresearch.blogspot.co.at/2016/05/chat-smarter-with-allo.html|title=Chat Smarter with Allo|last=Khaitan|first=Pranav|date=May 18, 2016|work=Research Blog|access-date=2017-06-27}}
16. ^{{cite arxiv|last=Wu|first=Yonghui|last2=Schuster|first2=Mike|last3=Chen|first3=Zhifeng|last4=Le|first4=Quoc V.|last5=Norouzi|first5=Mohammad|last6=Macherey|first6=Wolfgang|last7=Krikun|first7=Maxim|last8=Cao|first8=Yuan|last9=Gao|first9=Qin|date=2016-09-26|title=Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation|eprint=1609.08144|class=cs.CL}}
17. ^{{Cite journal|url=https://www.wired.com/2016/09/google-claims-ai-breakthrough-machine-translation/|title=An Infusion of AI Makes Google Translate More Powerful Than Ever {{!}} WIRED|last=Metz|first=Cade|date=September 27, 2016|journal=Wired|access-date=2017-06-27}}
18. ^{{Cite web|url=https://www.theinformation.com/apples-machines-can-learn-too|title=Apple's Machines Can Learn Too|last=Efrati|first=Amir|date=June 13, 2016|website=The Information|access-date=2017-06-27}}
19. ^{{Cite news|url=http://www.zdnet.com/article/ai-big-data-and-the-iphone-heres-how-apple-plans-to-protect-your-privacy|title=iPhone, AI and big data: Here's how Apple plans to protect your privacy {{!}} ZDNet|last=Ranger|first=Steve|date=June 14, 2016|work=ZDNet|access-date=2017-06-27}}
20. ^{{Cite web|url=http://bgr.com/2016/06/13/ios-10-siri-third-party-apps/|title=iOS 10: Siri now works in third-party apps, comes with extra AI features|last=Smith|first=Chris|date=2016-06-13|website=BGR|access-date=2017-06-27}}
21. ^{{Cite web|url=http://www.allthingsdistributed.com/2016/11/amazon-ai-and-alexa-for-all-aws-apps.html|title=Bringing the Magic of Amazon AI and Alexa to Apps on AWS. - All Things Distributed|last=Vogels|first=Werner|date=30 November 2016|website=www.allthingsdistributed.com|access-date=2017-06-27}}
22. ^{{Cite web|url=https://www.theverge.com/2017/8/4/16093872/facebook-ai-translations-artificial-intelligence|title=Facebook's translations are now powered completely by AI|last=Ong|first=Thuy|date=4 August 2017|website=www.allthingsdistributed.com|access-date=2019-02-15}}
23. ^{{cite web|url=http://biometrics.cse.msu.edu/Publications/MachineLearning/Baytasetal_PatientSubtypingViaTimeAwareLSTMNetworks.pdf | title= Patient Subtyping via Time-Aware LSTM Networks |website= msu.edu |accessdate= 21 Nov 2018}}
24. ^{{cite web |url= http://www.kdd.org/kdd2017/papers/view/patient-subtyping-via-time-aware-lstm-networks |title= Patient Subtyping via Time-Aware LSTM Networks |website= Kdd.org |accessdate= 24 May 2018}}
25. ^{{cite web |url= http://www.kdd.org |title=SIGKDD |website= Kdd.org |accessdate= 24 May 2018}}
26. ^{{Cite web|url=http://newatlas.com/microsoft-speech-recognition-equals-humans/50999|title=Microsoft's speech recognition system is now as good as a human|last=Haridy|first=Rich|date=August 21, 2017|website=newatlas.com|access-date=2017-08-27}}
27. ^{{cite web |last1=bro |first1=n |title=Why can RNNs with LSTM units also suffer from "exploding gradients"? |url=https://stats.stackexchange.com/q/320919/82135 |website=Cross Validated |accessdate=25 December 2018}}
28. ^{{Cite journal | author = Felix A. Gers | author2 = Jürgen Schmidhuber | author3 = Fred Cummins | title = Learning to Forget: Continual Prediction with LSTM | journal = Neural Computation | volume = 12 | issue = 10 | pages = 2451–2471 | year = 2000 | doi=10.1162/089976600300015015| citeseerx = 10.1.1.55.5709 }}
29. ^{{Cite journal|author1=Klaus Greff |author2=Rupesh Kumar Srivastava |author3=Jan Koutník |author4=Bas R. Steunebrink |author5=Jürgen Schmidhuber |arxiv=1503.04069 |title=LSTM: A Search Space Odyssey |journal=IEEE Transactions on Neural Networks and Learning Systems |volume=28 |issue=10 |pages=2222–2232 |date=2015 |doi=10.1109/TNNLS.2016.2582924 |pmid=27411231 }}
30. ^{{Cite journal|last=Gers|first=F. A.|last2=Schmidhuber|first2=E.|date=November 2001|title=LSTM recurrent networks learn simple context-free and context-sensitive languages|url=ftp://ftp.idsia.ch/pub/juergen/L-IEEE.pdf|journal=IEEE Transactions on Neural Networks|volume=12|issue=6|pages=1333–1340|doi=10.1109/72.963769|pmid=18249962|issn=1045-9227|via=}}
31. ^{{Cite journal | author = Xingjian Shi | author2 = Zhourong Chen | author3 = Hao Wang | author4 = Dit-Yan Yeung | author5 = Wai-kin Wong | author6 = Wang-chun Woo | title = Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting | journal = Proceedings of the 28th International Conference on Neural Information Processing Systems | pages = 802–810 | year = 2015 | arxiv = 1506.04214 | bibcode = 2015arXiv150604214S }}
32. ^S. Hochreiter. Untersuchungen zu dynamischen neuronalen Netzen. Diploma thesis, Institut f. Informatik, Technische Univ. Munich, 1991.
33. ^{{Cite book|chapterurl=https://www.researchgate.net/publication/2839938|chapter=Gradient Flow in Recurrent Nets: the Difficulty of Learning Long-Term Dependencies (PDF Download Available)|last=Hochreiter|first=S.|first2=Y. |last2=Bengio|first3=P. |last3=Frasconi |first4=J. |last4=Schmidhuber|editor-first1=S. C. |editor-last1=Kremer and |editor-first2=J. F. |editor-last2=Kolen |title=A Field Guide to Dynamical Recurrent Neural Networks.|date=2001|publisher=IEEE Press}}
34. ^{{Cite journal |last=Fernández |first=Santiago |last2=Graves |first2=Alex |last3=Schmidhuber |first3=Jürgen |date=2007 |title=Sequence labelling in structured domains with hierarchical recurrent neural networks |citeseerx=10.1.1.79.1887 |journal=Proc. 20th Int. Joint Conf. On Artificial Intelligence, Ijcai 2007 |pages=774–779}}
35. ^{{Cite journal |last=Graves |first=Alex |last2=Fernández |first2=Santiago |last3=Gomez |first3=Faustino |date=2006 |title=Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks |citeseerx=10.1.1.75.6306 |journal=In Proceedings of the International Conference on Machine Learning, ICML 2006 |pages=369–376}}
36. ^{{Cite news|url=https://towardsdatascience.com/the-science-behind-openai-five-that-just-produced-one-of-the-greatest-breakthrough-in-the-history-b045bcdc2b69|title=The Science Behind OpenAI Five that just Produced One of the Greatest Breakthrough in the History of AI|last=Rodriguez|first=Jesus|date=July 2, 2018|work=Towards Data Science|access-date=2019-01-15}}
37. ^{{Cite news|url=https://blog.openai.com/learning-dexterity/|title=Learning Dexterity|date=July 30, 2018|work=OpenAI Blog|access-date=2019-01-15}}
38. ^{{Cite news|url=https://medium.com/mlmemoirs/deepminds-ai-alphastar-showcases-significant-progress-towards-agi-93810c94fbe9|title=DeepMind’s AI, AlphaStar Showcases Significant Progress Towards AGI|last=Stanford|first=Stacy|date=January 25, 2019|work=Medium ML Memoirs|access-date=2019-01-15}}
39. ^{{Cite book|last=Mayer|first=H.|last2=Gomez|first2=F.|last3=Wierstra|first3=D.|last4=Nagy|first4=I.|last5=Knoll|first5=A.|last6=Schmidhuber|first6=J.|date=October 2006|title=A System for Robotic Heart Surgery that Learns to Tie Knots Using Recurrent Neural Networks|journal=2006 IEEE/RSJ International Conference on Intelligent Robots and Systems|pages=543–548|doi=10.1109/IROS.2006.282190|isbn=978-1-4244-0258-8|citeseerx=10.1.1.218.3399}}
40. ^{{Cite journal|last=Wierstra|first=Daan|last2=Schmidhuber|first2=J.|last3=Gomez|first3=F. J.|date=2005|title=Evolino: Hybrid Neuroevolution/Optimal Linear Search for Sequence Learning|url=https://www.academia.edu/5830256|journal=Proceedings of the 19th International Joint Conference on Artificial Intelligence (IJCAI), Edinburgh|volume=|pages=853–858|via=}}
41. ^{{cite journal | last1 = Graves | first1 = A. | last2 = Schmidhuber | first2 = J. | year = 2005 | title = Framewise phoneme classification with bidirectional LSTM and other neural network architectures | url = | journal = Neural Networks | volume = 18 | issue = 5–6| pages = 602–610 | doi=10.1016/j.neunet.2005.06.042| pmid = 16112549 | citeseerx = 10.1.1.331.5800 }}
42. ^{{Cite book|last=Fernández|first=Santiago|last2=Graves|first2=Alex|last3=Schmidhuber|first3=Jürgen|date=2007|title=An Application of Recurrent Neural Networks to Discriminative Keyword Spotting|url=http://dl.acm.org/citation.cfm?id=1778066.1778092|journal=Proceedings of the 17th International Conference on Artificial Neural Networks|series=ICANN'07|location=Berlin, Heidelberg|publisher=Springer-Verlag|pages=220–229|isbn=978-3540746935}}
43. ^{{cite journal|last2=Mohamed|first2=Abdel-rahman|last3=Hinton|first3=Geoffrey|date=2013|title=Speech Recognition with Deep Recurrent Neural Networks|journal=Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on|pages=6645–6649|last1=Graves|first1=Alex}}
44. ^{{cite journal | last1 = Gers | first1 = F. | last2 = Schraudolph | first2 = N. | last3 = Schmidhuber | first3 = J. | year = 2002 | title = Learning precise timing with LSTM recurrent networks | url = http://www.jmlr.org/papers/volume3/gers02a/gers02a.pdf | journal = Journal of Machine Learning Research | volume = 3 | issue = | pages = 115–143 }}
45. ^{{Cite book|last=Eck|first=Douglas|last2=Schmidhuber|first2=Jürgen|date=2002-08-28|title=Learning the Long-Term Structure of the Blues|journal=Artificial Neural Networks — ICANN 2002|volume=2415|publisher=Springer, Berlin, Heidelberg|pages=284–289|doi=10.1007/3-540-46084-5_47|isbn=978-3540460848|series=Lecture Notes in Computer Science|citeseerx=10.1.1.116.3620}}
46. ^{{cite journal | last1 = Schmidhuber | first1 = J. | last2 = Gers | first2 = F. | last3 = Eck | first3 = D. | last4 = Schmidhuber | first4 = J. | last5 = Gers | first5 = F. | year = 2002 | title = Learning nonregular languages: A comparison of simple recurrent networks and LSTM | url = | journal = Neural Computation | volume = 14 | issue = 9| pages = 2039–2041 | doi=10.1162/089976602320263980| pmid = 12184841 | citeseerx = 10.1.1.11.7369 }}
47. ^{{cite journal | last1 = Gers | first1 = F. A. | last2 = Schmidhuber | first2 = J. | year = 2001 | title = LSTM Recurrent Networks Learn Simple Context Free and Context Sensitive Languages | url = ftp://ftp.idsia.ch/pub/juergen/L-IEEE.pdf | journal = IEEE Transactions on Neural Networks | volume = 12 | issue = 6| pages = 1333–1340 | doi=10.1109/72.963769| pmid = 18249962 }}
48. ^{{cite journal | last1 = Perez-Ortiz | first1 = J. A. | last2 = Gers | first2 = F. A. | last3 = Eck | first3 = D. | last4 = Schmidhuber | first4 = J. | year = 2003 | title = Kalman filters improve LSTM network performance in problems unsolvable by traditional recurrent nets | url = | journal = Neural Networks | volume = 16 | issue = 2| pages = 241–250 | doi=10.1016/s0893-6080(02)00219-8| pmid = 12628609 | citeseerx = 10.1.1.381.1992 }}
49. ^A. Graves, J. Schmidhuber. Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks. Advances in Neural Information Processing Systems 22, NIPS'22, pp 545–552, Vancouver, MIT Press, 2009.
50. ^{{Cite book|last=Graves|first=Alex|last2=Fernández|first2=Santiago|last3=Liwicki|first3=Marcus|last4=Bunke|first4=Horst|last5=Schmidhuber|first5=Jürgen|date=2007|title=Unconstrained Online Handwriting Recognition with Recurrent Neural Networks|url=http://dl.acm.org/citation.cfm?id=2981562.2981635|journal=Proceedings of the 20th International Conference on Neural Information Processing Systems|series=NIPS'07|location=USA|publisher=Curran Associates Inc.|pages=577–584|isbn=9781605603520}}
51. ^M. Baccouche, F. Mamalet, C Wolf, C. Garcia, A. Baskurt. Sequential Deep Learning for Human Action Recognition. 2nd International Workshop on Human Behavior Understanding (HBU), A.A. Salah, B. Lepri ed. Amsterdam, Netherlands. pp. 29–39. Lecture Notes in Computer Science 7065. Springer. 2011
52. ^{{cite arXiv | last=Huang | first=Jie | last2=Zhou | first2=Wengang | last3=Zhang | first3=Qilin | last4=Li | first4=Houqiang | last5=Li | first5=Weiping | title=Video-based Sign Language Recognition without Temporal Segmentation | date=2018-01-30 | eprint=1801.10111 | class=cs.CV }}
53. ^{{Cite journal | last1 = Hochreiter | first1 = S. | last2 = Heusel | first2 = M. | last3 = Obermayer | first3 = K. | doi = 10.1093/bioinformatics/btm247 | title = Fast model-based protein homology detection without alignment | journal = Bioinformatics | volume = 23 | issue = 14 | pages = 1728–1736 | year = 2007 | pmid = 17488755| pmc = }}
54. ^{{cite journal | last1 = Thireou | first1 = T. | last2 = Reczko | first2 = M. | year = 2007 | title = Bidirectional Long Short-Term Memory Networks for predicting the subcellular localization of eukaryotic proteins | url = | journal = IEEE/ACM Transactions on Computational Biology and Bioinformatics (TCBB) | volume = 4 | issue = 3| pages = 441–446 | doi=10.1109/tcbb.2007.1015| pmid = 17666763 }}
55. ^{{Cite journal|last=Malhotra|first=Pankaj|last2=Vig|first2=Lovekesh|last3=Shroff|first3=Gautam|last4=Agarwal|first4=Puneet|date=April 2015|title=Long Short Term Memory Networks for Anomaly Detection in Time Series|url=https://www.elen.ucl.ac.be/Proceedings/esann/esannpdf/es2015-56.pdf|journal=European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning — ESANN 2015}}
56. ^{{cite book | last1 = Tax| first1 = N. | last2 = Verenich | first2 = I. | last3 = La Rosa | first3 = M. | last4 = Dumas | first4 = M. | year = 2017 | title = Predictive Business Process Monitoring with LSTM neural networks | journal = Proceedings of the International Conference on Advanced Information Systems Engineering (CAiSE) | volume = 10253 | pages = 477–492| doi=10.1007/978-3-319-59536-8_30| arxiv = 1612.02130 | series = Lecture Notes in Computer Science | isbn = 978-3-319-59535-1 }}
57. ^{{cite journal | last1 = Choi| first1 = E. | last2 = Bahadori| first2 = M.T. | last3 = Schuetz | first3 = E. | last4 = Stewart| first4 = W. | last5 = Sun| first5 = J. | year = 2016 | title = Doctor AI: Predicting Clinical Events via Recurrent Neural Networks | url = http://proceedings.mlr.press/v56/Choi16.html | journal = Proceedings of the 1st Machine Learning for Healthcare Conference | pages = 301–318| doi=| bibcode = 2015arXiv151105942C | arxiv = 1511.05942 }}
58. ^Jia, Robin; Liang, Percy (2016-06-11). "Data Recombination for Neural Semantic Parsing". arXiv:1606.03622 [cs].
59. ^{{cite journal | last=Wang | first=Le | last2=Duan | first2=Xuhuan | last3=Zhang | first3=Qilin | last4=Niu | first4=Zhenxing | last5=Hua | first5=Gang | last6=Zheng | first6=Nanning | title=Segment-Tube: Spatio-Temporal Action Localization in Untrimmed Videos with Per-Frame Segmentation | journal=Sensors | volume=18 | issue=5 | date=2018-05-22 | issn=1424-8220 | doi=10.3390/s18051657 | pmid=29789447 | pmc=5982167 | page=1657 | url=https://qilin-zhang.github.io/_pages/pdfs/Segment-Tube_Spatio-Temporal_Action_Localization_in_Untrimmed_Videos_with_Per-Frame_Segmentation.pdf}}
60. ^{{cite conference | last=Duan | first=Xuhuan | last2=Wang | first2=Le | last3=Zhai | first3=Changbo | last4=Zheng | first4=Nanning | last5=Zhang | first5=Qilin | last6=Niu | first6=Zhenxing | last7=Hua | first7=Gang | title=Joint Spatio-Temporal Action Localization in Untrimmed Videos with Per-Frame Segmentation | publisher=25th IEEE International Conference on Image Processing (ICIP)| year=2018 | isbn=978-1-4799-7061-2 | doi=10.1109/icip.2018.8451692 | page=}}

External links

  • Recurrent Neural Networks with over 30 LSTM papers by Jürgen Schmidhuber's group at IDSIA
  • {{cite web |url= http://www.felixgers.de/papers/phd.pdf |work= PhD thesis |last= Gers |first= Felix |date= 2001 |title= Long Short-Term Memory in Recurrent Neural Networks }}
  • {{cite journal |url= http://www.jmlr.org/papers/volume3/gers02a/gers02a.pdf |last1= Gers |first1= Felix A. |first2= Nicol N. |last2= Schraudolph |first3= Jürgen |last3= Schmidhuber |title= Learning precise timing with LSTM recurrent networks |journal= Journal of Machine Learning Research |volume= 3 |date= Aug 2002 |pages= 115–143 }}
  • {{cite web |url= http://etd.uwc.ac.za/xmlui/handle/11394/249 |title= Data Mining, Fraud Detection and Mobile Telecommunications: Call Pattern Analysis with Unsupervised Neural Networks |hdl= 11394/249 |last= Abidogun |first= Olusola Adeniyi |work= Master's Thesis |deadurl= no |archive-date= May 22, 2012 |archive-url= https://web.archive.org/web/20120522234026/http://etd.uwc.ac.za/usrfiles/modules/etd/docs/etd_init_3937_1174040706.pdf |year= 2005 }}
  • original with two chapters devoted to explaining recurrent neural networks, especially LSTM.
  • {{cite web |url= http://www.cs.umd.edu/~dmonner/papers/nn2012.pdf |title= A generalized LSTM-like training algorithm for second-order recurrent neural networks |first1= Derek D. |last1= Monner |first2= James A. |last2= Reggia |date= 2010 |quote= High-performing extension of LSTM that has been simplified to a single node type and can train arbitrary architectures }}
  • {{cite web |url= http://christianherta.de/lehre/dataScience/machineLearning/neuralNetworks/LSTM.html |work= Tutorial |title= How to implement LSTM in Python with Theano |first= Christian |last= Herta }}
  • {{github|guillaume-chevalier/LSTM-Human-Activity-Recognition| Chevalier, Guillaume. Tutorial: How to use LSTMs with TensorFlow in Python on cellphone sensor data}}
{{DEFAULTSORT:Long Short Term Memory}}

Category: Artificial neural networks
