Spiking neural network

  1. Beginnings

  2. Applications

  3. Software

  4. Hardware

  5. See also

  6. References

  7. External links

{{refimprove|date=December 2018}}{{main|Artificial neural network}}

Spiking neural networks (SNNs) are artificial neural network models that more closely mimic natural neural networks.[1] In addition to neuronal and synaptic state, SNNs also incorporate the concept of time into their operating model. The idea is that neurons in the SNN do not fire at each propagation cycle (as it happens with typical multi-layer perceptron networks), but rather fire only when a membrane potential – an intrinsic quality of the neuron related to its membrane electrical charge – reaches a specific value. When a neuron fires, it generates a signal which travels to other neurons which, in turn, increase or decrease their potentials in accordance with this signal.

In the context of spiking neural networks, the current activation level (modeled as some differential equation) is normally considered to be the neuron's state, with incoming spikes pushing this value higher before it either triggers a spike or decays over time. Various coding methods exist for interpreting the outgoing spike train as a real-valued number, relying on the frequency of spikes, the timing between spikes, or both, to encode information.
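The leaky integrate-and-fire (LIF) neuron is the simplest widely used model of this behaviour. The sketch below (in plain NumPy, with purely illustrative parameter values that are not taken from any particular study or simulator) shows the dynamics just described: input drive pushes the membrane potential up, the potential leaks back toward rest, and a spike is emitted and the potential reset whenever a threshold is crossed.

<syntaxhighlight lang="python">
import numpy as np

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_reset=0.0, v_threshold=1.0):
    """Simulate one leaky integrate-and-fire neuron.

    Returns the membrane-potential trace and the spike times (in steps).
    All parameter values are illustrative assumptions.
    """
    v = v_rest
    trace, spikes = [], []
    for t, i_in in enumerate(input_current):
        # Leaky integration: decay toward rest, driven by the input current.
        v += dt / tau * (-(v - v_rest) + i_in)
        if v >= v_threshold:      # threshold crossing -> emit a spike
            spikes.append(t)
            v = v_reset           # reset the membrane potential after firing
        trace.append(v)
    return np.array(trace), spikes

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    drive = rng.uniform(0.0, 2.5, size=500)   # noisy, strictly illustrative input
    trace, spikes = simulate_lif(drive)
    print(f"{len(spikes)} spikes, first few at steps {spikes[:5]}")
</syntaxhighlight>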

Beginnings

{{refimprove section|date=December 2018}}
  • Modern artificial neural networks are usually fully connected, receiving continuous values and outputting continuous values. Although these networks have enabled breakthroughs in many fields, they are biologically inaccurate and do not mimic the operating mechanism of the neurons in a living brain.[2]
  • The first scientific model of a spiking neuron was proposed by Alan Lloyd Hodgkin and Andrew Huxley in 1952. This model describes how action potentials are initiated and propagated. Spikes, however, are not generally transmitted directly between neurons: communication requires the exchange of chemical substances in the synaptic gap, called neurotransmitters. The complexity and variability of biological neurons have resulted in various neuron models, such as the integrate-and-fire model (Lapicque, 1907), the FitzHugh–Nagumo model (1961–1962) and the Hindmarsh–Rose model (1984).
  • From the information theory point of view, the problem is to propose a model that explains how information is encoded and decoded by a series of trains of pulses, i.e. action potentials. Thus, one of the fundamental questions of neuroscience is to determine if neurons communicate by a rate or temporal code.[3] Temporal coding suggests that a single spiking neuron can replace hundreds of hidden units on a sigmoidal neural net.[1]
  • A spiking neural network, which simulates biological neurons more closely, also takes timing information into account. The idea is that neurons in such a dynamic network are not activated in every iteration of propagation (as is the case in a typical multilayer perceptron network), but only when their membrane potential reaches a certain value. When a neuron is activated, it produces a signal that is passed on to other neurons, raising or lowering their membrane potentials.
  • In a spiking neural network, the current level of activation of a neuron (modeled as a differential equation of some kind) is generally considered to be its state; an input spike causes this value to rise for a period of time and then gradually decay. A number of encoding schemes have emerged to interpret these output spike trains as real numbers, relying on the spike frequency, the inter-spike intervals, or both. Advances in neuroscience have made it possible to build neural network models based on precise spike generation times: by using the exact times at which spikes occur, such a network can carry more information and offer greater computing power.
  • It is important to note that pulse-coupled neural networks (PCNNs) are often confused with spiking neural networks (SNNs). A PCNN can be seen as a kind of SNN, whereas SNN is the broader category; both rely on spike coding.
  • At first glance, the SNN approach looks like a step backwards: continuous outputs are replaced by binary ones, and the resulting spike trains are not very interpretable. But spike trains increase the ability to process spatiotemporal data, i.e. real-world sensory data. The spatial aspect is that neurons are connected only to nearby neurons, so that blocks of the input can be processed separately (similar to a CNN using filters). The temporal aspect is that spike trains unfold over time, so that the information lost in a binary coding can be recovered from the spike timing. This allows temporal data to be processed naturally, without the additional complexity of a recurrent neural network (RNN). It turns out that spiking neurons are more powerful computational units than traditional artificial neurons.[4]
  • Since SNNs are theoretically more powerful than second-generation networks, it is natural to wonder why they are not widely used. The main problem is training. Although unsupervised, biologically inspired learning methods such as Hebbian learning and STDP are available (a minimal sketch of an STDP update follows this list), no effective supervised training method for SNNs is yet known that outperforms second-generation networks. Because spike trains are not differentiable, backpropagation-based methods such as gradient descent cannot be applied directly. Therefore, to use SNNs on real-world tasks, an efficient supervised learning method still needs to be developed; this is a difficult task because, given the biological realism of these networks, it amounts to determining how the brain itself learns.[5]
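As a concrete illustration of the unsupervised, biologically inspired rules mentioned above, the sketch below implements a basic pair-based STDP weight update: a presynaptic spike shortly before a postsynaptic spike strengthens the synapse, while the reverse order weakens it. The time constants, learning rates and spike times are illustrative assumptions, not values from any particular study.

<syntaxhighlight lang="python">
import numpy as np

def stdp_delta_w(t_pre, t_post, a_plus=0.01, a_minus=0.012,
                 tau_plus=20.0, tau_minus=20.0):
    """Weight change for a single pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre before post: potentiation
        return a_plus * np.exp(-dt / tau_plus)
    if dt < 0:    # post before pre: depression
        return -a_minus * np.exp(dt / tau_minus)
    return 0.0

# Apply the rule to every pre/post spike pair observed at one synapse.
pre_spikes = [10.0, 45.0, 80.0]    # illustrative spike times in ms
post_spikes = [12.0, 40.0, 95.0]
w = 0.5                            # initial synaptic weight
for t_pre in pre_spikes:
    for t_post in post_spikes:
        w += stdp_delta_w(t_pre, t_post)
w = float(np.clip(w, 0.0, 1.0))    # keep the weight in a bounded range
print(f"updated weight: {w:.3f}")
</syntaxhighlight>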

Applications

{{refimprove section|date=December 2018}}
  • This kind of neural network can in principle be used for information processing applications in the same way as traditional artificial neural networks.[6] In addition, spiking neural networks can model the central nervous system of a virtual insect seeking food without prior knowledge of the environment.[7] Due to their more realistic properties, they can also be used to study the operation of biological neural circuits: starting with a hypothesis about the topology of a biological neuronal circuit and its function, electrophysiological recordings of this circuit can be compared to the output of the corresponding spiking artificial neural network simulated on a computer, testing the plausibility of the starting hypothesis.
  • In practice, there is a major difference between the theoretical power of spiking neural networks and what has been demonstrated. They have proved useful in neuroscience, but not (yet) in engineering. Some large-scale neural network models have been designed that take advantage of the pulse coding found in spiking neural networks; these networks mostly rely on the principles of reservoir computing (a minimal sketch of this approach follows this list). However, real-world application of large-scale spiking neural networks has been limited because the increased computational cost of simulating realistic neural models has not been justified by commensurate benefits in computational power. As a result, there has been little application of large-scale spiking neural networks to computational tasks of the order and complexity commonly addressed with rate-coded (second-generation) neural networks. In addition, it can be difficult to adapt second-generation neural network models into real-time spiking neural networks (especially if the network algorithms are defined in discrete time). It is relatively easy to construct a spiking neural network model and observe its dynamics; it is much harder to develop a model with stable behavior that computes a specific function.{{Citation needed|date=January 2012}}
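The reservoir-computing approach mentioned above can be sketched as follows: a fixed, random recurrent network of leaky integrate-and-fire neurons serves as the reservoir, and only a linear readout on the neurons' spike counts is trained (here with ridge regression). The network sizes, constants and the toy regression task below are illustrative assumptions, not any published benchmark.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
N_RES, N_IN, T_STEPS, DT, TAU, V_TH = 100, 4, 200, 1.0, 20.0, 1.0

w_in = rng.normal(0.0, 0.8, size=(N_RES, N_IN))     # fixed input weights
w_rec = rng.normal(0.0, 0.15, size=(N_RES, N_RES))  # fixed recurrent weights

def reservoir_spike_counts(x):
    """Drive the reservoir with a constant input vector; return spike counts."""
    v = np.zeros(N_RES)
    last_spikes = np.zeros(N_RES)
    counts = np.zeros(N_RES)
    for _ in range(T_STEPS):
        drive = w_in @ x + w_rec @ last_spikes
        v += DT / TAU * (-v + drive)          # leaky integration
        last_spikes = (v >= V_TH).astype(float)
        v[last_spikes > 0] = 0.0              # reset the neurons that fired
        counts += last_spikes
    return counts

# Toy task: recover the sum of the inputs from the reservoir's spike counts.
X = rng.uniform(0.5, 1.5, size=(50, N_IN))
y = X.sum(axis=1)
features = np.stack([reservoir_spike_counts(x) for x in X])

# Ridge-regression readout: the only trained part of the system.
lam = 1e-2
readout = np.linalg.solve(features.T @ features + lam * np.eye(N_RES),
                          features.T @ y)
pred = features @ readout
print("readout mean squared error:", float(np.mean((pred - y) ** 2)))
</syntaxhighlight>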

Software

{{refimprove section|date=December 2018}}

There is a diverse range of application software for simulating spiking neural networks. This software can be classified according to the purpose of the simulation:

  • Software used primarily to simulate the spiking neural networks[8] found in biology, in order to study their operation and characteristics. In this group we find simulators such as GENESIS (the GEneral NEural SImulation System[9]), developed in James Bower's laboratory at Caltech; NEURON, mainly developed by Michael Hines, John W. Moore and Ted Carnevale at Yale University and Duke University; Brian, developed by Romain Brette and Dan Goodman at the École Normale Supérieure; and NEST, developed by the NEST Initiative. This type of application software usually supports the simulation of complex neural models with a high level of detail and accuracy, but large networks usually require very time-consuming simulations.
  • Software that addresses information processing tasks in order to solve problems. Commercial processing software such as BrainChip Studio belongs to this group. It is based on application software developed by Delorme and Thorpe in a collaboration between the Centre de Recherche Cerveau et Cognition and BrainChip (formerly SpikeNet Technology). The supervised learning software can be trained instantaneously, offers high accuracy and very low power consumption, and has considerable advantages over convolutional neural networks when massive datasets are not available. It is currently in commercial use in civil and commercial surveillance applications in Europe and North America.
  • Software that supports the efficient simulation of relatively complex neural models, so that it is also convenient for information processing tasks. This kind of software can exploit characteristics of biological neurons to perform computational functions while at the same time allowing the functionality of those characteristics to be studied. In this group we find EDLUT, developed at the University of Granada. Such software must be efficient enough to run fast simulations, sometimes even in real time, while still supporting neural models that are detailed and biologically plausible.
  • In the brain, learning is achieved through the ability of synapses to reconfigure the strength with which they connect neurons (synaptic plasticity). In promising solid-state synapses called memristors, conductance can be finely tuned by voltage pulses and set to evolve according to a biological learning rule called spike-timing-dependent plasticity (STDP). Future neuromorphic architectures[10] will comprise billions of such nanosynapses, which requires a clear understanding of the physical mechanisms responsible for plasticity. Boyn et al. reported synapses based on ferroelectric tunnel junctions and showed that STDP can be harnessed from inhomogeneous polarization switching. Through combined scanning probe imaging, electrical transport measurements and atomic-scale molecular dynamics, they demonstrated that conductance variations can be modelled by the nucleation-dominated reversal of domains. Based on this physical model, their simulations show that arrays of ferroelectric nanosynapses can autonomously learn to recognize patterns in a predictable way, opening a path towards unsupervised learning in spiking neural networks.[11]
  • Classification capabilities of spiking networks trained with unsupervised learning methods[12] have been tested on common benchmark datasets such as Iris, Wisconsin Breast Cancer and the Statlog Landsat dataset (Newman et al. 1998, Bohte et al. 2002a, Belatreche et al. 2003). Various approaches to information encoding and network design have been used. For example, Bohte and coauthors (2002b) considered a two-layer feedforward network for data clustering and classification. Building on the idea proposed in Hopfield (1995), they implemented models of local receptive fields combining the properties of radial basis functions (RBF) and spiking neurons to convert input signals (the data to be classified) from a floating-point representation into a spiking representation (a minimal sketch of such an encoding follows this list).[13]
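The receptive-field encoding mentioned in the last item can be sketched as follows: a real-valued input is mapped onto the firing times of a small population of neurons with overlapping Gaussian receptive fields, so that neurons whose preferred value is closest to the input fire earliest. The number of fields, their width and the time window are illustrative assumptions.

<syntaxhighlight lang="python">
import numpy as np

def gaussian_receptive_field_encoding(x, x_min=0.0, x_max=1.0,
                                      n_fields=8, t_max=10.0):
    """Return one spike time (in ms) per encoding neuron for a scalar x."""
    centers = np.linspace(x_min, x_max, n_fields)      # preferred values
    width = (x_max - x_min) / (n_fields - 1)
    # Activation in [0, 1]: 1 at the centre of a field, close to 0 far away.
    activation = np.exp(-0.5 * ((x - centers) / width) ** 2)
    # Strongly activated neurons fire early, weakly activated ones fire late.
    return t_max * (1.0 - activation)

print(np.round(gaussian_receptive_field_encoding(0.37), 2))
</syntaxhighlight>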

Hardware

{{refimprove section|date=December 2018}}
  • Neurogrid, built at Stanford University, is a board that can simulate spiking neural networks directly in hardware. SpiNNaker (Spiking Neural Network Architecture), designed at the University of Manchester, uses ARM processors as the building blocks of a massively parallel computing platform based on a six-layer thalamocortical model.[14]
  • Another implementation is the TrueNorth processor from IBM. This processor contains 5.4 billion transistors, but is designed to consume very little power, only 70 milliwatts; most processors in personal computers contain about 1.4 billion transistors and require 35 watts or more. IBM refers to the design principle behind TrueNorth as neuromorphic computing. Its primary purpose is pattern recognition; while critics say the chip isn't powerful enough, its supporters point out that this is only the first generation, and the capabilities of improved iterations will become clear.[15]
  • The first commercial implementation of a hardware-accelerated spiking neural network system was introduced by BrainChip in September 2017. BrainChip Accelerator is an 8-lane PCI-Express add-in card that increases the speed and accuracy of the object recognition function of the BrainChip Studio software (see above) by up to six times. The processing is done by six BrainChip Accelerator cores in a field-programmable gate array (FPGA). Each core performs fast, user-defined image scaling, spike generation, and spiking neural network comparison to recognize objects. In combination with a CPU, BrainChip Accelerator can process 16 channels of video simultaneously, with an effective throughput of over 600 frames per second. The low-power characteristics of BrainChip's spiking neural technology result in a total consumption of only 15 watts. The system is particularly suited to helping law enforcement and intelligence organizations rapidly search vast amounts of video footage and identify patterns or faces; the SNN technology enables the accelerator to work on low-resolution video and requires only a 24x24 pixel image to detect and classify faces.
  • Another hardware platform aimed at providing reconfigurable, general-purpose, real-time networks of spiking neurons is the Dynamic Neuromorphic Asynchronous Processor (DYNAP). DYNAP[16] uses a unique combination of slow, low-power, inhomogeneous subthreshold analog circuits and fast programmable digital circuits. This allows the implementation of real-time spike-based neural processing architectures[17] in which memory and computation are co-localized, addressing the von Neumann bottleneck and enabling real-time, massively multiplexed communication of spiking events for realising large networks. Recurrent networks, feed-forward networks, convolutional networks, attractor networks, echo-state networks, deep networks and sensory fusion networks are a few of the possibilities.[18]
  • There is also a hardware platform from Intel that supports SNNs. Loihi is a 60 mm² chip fabricated in Intel's 14-nm process that advances the state of the art in modeling spiking neural networks in silicon. It integrates a wide range of features that are novel for the field, such as hierarchical connectivity, dendritic compartments, synaptic delays and, most importantly, programmable synaptic learning rules.[19] Running a spiking convolutional form of the Locally Competitive Algorithm (an illustrative, non-spiking sketch of this algorithm follows this list), Loihi can solve LASSO optimization problems with an energy-delay product more than three orders of magnitude better than conventional solvers running on a CPU at iso process/voltage/area. This provides an unambiguous example of spike-based computation outperforming all known conventional solutions.[20]
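For readers unfamiliar with the Locally Competitive Algorithm (LCA) mentioned in the last item, the non-spiking NumPy sketch below illustrates its dynamics on a small synthetic LASSO problem: a dictionary of unit-norm atoms competes through lateral inhibition until a sparse code for the input signal remains. This is only an illustration of the underlying algorithm under assumed problem sizes; it does not reproduce Intel's spiking convolutional implementation on Loihi.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
n_features, n_dims, lam, tau, steps = 50, 20, 0.1, 10.0, 500

phi = rng.normal(size=(n_dims, n_features))
phi /= np.linalg.norm(phi, axis=0)               # unit-norm dictionary atoms
a_true = np.zeros(n_features)
a_true[rng.choice(n_features, 3, replace=False)] = rng.uniform(0.5, 1.5, 3)
signal = phi @ a_true                            # signal with a sparse code

def soft_threshold(u, threshold):
    return np.sign(u) * np.maximum(np.abs(u) - threshold, 0.0)

u = np.zeros(n_features)                         # internal, membrane-like state
drive = phi.T @ signal                           # feed-forward drive
inhibition = phi.T @ phi - np.eye(n_features)    # lateral competition term
for _ in range(steps):
    a = soft_threshold(u, lam)
    u += (drive - u - inhibition @ a) / tau      # LCA dynamics
a = soft_threshold(u, lam)                       # final sparse LASSO solution
print("nonzero coefficients recovered:", int(np.count_nonzero(np.abs(a) > 0.05)))
</syntaxhighlight>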

See also

  • CoDi
  • Cognitive architecture
  • Cognitive map
  • Cognitive computer
  • Computational neuroscience
  • Neural coding
  • Neural correlate
  • Neural decoding
  • Neuroethology
  • Neuroinformatics
  • Models of neural computation
  • Motion perception
  • Systems neuroscience

References

1. ^{{cite journal|last1=Maass|first1=Wolfgang|title=Networks of spiking neurons: The third generation of neural network models|journal=Neural Networks|volume=10|issue=9|year=1997|pages=1659–1671|issn=0893-6080|doi=10.1016/S0893-6080(97)00011-7}}
2. ^{{Cite web | url=https://blog.csdn.net/Uwr44UOuQcNsUQb60zk2/article/details/79060595 | title=简述脉冲神经网络Snn:下一代神经网络 - 机器之心 - Csdn博客}}
3. ^{{cite book | author = Wulfram Gerstner | chapter = Spiking Neurons |editor1=Wolfgang Maass |editor2=Christopher M. Bishop | year = 2001 | isbn = 978-0-262-63221-8 | title = Pulsed Neural Networks | publisher = MIT Press | chapter-url = https://books.google.com/books?id=jEug7sJXP2MC&pg=PA3&dq=%22Pulsed+Neural+Networks%22+rate-code+neuroscience&ei=FEo0ScetL4zukgSyldy8Ag }}
4. ^{{Cite web | url=https://blog.csdn.net/Uwr44UOuQcNsUQb60zk2/article/details/79060595 | title=简述脉冲神经网络Snn:下一代神经网络 - 机器之心 - Csdn博客}}
5. ^{{Cite web | url=https://blog.csdn.net/Uwr44UOuQcNsUQb60zk2/article/details/79060595 | title=简述脉冲神经网络Snn:下一代神经网络 - 机器之心 - Csdn博客}}
6. ^{{Cite journal | last1 = Alnajjar | first1 = F. | last2 = Murase | first2 = K. | title = A simple Aplysia-like spiking neural network to generate adaptive behavior in autonomous robots | journal = Adaptive Behavior | volume = 14 | issue = 5 | pages = 306–324 | doi =10.1177/1059712308093869| year = 2008 }}
7. ^{{cite book |author1=X Zhang |author2=Z Xu |author3=C Henriquez |author4=S Ferrari | title = Spike-based indirect training of a spiking neural network-controlled virtual insect | journal = Decision and Control (CDC), IEEE | pages = 6798–6805 |date=Dec 2013 | doi = 10.1109/CDC.2013.6760966|isbn=978-1-4673-5717-3 |citeseerx=10.1.1.671.6351 }}
8. ^{{Cite journal|last=Abbott|first=LF.|last2=Nelson|first2=SB.|date=2000|title=Synaptic plasticity: taming the beast|journal=Nat Neurosci|volume=3|pages=1178–1183}}
9. ^{{Cite journal|last=Atiya|first=AF.|last2=Parlos|first2=AG.|date=2000|title=New results on recurrent network training: unifying the algorithms and accelerating convergence|journal=IEEE Trans Neural Networks|volume=11|pages= 697–709}}
10. ^Sutton RS, Barto AG (2002) Reinforcement Learning: An Introduction. Bradford Books, MIT Press, Cambridge, MA.
11. ^{{Cite journal|last=Boyn|first=S.|last2=Grollier|first2=J.|last3=Lecerf|first3=G.|date=2017-04-03|title=Learning through ferroelectric domain dynamics in solid-state synapses|journal=Nature Communications|volume=8|pages=14736|doi=10.1038/ncomms14736|pmid=28368007|pmc=5382254|bibcode=2017NatCo...814736B}}
12. ^{{Cite journal|last=Ponulak|first=F.|last2=Kasinski|first2=A.|date=2010|title=Supervised learning in spiking neural networks with ReSuMe: sequence learning, classification and spike-shifting|journal= Neural Comput|volume= 22|issue=2|pages=467–510|pmid=19842989|doi=10.1162/neco.2009.11-08-901}}
13. ^{{Cite journal|last=Pfister|first=JP.|last2=Toyoizumi|first2=T.|last3=Barber|first3=D.|last4=Gerstner|first4=W.|date=2006|title=Optimal spike-timing dependent plasticity for precise action potential firing|journal=Neural Comput|volume=18|pages=1318–1348|bibcode=2005q.bio.....2037P|arxiv=q-bio/0502037}}
14. ^{{Cite book| last1 = Xin Jin| last2 = Furber | first2 = S. B.| authorlink2 = Steve Furber| last3 = Woods | first3 = J. V.| doi = 10.1109/IJCNN.2008.4634194| chapter = Efficient modelling of spiking neural networks on a scalable chip multiprocessor| title = 2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence)| pages = 2812–2819| year = 2008| isbn = 978-1-4244-1820-6| pmid = | pmc = }}
15. ^Markoff, John, A new chip functions like a brain, IBM says, New York Times, August 8, 2014, p.B1
16. ^Sayenko DS, Vette AH, Kamibayashi K, Nakajima T, Akai M, Nakazawa K (2007) Facilitation of the soleus stretch reflex induced by electrical excitation of plantar cutaneous afferents located around the heel. Neurosci Lett 415: 294–298.
17. ^Schrauwen B, Campenhout JV (2004) Improving spikeprop: enhancements to an error-backpropagation rule for spiking neural networks. In: Proceedings of 15th ProRISC Workshop, Veldhoven, the Netherlands
18. ^{{Cite book|last=Indiveri|first=G.|last2=Corradi|first2=F.|last3=Qiao|first3=N.|date=2015-12-01|title=Neuromorphic architectures for spiking deep neural networks|url=http://ieeexplore.ieee.org/document/7409623/|journal=2015 IEEE International Electron Devices Meeting (IEDM)|pages=4.2.1–4.2.4|doi=10.1109/IEDM.2015.7409623|isbn=978-1-4673-9894-7}}
19. ^Yamazaki T, Tanaka S (2007) A spiking network model for passage-of-time representation in the cerebellum. Eur J Neurosci 26: 2279–2292.
20. ^{{Cite journal|last=Davies|first=M.|last2=Srinivasa|first2=N.|last3=Lin|first3=TH.|date=2018-01-01|title=Loihi: A Neuromorphic Manycore Processor with On-Chip Learning|url=https://ieeexplore.ieee.org/document/8259423|journal= IEEE Micro|volume=38|pages=82–99|doi=10.1109/MM.2018.112130359}}

External links

  • Full text of the book Spiking Neuron Models. Single Neurons, Populations, Plasticity by Wulfram Gerstner and Werner M. Kistler ({{ISBN|0-521-89079-9}})

Categories: Computational statistics | Artificial neural networks | Articles containing video clips | Computational neuroscience
