AI accelerator

Contents

  1. History of AI acceleration
     Early attempts · Heterogeneous computing · Use of GPU · Use of FPGAs · Emergence of dedicated AI accelerator ASICs · In-memory computing architectures
  2. Nomenclature
  3. Examples
     Stand alone products · GPU based products · AI accelerating co-processors · Research and unreleased products
  4. Potential applications
  5. See also
  6. References
  7. External links

{{Use American English|date = January 2019}}{{Short description|Hardware acceleration unit for artificial intelligence tasks}}{{Use mdy dates|date = January 2019}}

An AI accelerator is a class of microprocessor[1] or computer system[2] designed to provide hardware acceleration for artificial intelligence applications, especially artificial neural networks, machine vision, and machine learning. Typical applications include algorithms for robotics, the internet of things, and other data-intensive or sensor-driven tasks.[3] AI accelerators are often manycore designs and generally focus on low-precision arithmetic, novel dataflow architectures, or in-memory computing capability.[4] A number of vendor-specific terms exist for devices in this category, and it is an emerging technology without a dominant design. AI accelerators are found in many consumer devices, such as smartphones, tablets, and computers; see the "Examples" section below.

History of AI acceleration

Computer systems have frequently complemented the CPU with special-purpose accelerators for specialized tasks, known as coprocessors. Notable application-specific hardware units include video cards and graphics processing units for graphics, sound cards for audio, and digital signal processors. As deep learning and artificial intelligence workloads rose in prominence in the 2010s, specialized hardware units were developed or adapted from existing products to accelerate these tasks.

Early attempts

As early as 1993, digital signal processors were used as neural network accelerators, e.g. to accelerate optical character recognition software.[4] In the 1990s, there were also attempts to create parallel high-throughput systems for workstations aimed at various applications, including neural network simulations.[5][6][7] FPGA-based accelerators were also first explored in the 1990s for both inference[8] and training.[9] ANNA was a neural network CMOS accelerator developed by Yann LeCun.[10]

Heterogeneous computing

Heterogeneous computing refers to incorporating a number of specialized processors in a single system, or even a single chip, each optimized for a specific type of task. Architectures such as the Cell microprocessor[11] have features that significantly overlap with AI accelerators, including support for packed low-precision arithmetic, a dataflow architecture, and prioritizing throughput over latency. The Cell microprocessor was subsequently applied to a number of tasks,[12][13][14] including AI.[15][16][17]

In the 2000s, CPUs also gained increasingly wide SIMD units, driven by video and gaming workloads, as well as support for packed low-precision data types.[18]
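The following minimal sketch illustrates what "packed" means here, using NumPy dtype views as a stand-in for a real SIMD register and its instructions (the values and lane widths are illustrative only):

```python
import numpy as np

# One 32-bit word viewed as four packed 8-bit lanes (little-endian layout,
# as on x86). Real SIMD registers hold far more lanes, e.g. 32 int8 values
# in a 256-bit register, but the principle is the same.
word = np.array([0x04030201], dtype=np.uint32)
lanes = word.view(np.uint8)          # -> [1, 2, 3, 4]

# A single vectorized operation updates every lane at once, which is what
# a packed SIMD add instruction does in hardware.
lanes += 10
print(hex(int(word[0])))             # 0xe0d0c0b: each byte increased by 10
```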

Use of GPU

Graphics processing units or GPUs are specialized hardware for the manipulation of images and the calculation of local image properties. The mathematical bases of neural networks and image manipulation are similar: both are embarrassingly parallel tasks involving matrices, which has led GPUs to become increasingly used for machine learning tasks.[19][20][21] {{As of|2016}}, GPUs are popular for AI work, and they continue to evolve in a direction that facilitates deep learning, both for training[22] and for inference in devices such as self-driving cars.[23] GPU developers such as Nvidia are adding connective capability, e.g. NVLink, for the kind of dataflow workloads AI benefits from.[24] As GPUs have been increasingly applied to AI acceleration, GPU manufacturers have incorporated neural-network-specific hardware to further accelerate these tasks.[25][26] Tensor cores are intended to speed up the training of neural networks.[26]
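The connection can be made concrete: a convolution over an image can be rewritten as a single dense matrix product, the primitive GPUs are built to execute. Below is a minimal NumPy sketch of this rewriting (the im2col helper and its shapes are illustrative, not any particular library's API):

```python
import numpy as np

def im2col(img, k):
    """Unroll every k x k patch of a 2-D image into one column."""
    h, w = img.shape
    cols = [img[i:i + k, j:j + k].ravel()
            for i in range(h - k + 1)
            for j in range(w - k + 1)]
    return np.stack(cols, axis=1)      # shape: (k*k, number of patches)

img = np.arange(25, dtype=np.float32).reshape(5, 5)
kernel = np.ones((3, 3), dtype=np.float32) / 9.0     # 3x3 box filter

# The whole convolution collapses into one matrix product -- the same
# embarrassingly parallel workload as image filtering.
out = (kernel.ravel() @ im2col(img, 3)).reshape(3, 3)
```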

Use of FPGAs

Deep learning frameworks are still evolving, making it hard to design custom hardware. Reconfigurable devices such as field-programmable gate arrays (FPGAs) make it easier to evolve hardware, frameworks, and software alongside each other.[8][9][27]

Microsoft has used FPGA chips to accelerate inference.[28][29] The application of FPGAs to AI acceleration motivated Intel to acquire Altera with the aim of integrating FPGAs in server CPUs, which would be capable of accelerating AI as well as general purpose tasks.[30]

Emergence of dedicated AI accelerator ASICs

While GPUs and FPGAs perform far better{{Quantify|date=October 2018}} than CPUs for AI-related tasks, a factor of up to 10 in efficiency[31][32] may be gained with a more specific design, via an application-specific integrated circuit (ASIC).{{citation needed|date=November 2017}} These accelerators employ strategies such as optimized memory use{{citation needed|date=November 2017}} and the use of lower-precision arithmetic to accelerate calculation and increase the throughput of computation.[33][34] Low-precision floating-point formats adopted for AI acceleration include half-precision and the bfloat16 format.[35][36][37][38][39][40][41]
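The trade-off between these formats follows from their bit layouts: half-precision keeps 5 exponent and 10 mantissa bits, while bfloat16 is a float32 with the mantissa truncated to 7 bits, retaining the full float32 exponent range. A minimal NumPy sketch (using truncation for brevity, where real hardware typically rounds to nearest even):

```python
import numpy as np

x = np.array([1.0e5, 1.0e-8, 3.14159], dtype=np.float32)

# Half precision: 5 exponent bits, 10 mantissa bits -> narrow dynamic range.
fp16 = x.astype(np.float16)

# bfloat16: keep only the top 16 bits of the float32 encoding
# (sign bit, all 8 exponent bits, 7 mantissa bits).
bf16 = ((x.view(np.uint32) >> 16) << 16).view(np.float32)

print(fp16)   # [inf, 0.0, 3.140625]          -- overflow and underflow
print(bf16)   # [99840.0, ~1.0e-08, 3.140625] -- full range, coarser mantissa
```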

In-memory computing architectures

{{Expand section|date=October 2018}}

In June 2017, IBM researchers announced an architecture, in contrast to the von Neumann architecture, based on in-memory computing and phase-change memory arrays applied to temporal correlation detection, with the intent of generalizing the approach to heterogeneous computing and massively parallel systems.[42] In October 2018, IBM researchers announced an architecture based on in-memory processing and modeled on the human brain's synaptic network to accelerate deep neural networks.[43] The system is based on phase-change memory arrays.[44]
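The principle behind such architectures can be sketched with an idealized crossbar model: weights are stored as analog conductances, and a matrix-vector product is computed in place by Ohm's and Kirchhoff's laws instead of moving data to a processor. A minimal sketch under these idealized assumptions (no device noise, wire resistance, or conductance quantization):

```python
import numpy as np

rng = np.random.default_rng(0)

# Weights stored as conductances (siemens) in a 4x3 crossbar array.
# Signed weights are typically mapped onto pairs of devices; one array
# of non-negative conductances keeps this sketch short.
G = rng.uniform(0.0, 1.0e-6, size=(4, 3))

# The input vector is encoded as voltages applied to the 4 row wires.
V = np.array([0.2, 0.0, 0.1, 0.3])

# Each cell contributes G[i, j] * V[i] (Ohm's law) and each column wire
# sums its cell currents (Kirchhoff's law): the matrix-vector product is
# computed where the weights are stored, with no weight movement at all.
I = G.T @ V        # currents measured on the 3 column wires
```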

Nomenclature

As of 2016, the field is still in flux and vendors are pushing their own marketing terms for what amounts to an "AI accelerator", in the hope that their designs and APIs will become the dominant design. There is no consensus on the boundary between these devices, nor on the exact form they will take; however, several examples clearly aim to fill this new space, with a fair amount of overlap in capabilities.

In the past when consumer graphics accelerators emerged, the industry eventually adopted Nvidia's self-assigned term, "the GPU",[45] as the collective noun for "graphics accelerators", which had taken many forms before settling on an overall pipeline implementing a model presented by Direct3D.

Examples

{{example farm|date=November 2017}}

Stand alone products

  • Google's Tensor Processing Unit (TPU) is an accelerator specifically designed by Google for its TensorFlow framework, which is extensively used for convolutional neural networks. It focuses on a high volume of 8-bit precision arithmetic; a minimal sketch of this style of quantized arithmetic appears after this list. The first generation, from 2015, focused on inference, while the second generation, announced in May 2017, added capability for neural network training. The third-generation TPU was announced on May 8, 2018. In July 2018, the Edge TPU was announced, Google's purpose-built ASIC designed to run its TensorFlow Lite machine learning (ML) models at the edge.[46]
  • Adapteva Epiphany is a many-core coprocessor featuring a network-on-chip scratchpad memory model, suitable for a dataflow programming model, which should suit many machine learning tasks.{{citation needed|date=November 2017}}
  • Intel Nervana NNP (Neural Network Processor, a.k.a. "Lake Crest") is what Intel claims is the first commercially available chip with a purpose-built architecture for deep learning. Facebook was a partner in the design process.[47][48]
  • Movidius Myriad 2 is a many-core VLIW AI accelerator complemented with video fixed-function units.
  • Mobileye's EyeQ is a processor specialized for vision processing for self-driving cars.[49]
  • NM500 is the latest (as of 2016) in a series of accelerator chips for radial basis function neural nets from General Vision.[50]
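As a rough illustration of the high-volume 8-bit arithmetic mentioned for the TPU above, the sketch below uses a generic symmetric quantization scheme: weights and activations are scaled into int8, multiplied with 32-bit accumulation, and rescaled back to floating point. This is a hypothetical scheme for illustration; it is not a description of the TPU's actual quantization:

```python
import numpy as np

def quantize(x):
    """Symmetric per-tensor quantization to int8 (illustrative only)."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)   # weights
a = rng.standard_normal(64).astype(np.float32)         # activations

qw, sw = quantize(w)
qa, sa = quantize(a)

# 8-bit multiplies with 32-bit accumulation, then one rescale to float --
# the kind of arithmetic such accelerators perform in bulk.
y_q = (qw.astype(np.int32) @ qa.astype(np.int32)) * (sw * sa)
y_f = w @ a        # float32 reference; agrees closely for well-scaled data
```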

GPU based products

  • Nvidia Tesla is Nvidia's line of GPU derived products marketed for GPGPU and AI tasks.
    • Nvidia Volta is a microarchitecture which augments the graphics processing unit with additional 'tensor units' targeted specifically at accelerating calculations for neural networks.[51]
    • Nvidia GeForce 20 series is the first series based on the Turing microarchitecture and features built-in "Tensor Cores".[52]
    • Nvidia DGX-1 is an Nvidia workstation/server product which incorporates Nvidia-brand GPUs for GPGPU tasks, including machine learning.[53]
    • Nvidia Tegra Xavier SoC features their Deep Learning Accelerator (DLA) and Programmable Vision Accelerator (PVA).[54]
  • Radeon Instinct is AMD's line of GPU derived products for AI acceleration.[55]
  • Qualcomm's Adreno GPUs, beginning with the Snapdragon 820 released in March 2015, support AI acceleration through the Qualcomm Snapdragon Neural Processing Engine SDK.[56]
  • NEC SX-Aurora TSUBASA is NEC's product line for AI applications and machine learning.[57][58]

AI accelerating co-processors

  • Qualcomm's Hexagon DSPs, beginning with the Snapdragon 820 released in March 2015, support AI acceleration through the Qualcomm Snapdragon Neural Processing Engine SDK.[56]
    • Qualcomm's Snapdragon 855 contains their 4th-generation on-device AI engine, including a dedicated Tensor Accelerator.
  • Cadence's Tensilica IP is a family of neural network processors and neural-network-optimized digital signal processor IP cores, such as the Tensilica Vision C5 DSP released in May 2017 and the Tensilica Vision Q6 DSP released in April 2018.[59][60] The Tensilica DNA 100 processor was announced in September 2018.[61]
  • Imagination Technologies' PowerVR 2NX NNA (Neural Net Accelerator) is an IP core licensed for integration into chips, first announced in September 2017.[62] In December 2018, the PowerVR 3NX NNA was announced.[63]
  • Apple's Neural Engine is an AI accelerator core within Apple-designed processors. The Apple A11 Bionic SoC,[64] released in September 2017, featured a dual-core Neural Engine. The Apple A12 Bionic SoC, released in September 2018, featured an octa-core Neural Engine.
  • Samsung's Exynos 9820 has an integrated neural processing unit (NPU), which Samsung states performs AI-related functions seven times faster than its predecessor, for tasks ranging from photo enhancement to augmented-reality features.[65]
  • Cambricon Technologies' Machine Learning Unit (MLU) family of neural processors, such as the MLU-100 and MLU-200.[66]
  • HiSilicon's Neural Processing Unit is a neural network accelerator within HiSilicon's Kirin SoCs. The Kirin 970,[67] with an NPU from Cambricon Technologies, was released in October 2017. The Kirin 980, with a dual-core NPU from Cambricon Technologies, was released in October 2018.
  • Google's Pixel Visual Core (PVC) is a fully programmable image, vision, and AI processor for mobile devices, first featured in the Google Pixel 2 released in October 2017.
  • Arm's ML Processor is dedicated IP for accelerating neural network inference, first announced as Project Trillium in January 2018.[68]
  • CEVA's NeuPro family of AI processors. The NP500, NP1000, NP2000, and NP4000 were first announced in January 2018, each containing one programmable vector DSP and one hardwired implementation of 8-bit or 16-bit neural network layers, with performance ranging from 2 TOPS to 12.5 TOPS.[69]
  • Universal Multifunction Accelerator (UMA) by Manjeera Digital Systems in Hyderabad is an accelerator with a proprietary architecture based on Middle Stratum Operations.[70][71][72]

Research and unreleased products

  • In December 2017, Tesla Motors confirmed a rumor that it was developing an AI chip for autonomous driving. Jim Keller worked on this project between at least early 2016 and early 2018.[73]
  • MIT Eyeriss is an accelerator design aimed explicitly at convolutional neural networks, using a scratchpad memory and network-on-chip architecture.[74]
  • Georgia Tech has designed a neuro-inspired processor for performing online reinforcement learning for ultra-low power robotics. It employs mixed-signal design techniques to reduce the operating power.[75]
  • Nullhop is an accelerator designed at the Institute of Neuroinformatics of ETH Zürich and University of Zürich based on sparse representation of feature maps. The second generation of the architecture is commercialized by the university spin-off Synthara Technologies.[76][77]
  • Kalray's MPPA is an accelerator for convolutional neural nets.[78]
  • SpiNNaker is a many-core design specialized for simulating a large neural network.
  • Graphcore IPU is a graph-based AI accelerator.[79]
  • DPU, by Wave Computing, a dataflow architecture[80]
  • At the start of 2017, STMicroelectronics presented a demonstrator SoC, manufactured in a 28 nm process, containing a deep CNN accelerator.[81]
  • TrueNorth is a manycore design based on spiking neurons rather than traditional arithmetic.[82][83]
  • Intel Loihi is an experimental neuromorphic chip.[84]
  • In September 2017, [https://www.brainchipinc.com BrainChip] introduced a commercial PCI Express card with a Xilinx Kintex UltraScale FPGA running neuromorphic neural cores, applying pattern recognition on 600 video images per second using 16 watts of power.[85]
  • IIT Madras is designing a spiking neuron accelerator for big-data analytics.[86]
  • Several memristor-based AI accelerators have been proposed which leverage the in-memory computing capability of memristors.[87]
  • AlphaICs is designing an agent-based coprocessor called Real AI Processor (RAP) to enable perception and decision making in a chip.[88]

Potential applications

  • Autonomous vehicles: Nvidia has targeted their Drive PX-series boards at this space.[89]
  • Military robots
  • Agricultural robots, for example pesticide-free weed control.[90]
  • Voice control, e.g. in mobile phones, a target for Qualcomm Zeroth.[91]
  • Machine translation
  • Unmanned aerial vehicles, e.g. navigation systems: the Movidius Myriad 2 has been successfully demonstrated guiding autonomous drones.[92]
  • Industrial robots, increasing the range of tasks that can be automated by adding adaptability to variable situations.
  • Health care, to assist with diagnoses
  • Search engines, increasing the energy efficiency of data centers and the ability to handle increasingly advanced queries.
  • Natural language processing

See also

  • Cognitive computer
  • Neuromorphic computing
  • Physical neural network
  • Hardware acceleration

References

1. ^{{cite web|url=https://www.v3.co.uk/v3-uk/news/3014293/intel-unveils-movidius-compute-stick-usb-ai-accelerator|title=Intel unveils Movidius Compute Stick USB AI Accelerator|date=2017-07-21|access-date=August 11, 2017|archive-url=https://web.archive.org/web/20170811193632/https://www.v3.co.uk/v3-uk/news/3014293/intel-unveils-movidius-compute-stick-usb-ai-accelerator|archive-date=August 11, 2017|dead-url=yes|df=mdy-all}}
2. ^{{cite web|url=https://insidehpc.com/2017/06/inspurs-unveils-gx4-ai-accelerator/|title=Inspurs unveils GX4 AI Accelerator|date=2017-06-21}}
3. ^{{cite web|url=http://www.eetimes.com/document.asp?doc_id=1329715|title=Google Developing AI Processors|last=|first=|date=|website=|archive-url=|archive-date=|dead-url=|access-date=}}Google using its own AI accelerators.
4. ^{{cite web|title=convolutional neural network demo from 1993 featuring DSP32 accelerator|url=https://www.youtube.com/watch?v=FwFduRA_L6Q}}
5. ^{{cite web|title=design of a connectionist network supercomputer|url=http://people.eecs.berkeley.edu/~krste/papers/cns-injs1993.ps}}
6. ^{{cite web|title=The end of general purpose computers (not)|url=https://www.youtube.com/watch?v=VtJthbiiTBQ}}This presentation covers a past attempt at neural net accelerators, notes the similarity to the modern SLI GPGPU processor setup, and argues that general purpose vector accelerators are the way forward (in relation to RISC-V hwacha project. Argues that NN's are just dense and sparse matrices, one of several recurring algorithms)
7. ^{{cite book|doi=10.1109/IPPS.1995.395862|title = Proceedings of 9th International Parallel Processing Symposium|pages=774–781|year = 1995|last1 = Ramacher|first1 = U.|last2=Raab|first2=W.|last3=Hachmann|first3=J.A.U.|last4=Beichter|first4=J.|last5=Bruls|first5=N.|last6=Wesseling|first6=M.|last7=Sicheneder|first7=E.|last8=Glass|first8=J.|last9=Wurz|first9=A.|last10=Manner|first10=R.|isbn=978-0-8186-7074-9|citeseerx = 10.1.1.27.6410}}
8. ^{{cite web|title=Space Efficient Neural Net Implementation|url=https://www.researchgate.net/publication/2318589}}
9. ^{{cite web|title=A Generic Building Block for Hopfield Neural Networks with On-Chip Learning|url=https://pdfs.semanticscholar.org/63fd/66ff9edb7b5342e4835286d4a2b22e1f2c04.pdf|year=1996}}
10. ^Application of the ANNA Neural Network Chip to High-Speed Character Recognition
11. ^{{cite web|title=Synergistic Processing in Cell's Multicore Architecture|url=https://www.semanticscholar.org/paper/Synergistic-Processing-in-Cell-s-Multicore-Archite-Gschwind-Hofstee/9f2a6fc20fb292a5d33eb6bd930e1de9d527ee6b|year=2006}}
12. ^{{cite journal|title=Performance of Cell processor for biomolecular simulations|journal=Computer Physics Communications|volume=176|issue=11–12|pages=660–664|arxiv=physics/0611201|doi=10.1016/j.cpc.2007.02.107|year=2007|last1=De Fabritiis|first1=G.}}
13. ^{{cite journal|title=Video Processing and Retrieval on Cell architecture|citeseerx=10.1.1.138.5133}}
14. ^{{cite book|doi=10.1109/RT.2006.280210|title = 2006 IEEE Symposium on Interactive Ray Tracing|pages=15–23|year = 2006|last1 = Benthin|first1 = Carsten|last2=Wald|first2=Ingo|last3=Scherbaum|first3=Michael|last4=Friedrich|first4=Heiko|isbn=978-1-4244-0693-7|citeseerx = 10.1.1.67.8982}}
15. ^{{cite web|title=Development of an artificial neural network on a heterogeneous multicore architecture to predict a successful weight loss in obese individuals|url=https://www.teco.edu/~scholz/papers/ScholzDiploma.pdf}}
16. ^{{cite book|doi=10.1109/ccnc08.2007.235|title = 2008 5th IEEE Consumer Communications and Networking Conference|pages=1030–1034|year = 2008|last1 = Kwon|first1 = Bomjun|last2=Choi|first2=Taiho|last3=Chung|first3=Heejin|last4=Kim|first4=Geonho|isbn=978-1-4244-1457-4}}
17. ^{{cite book|doi=10.1007/978-3-540-85451-7_71|title = Euro-Par 2008 – Parallel Processing|volume = 5168|pages = 665–675|series = Lecture Notes in Computer Science|year = 2008|last1 = Duan|first1 = Rubing|last2 = Strey|first2 = Alfred|isbn = 978-3-540-85450-0}}
18. ^{{cite web|title=Improving the performance of video with AVX|url=https://software.intel.com/en-us/articles/improving-the-compute-performance-of-video-processing-software-using-avx-advanced-vector-extensions-instructions|date=2012-02-08}}
19. ^{{cite web|title=microsoft research/pixel shaders/MNIST|url=https://hal.inria.fr/inria-00112631/document}}
20. ^{{cite web|title=how the gpu came to be used for general computation|url=http://igoro.com/archive/how-gpu-came-to-be-used-for-general-computation/}}
21. ^{{cite web|title=imagenet classification with deep convolutional neural networks|url=https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf}}
22. ^{{cite web|title=nvidia driving the development of deep learning|url=http://insidehpc.com/2016/05/nvidia-driving-the-development-of-deep-learning/|date=2016-05-17}}
23. ^{{cite web|title=nvidia introduces supercomputer for self driving cars|url=http://gas2.org/2016/01/06/nvidia-introduces-supercomputer-for-self-driving-cars/|date=2016-01-06}}
24. ^{{cite web|title=how nvlink will enable faster easier multi GPU computing|url=https://devblogs.nvidia.com/parallelforall/how-nvlink-will-enable-faster-easier-multi-gpu-computing/|date=2014-11-14}}
25. ^"[https://www.researchgate.net/publication/329802520_A_Survey_on_Optimized_Implementation_of_Deep_Learning_Models_on_the_NVIDIA_Jetson_Platform A Survey on Optimized Implementation of Deep Learning Models on the NVIDIA Jetson Platform]", 2019
26. ^{{Cite web| first = Mark | last = Harris | url = https://devblogs.nvidia.com/parallelforall/cuda-9-features-revealed/ | title = CUDA 9 Features Revealed: Volta, Cooperative Groups and More | date = May 11, 2017 | access-date = August 12, 2017}}
27. ^{{Cite web|url=http://www.nextplatform.com/2016/08/23/fpga-based-deep-learning-accelerators-take-asics/|title=FPGA Based Deep Learning Accelerators Take on ASICs|date=2016-08-23|website=The Next Platform|access-date=2016-09-07}}
28. ^{{cite web|title=microsoft extends fpga reach from bing to deep learning|url=http://www.nextplatform.com/2015/08/27/microsoft-extends-fpga-reach-from-bing-to-deep-learning/|date=2015-08-27}}
29. ^{{cite journal|title=Accelerating Deep Convolutional Neural Networks Using Specialized Hardware|journal=Microsoft Research|url=http://research.microsoft.com/pubs/240715/CNN%20Whitepaper.pdf|date=2015-02-23|last1=Chung|first1=Eric|last2=Strauss|first2=Karin|last3=Fowers|first3=Jeremy|last4=Kim|first4=Joo-Young|last5=Ruwase|first5=Olatunji|last6=Ovtcharov|first6=Kalin}}
30. ^"[https://www.academia.edu/37491583/A_Survey_of_FPGA-based_Accelerators_for_Convolutional_Neural_Networks A Survey of FPGA-based Accelerators for Convolutional Neural Networks]", Mittal et al., NCAA, 2018
31. ^{{Cite web|url=http://techreport.com/news/30155/google-boosts-machine-learning-with-its-tensor-processing-unit|title=Google boosts machine learning with its Tensor Processing Unit|last=|first=|date=2016-05-19|website=|publisher=|access-date=2016-09-13}}
32. ^{{Cite web|url=https://www.sciencedaily.com/releases/2016/02/160203134840.htm|title=Chip could bring deep learning to mobile devices|last=|first=|date=2016-02-03|website=www.sciencedaily.com|publisher=|access-date=2016-09-13}}
33. ^{{cite web|title=Deep Learning with Limited Numerical Precision|url=http://jmlr.org/proceedings/papers/v37/gupta15.pdf}}
34. ^{{cite arXiv|title=XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks|eprint=1603.05279|last1=Rastegari|first1=Mohammad|last2=Ordonez|first2=Vicente|last3=Redmon|first3=Joseph|last4=Farhadi|first4=Ali|class=cs.CV|year=2016}}
35. ^{{Cite web | title = Intel unveils Nervana Neural Net L-1000 for accelerated AI training | author = Khari Johnson | work = VentureBeat | date = 2018-05-23 | accessdate = 2018-05-23 | url = https://venturebeat.com/2018/05/23/intel-unveils-nervana-neural-net-l-1000-for-accelerated-ai-training/ |quote = ...Intel will be extending bfloat16 support across our AI product lines, including Intel Xeon processors and Intel FPGAs. }}
36. ^{{Cite web | title = Intel Lays Out New Roadmap for AI Portfolio | author = Michael Feldman | work = TOP500 Supercomputer Sites | date = 2018-05-23 | accessdate = 2018-05-23 | url = https://www.top500.org/news/intel-lays-out-new-roadmap-for-ai-portfolio/ | quote = Intel plans to support this format across all their AI products, including the Xeon and FPGA lines }}
37. ^{{Cite web | title = Intel To Launch Spring Crest, Its First Neural Network Processor, In 2019 | author = Lucian Armasu | work = Tom's Hardware | date = 2018-05-23 | accessdate = 2018-05-23 | url = https://www.tomshardware.com/news/intel-neural-network-processor-lake-crest,37105.html | quote = Intel said that the NNP-L1000 would also support bfloat16, a numerical format that’s being adopted by all the ML industry players for neural networks. The company will also support bfloat16 in its FPGAs, Xeons, and other ML products. The Nervana NNP-L1000 is scheduled for release in 2019. }}
38. ^{{Cite web | title = Available TensorFlow Ops {{!}} Cloud TPU {{!}} Google Cloud | author = | work = Google Cloud | date = | accessdate = 2018-05-23 | url = https://cloud.google.com/tpu/docs/tensorflow-ops | quote = This page lists the TensorFlow Python APIs and graph operators available on Cloud TPU. }}
39. ^{{Cite web | title = Comparing Google's TPUv2 against Nvidia's V100 on ResNet-50 | author = Elmar Haußmann | work = RiseML Blog | date = 2018-04-26 | accessdate = 2018-05-23 | url = https://blog.riseml.com/comparing-google-tpuv2-against-nvidia-v100-on-resnet-50-c2bbb6a51e5e | language = | quote = For the Cloud TPU, Google recommended we use the bfloat16 implementation from the official TPU repository with TensorFlow 1.7.0. Both the TPU and GPU implementations make use of mixed-precision computation on the respective architecture and store most tensors with half-precision. }}
40. ^{{Cite web | title = ResNet-50 using BFloat16 on TPU | author = Tensorflow Authors | work = Google | date = 2018-02-28 | accessdate = 2018-05-23 | url = https://github.com/tensorflow/tpu/tree/master/models/experimental/resnet_bfloat16 | quote = }}{{Dead link|date=April 2019 |bot=InternetArchiveBot |fix-attempted=yes }}
41. ^{{cite report |title= TensorFlow Distributions |author= Joshua V. Dillon, Ian Langmore, Dustin Tran, Eugene Brevdo, Srinivas Vasudevan, Dave Moore, Brian Patton, Alex Alemi, Matt Hoffman, Rif A. Saurous |date= 2017-11-28 |id= Accessed 2018-05-23 |arxiv= 1711.10604 |quote= All operations in TensorFlow Distributions are numerically stable across half, single, and double floating-point precisions (as TensorFlow dtypes: tf.bfloat16 (truncated floating point), tf.float16, tf.float32, tf.float64). Class constructors have a validate_args flag for numerical asserts |bibcode= 2017arXiv171110604D }}
42. ^{{Cite journal|arxiv=1706.00511|author=Abu Sebastian |author2=Tomas Tuma |author3=Nikolaos Papandreou |author4=Manuel Le Gallo |author5=Lukas Kull |author6=Thomas Parnell |author7=Evangelos Eleftheriou|title=Temporal correlation detection using computational phase-change memory|journal=Nature Communications|volume=8 |doi=10.1038/s41467-017-01481-9 |year=2017}}
43. ^{{Cite news|url=https://phys.org/news/2018-10-brain-inspired-architecture-advance-ai.html|title=A new brain-inspired architecture could improve how computers handle data and advance AI|last=|first=|date=2018-10-03|work=American Institute of Physics|access-date=2018-10-05}}
44. ^{{Cite arXiv|eprint=1801.06228|class=cs.ET |author=Carlos Ríos |author2=Nathan Youngblood |author3=Zengguang Cheng |author4=Manuel Le Gallo |author5=Wolfram H.P. Pernice |author6=C David Wright |author7=Abu Sebastian |author8=Harish Bhaskaran|title=In-memory computing on a photonic platform|year=2018 }}
45. ^{{cite web|title=NVIDIA launches the World's First Graphics Processing Unit, the GeForce 256|url=http://www.nvidia.com/object/IO_20020111_5424.html}}
46. ^{{Cite web|url=https://beebom.com/google-announces-edge-tpu-cloud-iot-edge-at-cloud-next-2018/|title=Google Announces Edge TPU, Cloud IoT Edge at Cloud Next 2018|last=Kundu|first=Kishalaya|date=2018-07-26|website=Beebom|language=en-US|access-date=2019-02-02}}
47. ^{{cite news|last1=Kampman|first1=Jeff|title=Intel unveils purpose-built Neural Network Processor for deep learning|url=https://techreport.com/news/32704/intel-unveils-purpose-built-neural-network-processor-for-deep-learning|accessdate=18 October 2017|publisher=Tech Report|date=17 October 2017}}
48. ^{{cite news|title=Intel Nervana Neural Network Processors (NNP) Redefine AI Silicon |url=https://www.intelnervana.com/intel-nervana-neural-network-processors-nnp-redefine-ai-silicon/?_ga=2.62312428.1380200850.1508486032-2008757629.1504021982|accessdate=20 October 2017}}
49. ^{{cite web|url=https://www.mobileye.com/our-technology/evolution-eyeq-chip/|title=The Evolution of EyeQ}}
50. ^{{cite web|url=http://www.general-vision.com/hardware/nm500/|title=NM500, Neuromorphic chip with 576 neurons|access-date=October 3, 2017|archive-url=https://web.archive.org/web/20171003175039/http://www.general-vision.com/hardware/nm500/|archive-date=October 3, 2017|dead-url=yes|df=mdy-all}}
51. ^{{cite web|url=https://www.forbes.com/sites/tiriasresearch/2017/05/10/nvidia-goes-beyond-the-gpu-for-ai-with-volta/|title=Nvidia goes beyond the GPU for AI with Volta}}
52. ^{{cite web|url=https://www.anandtech.com/show/13282/nvidia-turing-architecture-deep-dive/6|title=The NVIDIA Turing GPU Architecture Deep Dive: Prelude to GeForce RTX|publisher=AnandTech}}
53. ^{{cite web|title=nvidia dgx-1|url=https://images.nvidia.com/content/technologies/deep-learning/pdf/61681-DB2-Launch-Datasheet-Deep-Learning-Letter-WEB.pdf}}
54. ^{{Cite web|url=https://www.anandtech.com/show/13584/nvidia-xavier-agx-hands-on-carmel-and-more|title=Investigating NVIDIA's Jetson AGX: A Look at Xavier and Its Carmel Cores|last=Frumusanu|first=Andrei|website=www.anandtech.com|access-date=2019-02-02}}
55. ^{{cite news|last1=Smith|first1=Ryan|title=AMD Announces Radeon Instinct: GPU Accelerators for Deep Learning, Coming in 2017|url=http://www.anandtech.com/show/10905/amd-announces-radeon-instinct-deep-learning-2017|accessdate=12 December 2016|publisher=Anandtech|date=12 December 2016}}
56. ^{{Cite web|url=https://developer.qualcomm.com/blog/device-ai-qualcomm-snapdragon-neural-processing-engine-sdk|title=On-Device AI with Qualcomm Snapdragon Neural Processing Engine SDK|website=Qualcomm Developer Network|language=en|access-date=2019-02-02}}
57. ^{{cite web|url=https://linustechtips.com/main/topic/851286-nec-sx-aurora-tsubasa/|title=NEC SX-Aurora TSUBASA}}
58. ^{{cite web|url=https://www.finanznachrichten.de/nachrichten-2018-10/45108841-ai-acceleration-with-nec-s-new-vector-computer-011.htm|title=AI Acceleration-with-NEC's New Vector Computer}}
59. ^{{cite web|url=https://www.cadence.com/content/cadence-www/global/en_US/home/company/newsroom/press-releases/pr/2017/cadence-unveils-industrys-first-neural-network-dsp-ip-for-automo.html|title=Cadence Unveils Industry's First Neural Network DSP IP for Automotive, Surveillance, Drone and Mobile Markets}}
60. ^{{Cite web|url=https://www.anandtech.com/show/12633/cadence-announces-tensilica-vision-q6-dsp|title=Cadence Announces Tensilica Vision Q6 DSP|last=Frumusanu|first=Andrei|website=www.anandtech.com|access-date=2019-02-02}}
61. ^{{Cite web|url=https://www.anandtech.com/show/13377/cadence-announces-tensilica-dna-100-a-bigger-nn-ip|title=Cadence Announces The Tensilica DNA 100 IP: Bigger Artificial Intelligence|last=Frumusanu|first=Andrei|website=www.anandtech.com|access-date=2019-02-02}}
62. ^{{cite web|url=https://www.imgtec.com/powervr-2nx-neural-network-accelerator/|title=The highest performance neural network inference accelerator}}
63. ^{{Cite web|url=https://www.anandtech.com/show/13671/imagination-announces-powervr-series9x-p-gpus|title=Imagination Announces PowerVR Series9XTP, Series9XMP, and Series9XEP GPU Cores|last=Oh|first=Nate|website=www.anandtech.com|access-date=2019-02-02}}
64. ^{{Cite news|url=https://www.theverge.com/2017/9/13/16300464/apple-iphone-x-ai-neural-engine|title=The iPhone X’s new neural engine exemplifies Apple’s approach to AI|work=The Verge|access-date=2017-09-23}}
65. ^{{cite web |url= https://www.samsung.com/semiconductor/minisite/exynos/products/mobileprocessor/exynos-9-series-9820/ |title= Exynos 9 Series (9820) - The Next-level Processor for the Mobile Future |accessdate= 31 March 2019 |deadurl= no }}
66. ^{{Cite web|url=https://www.anandtech.com/show/12815/cambricon-makers-of-huaweis-kirin-npu-ip-build-a-big-ai-chip-and-pcie-card|title=Cambricon, Makers of Huawei's Kirin NPU IP, Build A Big AI Chip and PCIe Card|last=Cutress|first=Ian|website=www.anandtech.com|access-date=2019-02-02}}
67. ^{{cite web|url=http://consumer.huawei.com/en/press/news/2017/ifa2017-kirin970/|title=HUAWEI Reveals the Future of Mobile AI at IFA 2017}}
68. ^{{Cite web|url=https://www.anandtech.com/show/13253/hot-chips-2018-arm-machine-learning-core-live-blog|title=Hot Chips 2018: Arm's Machine Learning Core Live Blog|last=Cutress|first=Ian|website=www.anandtech.com|access-date=2019-02-02}}
69. ^{{cite web|url=https://www.ceva-dsp.com/product/ceva-neupro/|title=A Family of AI Processors for Deep Learning at the Edge}}
70. ^{{cite web |last1=Manjeera Digital System |first1=UMA |title=Universal Multifunction Accelerator |url=http://manmanjeerads.com/technology.htm |website=Manjeera Digital Systems |accessdate=28 June 2018}}
71. ^{{cite web |last1=Manjeera Digital Systems |first1=Universal Multifunction Accelerator |title=Revolutionise Processing |url=http://www.newindianexpress.com/cities/hyderabad/2018/may/11/hyderabad-start-up-looks-to-revolutionise-processor-technology-1813121.html |website=Indian Express |accessdate=28 June 2018}}
72. ^{{cite news |last1=AI Chip |first1=UMA |title=AI Chip from Hyderabad |url=https://telanganatoday.com/first-indian-chip-for-ai-applications-developed-in-hyderabad |accessdate=28 June 2018 |issue=News Paper |publisher=Telangana Today |date=10 May 2018}}
73. ^{{cite web|author=Lambert, Fred|date=December 8, 2017|url=https://electrek.co/2017/12/08/elon-musk-tesla-new-ai-chip-jim-keller/| title=Elon Musk confirms that Tesla is working on its own new AI chip led by Jim Keller}}
74. ^{{cite web|url=http://www.mit.edu/~sze/eyeriss.html|title=Eyeriss: An Energy-Efficient Reconfigurable Accelerator for Deep Convolutional Neural Networks|author=Chen, Yu-Hsin |author2=Krishna, Tushar |author3=Emer, Joel |author4=Sze, Vivienne|work=IEEE International Solid-State Circuits Conference, ISSCC 2016, Digest of Technical Papers|year=2016|pages=262–263}}
75. ^{{cite web|url=http://www.ien.gatech.edu/news/mixed-signal-processing-powers-bio-mimetic-cmos-chip-enable-neural-learning-autonomous-micro|title=Mixed-signal Processing Powers Bio-mimetic CMOS Chip to Enable Neural Learning in Autonomous Micro-Robots | IEN}}
76. ^{{cite arxiv |title=NullHop: A Flexible Convolutional Neural Network Accelerator Based on Sparse Representations of Feature Maps|author=Aimar, Alessandro|display-authors=et al|eprint=1706.01406|class=cs.CV|year=2017}}
77. ^{{cite web|title=Synthara Technologies|url=https://www.synthara.ch/}}
78. ^{{cite web|title=kalray MPPA|url=http://www.hotchips.org/wp-content/uploads/hc_archives/hc27/HC27.24-Monday-Epub/HC27.24.20-Multimedia-Epub/HC27.24.240-DeDinechin-KalrayMMPA-Kalray-v4.pdf}}
79. ^{{cite web|url=https://www.graphcore.ai/technology|title=Graphcore Technology}}
80. ^{{cite web|url=https://www.nextplatform.com/2017/08/23/first-depth-view-wave-computings-dpu-architecture-systems/|title=Wave Computing's DPU architecture|date=2017-08-23}}
81. ^{{cite web|url=https://reconfigdeeplearning.files.wordpress.com/2017/02/isscc2017-14-1visuals.pdf|title=A 2.9 TOPS/W Deep Convolutional Neural Network SoC in FD-SOI 28nm for Intelligent Embedded Systems}}
82. ^{{cite web|title=yann lecun on IBM truenorth|url=https://www.facebook.com/yann.lecun/posts/10152184295832143}}argues that spiking neurons have never produced leading quality results, and that 8-16 bit precision is optimal, pushes the competing 'neuflow' design
83. ^{{cite web|title=IBM cracks open new era of neuromorphic computing|quote=TrueNorth is incredibly efficient: The chip consumes just 72 milliwatts at max load, which equates to around 400 billion synaptic operations per second per watt — or about 176,000 times more efficient than a modern CPU running the same brain-like workload, or 769 times more efficient than other state-of-the-art neuromorphic approaches|url=http://www.extremetech.com/extreme/187612-ibm-cracks-open-a-new-era-of-computing-with-brain-like-chip-4096-cores-1-million-neurons-5-4-billion-transistors}}
84. ^{{cite web|url=https://newsroom.intel.com/editorials/intels-new-self-learning-chip-promises-accelerate-artificial-intelligence/|title=Intel's New Self-Learning Chip Promises to Accelerate Artificial Intelligence}}
85. ^{{cite web|url=http://www.brainchipinc.com/products/civil-surveillance-solutions/brainchip-accelerator|title=BrainChip Accelerator|access-date=October 3, 2017|archive-url=https://web.archive.org/web/20171003174616/http://www.brainchipinc.com/products/civil-surveillance-solutions/brainchip-accelerator|archive-date=October 3, 2017|dead-url=yes|df=mdy-all}}
86. ^{{cite web|title=India preps RISC-V Processors - Shakti targets servers, IoT, analytics|url=http://www.eetimes.com/document.asp?doc_id=1328790&page_number=2|quote=The Shakti project now includes plans for at least six microprocessor designs as well as associated fabrics and an accelerator chip}}
87. ^"[https://www.academia.edu/36504841/A_Survey_of_ReRAM-based_Architectures_for_Processing-in-memory_and_Neural_Networks A Survey of ReRAM-based Architectures for Processing-in-memory and Neural Networks]", S. Mittal, Machine Learning and Knowledge Extraction, 2018
88. ^{{cite web|url=http://www.alphaics.ai/|title=AlphaICs}}
89. ^{{cite web|title=drive px|url=http://www.nvidia.com/object/drive-px.html}}
90. ^{{cite web|title=design of a machine vision system for weed control|url=http://abe.ufl.edu/wlee/publications/icame96.pdf|access-date=June 17, 2016|archive-url=https://web.archive.org/web/20100623062608/http://www.abe.ufl.edu/wlee/Publications/ICAME96.pdf|archive-date=June 23, 2010|dead-url=yes|df=mdy-all}}
91. ^{{cite web|title=qualcomm research brings server class machine learning to every data devices|url=https://www.qualcomm.com/news/onq/2015/10/01/qualcomm-research-brings-server-class-machine-learning-everyday-devices-making|date=October 2015}}
92. ^{{cite web|title=movidius powers worlds most intelligent drone|url=https://www.siliconrepublic.com/machines/movidius-dji-drone|date=2016-03-16}}

External links

  • http://www.nextplatform.com/2016/04/05/nvidia-puts-accelerator-metal-pascal/
  • http://eyeriss.mit.edu
  • http://www.alphaics.ai/
{{Hardware acceleration}}

Categories: Application-specific integrated circuits | AI accelerators | Coprocessors | Computer optimization | Gate arrays
