Artificial intelligence hardware design: challenges and solutions
Saved in:
Main authors: Liu, Albert (Chun-Chen); Law, Oscar Ming Kin
Format: Electronic eBook
Language: English
Published: Hoboken, New Jersey : Wiley, [2021]
Subjects: Neural networks (Computer science); Artificial intelligence; Computer engineering
Online access: FHD01
Summary: ARTIFICIAL INTELLIGENCE HARDWARE DESIGN: Learn foundational and advanced topics in Neural Processing Unit design with real-world examples from leading voices in the field. In Artificial Intelligence Hardware Design: Challenges and Solutions, distinguished researchers and authors Drs. Albert Chun Chen Liu and Oscar Ming Kin Law deliver a rigorous and practical treatment of the design and application of specific circuits and systems for accelerating neural network processing. Beginning with a discussion and explanation of neural networks and their developmental history, the book goes on to describe parallel architectures, streaming graphs for massively parallel computation, and convolution optimization. The authors illustrate in-memory computation through Georgia Tech's Neurocube and Stanford's Tetris accelerator using the Hybrid Memory Cube, as well as near-memory architecture through the embedded eDRAM of the Institute of Computing Technology, Chinese Academy of Sciences, and other institutions. Readers will also find a discussion of 3D neural processing techniques to support multilayer neural networks, as well as:
- A thorough introduction to neural networks and neural network development history, as well as Convolutional Neural Network (CNN) models
- Explorations of various parallel architectures, including the Intel CPU, Nvidia GPU, Google TPU, and Microsoft NPU, emphasizing hardware and software integration for performance improvement
- Discussions of streaming graphs for massively parallel computation with the Blaize GSP and Graphcore IPU
- An examination of how to optimize convolution with the UCLA Deep Convolutional Neural Network accelerator's filter decomposition
Perfect for hardware and software engineers and firmware developers, Artificial Intelligence Hardware Design is an indispensable resource for anyone working with Neural Processing Units in either a hardware or software capacity.
Description: 1 online resource (xxii, 208 pages)
ISBN: 9781119810469
Internal format
MARC
LEADER 00000nmm a2200000 c 4500
001    BV047496535
003    DE-604
005    00000000000000.0
007    cr|uuu---uuuuu
008    211005s2021 |||| o||u| ||||||eng d
020    |a 9781119810469 |9 978-1-119-81046-9
035    |a (OCoLC)1277023070
035    |a (DE-599)BVBBV047496535
040    |a DE-604 |b ger |e rda
041 0  |a eng
049    |a DE-1050
100 1  |a Liu, Albert (Chun-Chen) |e Verfasser |4 aut
245 10 |a Artificial intelligence hardware design |b challenges and solutions |c Albert Chun Chen Liu and Oscar Ming Kin Law
264  1 |a Hoboken, New Jersey |b Wiley |c [2021]
300    |a 1 Online-Ressource (xxii, 208 Seiten)
336    |b txt |2 rdacontent
337    |b c |2 rdamedia
338    |b cr |2 rdacarrier
505 8  |a Front Matter -- Introduction -- Deep Learning -- Parallel Architecture -- Streaming Graph Theory -- Convolution Optimization -- In-Memory Computation -- Near-Memory Architecture -- Network Sparsity -- 3D Neural Processing -- Appendix A: Neural Network Topology -- Index
520    |a ARTIFICIAL INTELLIGENCE HARDWARE DESIGN: Learn foundational and advanced topics in Neural Processing Unit design with real-world examples from leading voices in the field. In Artificial Intelligence Hardware Design: Challenges and Solutions, distinguished researchers and authors Drs. Albert Chun Chen Liu and Oscar Ming Kin Law deliver a rigorous and practical treatment of the design and application of specific circuits and systems for accelerating neural network processing. Beginning with a discussion and explanation of neural networks and their developmental history, the book goes on to describe parallel architectures, streaming graphs for massively parallel computation, and convolution optimization. The authors illustrate in-memory computation through Georgia Tech's Neurocube and Stanford's Tetris accelerator using the Hybrid Memory Cube, as well as near-memory architecture through the embedded eDRAM of the Institute of Computing Technology, Chinese Academy of Sciences, and other institutions. Readers will also find a discussion of 3D neural processing techniques to support multilayer neural networks, as well as: a thorough introduction to neural networks and neural network development history, as well as Convolutional Neural Network (CNN) models; explorations of various parallel architectures, including the Intel CPU, Nvidia GPU, Google TPU, and Microsoft NPU, emphasizing hardware and software integration for performance improvement; discussions of streaming graphs for massively parallel computation with the Blaize GSP and Graphcore IPU; and an examination of how to optimize convolution with the UCLA Deep Convolutional Neural Network accelerator's filter decomposition. Perfect for hardware and software engineers and firmware developers, Artificial Intelligence Hardware Design is an indispensable resource for anyone working with Neural Processing Units in either a hardware or software capacity
650  4 |a Neural networks (Computer science)
650  4 |a Artificial intelligence
650  4 |a Computer engineering
700 1  |a Law, Oscar Ming Kin |e Verfasser |4 aut
776 08 |i Erscheint auch als |n Druck-Ausgabe |z 978-1-119-81045-2
912    |a ZDB-30-PQE
999    |a oai:aleph.bib-bvb.de:BVB01-032897687
966 e  |u https://ebookcentral.proquest.com/lib/th-deggendorf/detail.action?docID=6712453 |l FHD01 |p ZDB-30-PQE |q FHD01_PQE_Kauf |x Aggregator |3 Volltext