Efficient processing of deep neural networks
| Main Authors: | Sze, Vivienne; Chen, Yu-Hsin; Yang, Tien-Ju; Emer, Joel S. |
|---|---|
| Format: | Electronic eBook |
| Language: | English |
| Published: | [San Rafael] : Morgan & Claypool Publishers, [2020] |
| Series: | Synthesis lectures on computer architecture ; #50 |
| Subjects: | Neuronales Netz / Lernendes System / Maschinelles Lernen / Deep learning |
| Online Access: | TUM01 |
| Summary: | Intro -- Preface -- Acknowledgments -- Understanding Deep Neural Networks -- Introduction -- Background on Deep Neural Networks -- Artificial Intelligence and Deep Neural Networks -- Neural Networks and Deep Neural Networks -- Training versus Inference -- Development History -- Applications of DNNs -- Embedded versus Cloud -- Overview of Deep Neural Networks -- Attributes of Connections Within a Layer -- Attributes of Connections Between Layers -- Popular Types of Layers in DNNs -- CONV Layer (Convolutional) -- FC Layer (Fully Connected) -- Nonlinearity -- Pooling and Unpooling -- Normalization -- Compound Layers -- Convolutional Neural Networks (CNNs) -- Popular CNN Models -- Other DNNs -- DNN Development Resources -- Frameworks -- Models -- Popular Datasets for Classification -- Datasets for Other Tasks -- Summary -- Design of Hardware for Processing DNNs -- Key Metrics and Design Objectives -- Accuracy -- Throughput and Latency -- Energy Efficiency and Power Consumption -- Hardware Cost -- Flexibility -- Scalability -- Interplay Between Different Metrics -- Kernel Computation -- Matrix Multiplication with Toeplitz -- Tiling for Optimizing Performance -- Computation Transform Optimizations -- Gauss' Complex Multiplication Transform -- Strassen's Matrix Multiplication Transform -- Winograd Transform -- Fast Fourier Transform -- Selecting a Transform -- Summary -- Designing DNN Accelerators -- Evaluation Metrics and Design Objectives -- Key Properties of DNN to Leverage -- DNN Hardware Design Considerations -- Architectural Techniques for Exploiting Data Reuse -- Temporal Reuse -- Spatial Reuse -- Techniques to Reduce Reuse Distance -- Dataflows and Loop Nests -- Dataflow Taxonomy -- Weight Stationary (WS) -- Output Stationary (OS) -- Input Stationary (IS) -- Row Stationary (RS) -- Other Dataflows -- Dataflows for Cross-Layer Processing |
| Description: | Description based on publisher supplied metadata and other sources |
| Physical Description: | 1 online resource |
| ISBN: | 9781681738321 |
Internal format
MARC
LEADER | 00000nmm a22000001cb4500 | ||
---|---|---|---|
001 | BV047026574 | ||
003 | DE-604 | ||
005 | 20201204 | ||
007 | cr|uuu---uuuuu | ||
008 | 201124s2020 |||| o||u| ||||||eng d | ||
020 | |a 9781681738321 |9 978-1-68173-832-1 | ||
035 | |a (OCoLC)1225882628 | ||
035 | |a (DE-599)KEP054359155 | ||
040 | |a DE-604 |b ger |e rda | ||
041 | 0 | |a eng | |
049 | |a DE-91G | ||
084 | |a ST 301 |0 (DE-625)143651: |2 rvk | ||
084 | |a DAT 708 |2 stub | ||
100 | 1 | |a Sze, Vivienne |e Verfasser |0 (DE-588)1062747607 |4 aut | |
245 | 1 | 0 | |a Efficient processing of deep neural networks |c Vivienne Sze, Yu-Hsin Chen, and Tien-Ju Yang (Massachusetts Institute of Technology), Joel S. Emer (Massachusetts Institute of Technology and Nvidia Research) |
264 | 1 | |a [San Rafael] |b Morgan & Claypool Publishers |c [2020] | |
300 | |a 1 Online-Ressource | ||
336 | |b txt |2 rdacontent | ||
337 | |b c |2 rdamedia | ||
338 | |b cr |2 rdacarrier | ||
490 | 1 | |a Synthesis lectures on computer architecture |v #50 | |
500 | |a Description based on publisher supplied metadata and other sources | ||
520 | 3 | |a Intro -- Preface -- Acknowledgments -- Understanding Deep Neural Networks -- Introduction -- Background on Deep Neural Networks -- Artificial Intelligence and Deep Neural Networks -- Neural Networks and Deep Neural Networks -- Training versus Inference -- Development History -- Applications of DNNs -- Embedded versus Cloud -- Overview of Deep Neural Networks -- Attributes of Connections Within a Layer -- Attributes of Connections Between Layers -- Popular Types of Layers in DNNs -- CONV Layer (Convolutional) -- FC Layer (Fully Connected) -- Nonlinearity -- Pooling and Unpooling -- Normalization -- Compound Layers -- Convolutional Neural Networks (CNNs) -- Popular CNN Models -- Other DNNs -- DNN Development Resources -- Frameworks -- Models -- Popular Datasets for Classification -- Datasets for Other Tasks -- Summary -- Design of Hardware for Processing DNNs -- Key Metrics and Design Objectives -- Accuracy -- Throughput and Latency -- Energy Efficiency and Power Consumption -- Hardware Cost -- Flexibility -- Scalability -- Interplay Between Different Metrics -- Kernel Computation -- Matrix Multiplication with Toeplitz -- Tiling for Optimizing Performance -- Computation Transform Optimizations -- Gauss' Complex Multiplication Transform -- Strassen's Matrix Multiplication Transform -- Winograd Transform -- Fast Fourier Transform -- Selecting a Transform -- Summary -- Designing DNN Accelerators -- Evaluation Metrics and Design Objectives -- Key Properties of DNN to Leverage -- DNN Hardware Design Considerations -- Architectural Techniques for Exploiting Data Reuse -- Temporal Reuse -- Spatial Reuse -- Techniques to Reduce Reuse Distance -- Dataflows and Loop Nests -- Dataflow Taxonomy -- Weight Stationary (WS) -- Output Stationary (OS) -- Input Stationary (IS) -- Row Stationary (RS) -- Other Dataflows -- Dataflows for Cross-Layer Processing | |
650 | 0 | 7 | |a Neuronales Netz |0 (DE-588)4226127-2 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Lernendes System |0 (DE-588)4120666-6 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Maschinelles Lernen |0 (DE-588)4193754-5 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Deep learning |0 (DE-588)1135597375 |2 gnd |9 rswk-swf |
653 | 0 | |a Electronic books | |
689 | 0 | 0 | |a Neuronales Netz |0 (DE-588)4226127-2 |D s |
689 | 0 | 1 | |a Deep learning |0 (DE-588)1135597375 |D s |
689 | 0 | 2 | |a Lernendes System |0 (DE-588)4120666-6 |D s |
689 | 0 | 3 | |a Maschinelles Lernen |0 (DE-588)4193754-5 |D s |
689 | 0 | |5 DE-604 | |
700 | 1 | |a Chen, Yu-Hsin |e Verfasser |4 aut | |
700 | 1 | |a Yang, Tien-Ju |e Verfasser |4 aut | |
700 | 1 | |a Emer, Joel S. |e Verfasser |4 aut | |
776 | 0 | 8 | |i Erscheint auch als |n Druck-Ausgabe, Paperback |z 978-1-68173-831-4 |
776 | 0 | 8 | |i Erscheint auch als |n Druck-Ausgabe, Hardcover |z 978-1-68173-833-8 |
830 | 0 | |a Synthesis lectures on computer architecture |v #50 |w (DE-604)BV047042546 |9 50 | |
912 | |a ZDB-30-PQE | ||
999 | |a oai:aleph.bib-bvb.de:BVB01-032433936 | ||
966 | e | |u https://ebookcentral.proquest.com/lib/munchentech/detail.action?docID=6242895 |l TUM01 |p ZDB-30-PQE |q TUM_Einzelkauf |x Aggregator |3 Volltext |
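To work with this record programmatically, one option is to parse a MARCXML export of the record above with the pymarc library. The sketch below is illustrative only: the file name record.xml is an assumed placeholder for any MARCXML serialization of this record (for example, an export from the catalog).

```python
# Minimal sketch: read the bibliographic core of this record from a
# MARCXML export. "record.xml" is an assumed placeholder file name;
# pymarc must be installed (pip install pymarc).
from pymarc import parse_xml_to_array

record = parse_xml_to_array("record.xml")[0]  # first (and only) record in the file

# 245 $a carries the title proper (field 245 above).
title = record["245"]["a"]

# 100 $a is the main author; each 700 $a is an added author (fields 100/700 above).
authors = [record["100"]["a"]] + [f["a"] for f in record.get_fields("700")]

# 020 $a is the ISBN of this electronic manifestation (field 020 above).
isbn = record["020"]["a"]

print(title)    # Efficient processing of deep neural networks
print(authors)  # ['Sze, Vivienne', 'Chen, Yu-Hsin', 'Yang, Tien-Ju', 'Emer, Joel S.']
print(isbn)     # 9781681738321
```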
Record in the search index
_version_ | 1804181996011782145 |
---|---|
adam_txt | |
any_adam_object | |
any_adam_object_boolean | |
author | Sze, Vivienne Chen, Yu-Hsin Yang, Tien-Ju Emer, Joel S. |
author_GND | (DE-588)1062747607 |
author_facet | Sze, Vivienne Chen, Yu-Hsin Yang, Tien-Ju Emer, Joel S. |
author_role | aut aut aut aut |
author_sort | Sze, Vivienne |
author_variant | v s vs y h c yhc t j y tjy j s e js jse |
building | Verbundindex |
bvnumber | BV047026574 |
classification_rvk | ST 301 |
classification_tum | DAT 708 |
collection | ZDB-30-PQE |
ctrlnum | (OCoLC)1225882628 (DE-599)KEP054359155 |
discipline | Informatik |
discipline_str_mv | Informatik |
format | Electronic eBook |
id | DE-604.BV047026574 |
illustrated | Not Illustrated |
index_date | 2024-07-03T16:01:11Z |
indexdate | 2024-07-10T09:00:31Z |
institution | BVB |
isbn | 9781681738321 |
language | English |
oai_aleph_id | oai:aleph.bib-bvb.de:BVB01-032433936 |
oclc_num | 1225882628 |
open_access_boolean | |
owner | DE-91G DE-BY-TUM |
owner_facet | DE-91G DE-BY-TUM |
physical | 1 Online-Ressource |
psigel | ZDB-30-PQE ZDB-30-PQE TUM_Einzelkauf |
publishDate | 2020 |
publishDateSearch | 2020 |
publishDateSort | 2020 |
publisher | Morgan & Claypool Publishers |
record_format | marc |
series | Synthesis lectures on computer architecture |
series2 | Synthesis lectures on computer architecture |
spellingShingle | Sze, Vivienne Chen, Yu-Hsin Yang, Tien-Ju Emer, Joel S. Efficient processing of deep neural networks Synthesis lectures on computer architecture Neuronales Netz (DE-588)4226127-2 gnd Lernendes System (DE-588)4120666-6 gnd Maschinelles Lernen (DE-588)4193754-5 gnd Deep learning (DE-588)1135597375 gnd |
subject_GND | (DE-588)4226127-2 (DE-588)4120666-6 (DE-588)4193754-5 (DE-588)1135597375 |
title | Efficient processing of deep neural networks |
title_auth | Efficient processing of deep neural networks |
title_exact_search | Efficient processing of deep neural networks |
title_exact_search_txtP | Efficient processing of deep neural networks |
title_full | Efficient processing of deep neural networks Vivienne Sze, Yu-Hsin Chen, and Tien-Ju Yang (Massachusetts Institute of Technology), Joel S. Emer (Massachusetts Institute of Technology and Nvidia Research) |
title_fullStr | Efficient processing of deep neural networks Vivienne Sze, Yu-Hsin Chen, and Tien-Ju Yang (Massachusetts Institute of Technology), Joel S. Emer (Massachusetts Institute of Technology and Nvidia Research) |
title_full_unstemmed | Efficient processing of deep neural networks Vivienne Sze, Yu-Hsin Chen, and Tien-Ju Yang (Massachusetts Institute of Technology), Joel S. Emer (Massachusetts Institute of Technology and Nvidia Research) |
title_short | Efficient processing of deep neural networks |
title_sort | efficient processing of deep neural networks |
topic | Neuronales Netz (DE-588)4226127-2 gnd Lernendes System (DE-588)4120666-6 gnd Maschinelles Lernen (DE-588)4193754-5 gnd Deep learning (DE-588)1135597375 gnd |
topic_facet | Neuronales Netz Lernendes System Maschinelles Lernen Deep learning |
volume_link | (DE-604)BV047042546 |
work_keys_str_mv | AT szevivienne efficientprocessingofdeepneuralnetworks AT chenyuhsin efficientprocessingofdeepneuralnetworks AT yangtienju efficientprocessingofdeepneuralnetworks AT emerjoels efficientprocessingofdeepneuralnetworks |
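Since the oai_aleph_id field identifies this record for OAI-PMH harvesting, it can in principle be fetched directly from the union catalog's OAI-PMH interface. A minimal sketch follows, assuming a hypothetical base URL (the real BVB endpoint is not given in this record); GetRecord and its parameters are part of the OAI-PMH 2.0 specification, but the metadataPrefix value is server-dependent.

```python
# Hedged sketch: retrieve this record via OAI-PMH GetRecord, using the
# identifier from the oai_aleph_id field above. BASE_URL is an assumed
# placeholder, not the real BVB endpoint; "marc21" is merely a common
# choice of metadataPrefix and may differ on the actual server.
import urllib.parse
import urllib.request

BASE_URL = "https://example.org/oai"  # placeholder assumption
params = {
    "verb": "GetRecord",
    "metadataPrefix": "marc21",
    "identifier": "oai:aleph.bib-bvb.de:BVB01-032433936",
}
url = BASE_URL + "?" + urllib.parse.urlencode(params)

with urllib.request.urlopen(url) as resp:
    # The response is an OAI-PMH XML envelope wrapping the MARCXML record.
    print(resp.read().decode("utf-8")[:500])
```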