Efficient processing of deep neural networks
Saved in:
Main authors: | Sze, Vivienne; Chen, Yuhsin; Yang, Tien-Ju; Emer, Joel S. |
---|---|
Format: | Book |
Language: | English |
Published: | San Rafael : Morgan & Claypool Publishers, [2020] |
Series: | Synthesis lectures on computer architecture; 50 |
Subjects: | Maschinelles Lernen; Deep learning; Neuronales Netz; Lernendes System |
Online access: | Table of contents |
Note: | Bibliography on pages 283-316 |
Physical description: | xxi, 319 pages; illustrations, diagrams |
ISBN: | 9781681738314 |
Internal format
MARC
LEADER | 00000nam a2200000 cb4500 | ||
---|---|---|---|
001 | BV046957178 | ||
003 | DE-604 | ||
005 | 20211007 | ||
007 | t | ||
008 | 201023s2020 a||| |||| 00||| eng d | ||
020 | |a 9781681738314 |9 9781681738314 | ||
035 | |a (OCoLC)1224011472 | ||
035 | |a (DE-599)OBVAC15635508 | ||
040 | |a DE-604 |b ger |e rda | ||
041 | 0 | |a eng | |
049 | |a DE-83 |a DE-739 |a DE-20 | ||
084 | |a ST 301 |0 (DE-625)143651: |2 rvk | ||
100 | 1 | |a Sze, Vivienne |e Verfasser |0 (DE-588)1062747607 |4 aut | |
245 | 1 | 0 | |a Efficient processing of deep neural networks |c Vivienne Sze, Yu-Hsin Chen, and Tien-Ju Yang (Massachusetts Institute of Technology), Joel S. Emer (Massachusetts Institute of Technology and Nvidia Research) |
264 | 1 | |a San Rafael |b Morgan & Claypool Publishers |c [2020] | |
264 | 4 | |c © 2020 | |
300 | |a xxi, 319 Seiten |b Illustrationen, Diagramme | ||
336 | |b txt |2 rdacontent | ||
337 | |b n |2 rdamedia | ||
338 | |b nc |2 rdacarrier | ||
490 | 1 | |a Synthesis lectures on computer architecture |v 50 | |
500 | |a Literaturverzeichnis Seite 283-316 | ||
650 | 0 | 7 | |a Maschinelles Lernen |0 (DE-588)4193754-5 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Deep learning |0 (DE-588)1135597375 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Neuronales Netz |0 (DE-588)4226127-2 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Lernendes System |0 (DE-588)4120666-6 |2 gnd |9 rswk-swf |
689 | 0 | 0 | |a Neuronales Netz |0 (DE-588)4226127-2 |D s |
689 | 0 | 1 | |a Deep learning |0 (DE-588)1135597375 |D s |
689 | 0 | 2 | |a Lernendes System |0 (DE-588)4120666-6 |D s |
689 | 0 | 3 | |a Maschinelles Lernen |0 (DE-588)4193754-5 |D s |
689 | 0 | |5 DE-604 | |
700 | 1 | |a Chen, Yuhsin |e Verfasser |0 (DE-588)1173171770 |4 aut | |
700 | 1 | |a Yang, Tien-Ju |e Verfasser |4 aut | |
700 | 1 | |a Emer, Joel S. |e Verfasser |4 aut | |
776 | 0 | 8 | |i Erscheint auch als |n Online-Ausgabe |z 9781681738321 |
830 | 0 | |a Synthesis lectures on computer architecture |v 50 |w (DE-604)BV023068349 |9 50 | |
856 | 4 | 2 | |m Digitalisierung UB Passau - ADAM Catalogue Enrichment |q application/pdf |u http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=032365598&sequence=000001&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA |3 Inhaltsverzeichnis |
999 | |a oai:aleph.bib-bvb.de:BVB01-032365598 |
Record in search index
_version_ | 1804181873414373376 |
---|---|
adam_text | Contents

Preface xvii
Acknowledgments xxi

PART I: Understanding Deep Neural Networks 1

1 Introduction 3
  1.1 Background on Deep Neural Networks 3
    1.1.1 Artificial Intelligence and Deep Neural Networks 3
    1.1.2 Neural Networks and Deep Neural Networks 6
  1.2 Training versus Inference 8
  1.3 Development History 11
  1.4 Applications of DNNs 13
  1.5 Embedded versus Cloud 14

2 Overview of Deep Neural Networks 17
  2.1 Attributes of Connections Within a Layer 17
  2.2 Attributes of Connections Between Layers 19
  2.3 Popular Types of Layers in DNNs 19
    2.3.1 CONV Layer (Convolutional) 19
    2.3.2 FC Layer (Fully Connected) 23
    2.3.3 Nonlinearity 23
    2.3.4 Pooling and Unpooling 24
    2.3.5 Normalization 25
    2.3.6 Compound Layers 26
  2.4 Convolutional Neural Networks (CNNs) 26
    2.4.1 Popular CNN Models 27
  2.5 Other DNNs 34
  2.6 DNN Development Resources 36
    2.6.1 Frameworks 36
    2.6.2 Models 37
    2.6.3 Popular Datasets for Classification 37
    2.6.4 Datasets for Other Tasks 38
    2.6.5 Summary 39

PART II: Design of Hardware for Processing DNNs 41

3 Key Metrics and Design Objectives 43
  3.1 Accuracy 43
  3.2 Throughput and Latency 44
  3.3 Energy Efficiency and Power Consumption 50
  3.4 Hardware Cost 54
  3.5 Flexibility 55
  3.6 Scalability 56
  3.7 Interplay Between Different Metrics 57

4 Kernel Computation 59
  4.1 Matrix Multiplication with Toeplitz 61
  4.2 Tiling for Optimizing Performance 61
  4.3 Computation Transform Optimizations 67
    4.3.1 Gauss' Complex Multiplication Transform 67
    4.3.2 Strassen's Matrix Multiplication Transform 68
    4.3.3 Winograd Transform 69
    4.3.4 Fast Fourier Transform 70
    4.3.5 Selecting a Transform 71
  4.4 Summary 72

5 Designing DNN Accelerators 73
  5.1 Evaluation Metrics and Design Objectives 74
  5.2 Key Properties of DNN to Leverage 75
  5.3 DNN Hardware Design Considerations 77
  5.4 Architectural Techniques for Exploiting Data Reuse
    5.4.1 Temporal Reuse
    5.4.2 Spatial Reuse 81
  5.5 Techniques to Reduce Reuse Distance 84
  5.6 Dataflows and Loop Nests 87
  5.7 Dataflow Taxonomy 92
    5.7.1 Weight Stationary (WS) 93
    5.7.2 Output Stationary (OS) 95
    5.7.3 Input Stationary (IS) 96
    5.7.4 Row Stationary (RS) 99
    5.7.5 Other Dataflows 105
    5.7.6 Dataflows for Cross-Layer Processing 106
  5.8 DNN Accelerator Buffer Management Strategies 107
    5.8.1 Implicit versus Explicit Orchestration 107
    5.8.2 Coupled versus Decoupled Orchestration 109
    5.8.3 Explicit Decoupled Data Orchestration (EDDO) 110
  5.9 Flexible NoC Design for DNN Accelerators 113
    5.9.1 Flexible Hierarchical Mesh Network 116
  5.10 Summary 118

6 Operation Mapping on Specialized Hardware 119
  6.1 Mapping and Loop Nests 120
  6.2 Mappers and Compilers 123
  6.3 Mapper Organization 125
    6.3.1 Map Spaces and Iteration Spaces 126
    6.3.2 Mapper Search 130
    6.3.3 Mapper Models and Configuration Generation 130
  6.4 Analysis Framework for Energy Efficiency 131
    6.4.1 Input Data Access Energy Cost 132
    6.4.2 Partial Sum Accumulation Energy Cost 132
    6.4.3 Obtaining the Reuse Parameters 133
  6.5 Eyexam: Framework for Evaluating Performance 134
    6.5.1 Simple 1-D Convolution Example 136
    6.5.2 Apply Performance Analysis Framework to 1-D Example 137
  6.6 Tools for Map Space Exploration 140

PART III: Co-Design of DNN Hardware and Algorithms 145

7 Reducing Precision 147
  7.1 Benefits of Reduced Precision 147
  7.2 Determining the Bit Width 149
    7.2.1 Quantization 149
    7.2.2 Standard Components of the Bit Width 154
  7.3 Mixed Precision: Different Precision for Different Data Types 159
  7.4 Varying Precision: Change Precision for Different Parts of the DNN 160
  7.5 Binary Nets 163
  7.6 Interplay Between Precision and Other Design Choices 165
  7.7 Summary of Design Considerations for Reducing Precision 165

8 Exploiting Sparsity 167
  8.1 Sources of Sparsity 167
    8.1.1 Activation Sparsity 168
    8.1.2 Weight Sparsity 176
  8.2 Compression 187
    8.2.1 Tensor Terminology 188
    8.2.2 Classification of Tensor Representations 193
    8.2.3 Representation of Payloads 196
    8.2.4 Representation Optimizations 196
    8.2.5 Tensor Representation Notation 198
  8.3 Sparse Dataflow 200
    8.3.1 Exploiting Sparse Weights 203
    8.3.2 Exploiting Sparse Activations 212
    8.3.3 Exploiting Sparse Weights and Activations 215
    8.3.4 Exploiting Sparsity in FC Layers 223
    8.3.5 Summary of Sparse Dataflows 227
  8.4 Summary 227

9 Designing Efficient DNN Models 229
  9.1 Manual Network Design 230
    9.1.1 Improving Efficiency of CONV Layers 230
    9.1.2 Improving Efficiency of FC Layers 238
    9.1.3 Improving Efficiency of Network Architecture After Training 239
  9.2 Neural Architecture Search 240
    9.2.1 Shrinking the Search Space 242
    9.2.2 Improving the Optimization Algorithm 244
    9.2.3 Accelerating the Performance Evaluation 246
    9.2.4 Example of Neural Architecture Search 248
  9.3 Knowledge Distillation 249
  9.4 Design Considerations for Efficient DNN Models 251

10 Advanced Technologies 253
  10.1 Processing Near Memory 254
    10.1.1 Embedded High-Density Memories 255
    10.1.2 Stacked Memory (3-D Memory) 255
  10.2 Processing in Memory 257
    10.2.1 Non-Volatile Memories (NVM) 261
    10.2.2 Static Random Access Memories (SRAM) 263
    10.2.3 Dynamic Random Access Memories (DRAM) 264
    10.2.4 Design Challenges 267
  10.3 Processing in Sensor 277
  10.4 Processing in the Optical Domain 278

11 Conclusion 281

Bibliography 283
Authors' Biographies 317
|
any_adam_object | 1 |
any_adam_object_boolean | 1 |
author | Sze, Vivienne Chen, Yuhsin Yang, Tien-Ju Emer, Joel S. |
author_GND | (DE-588)1062747607 (DE-588)1173171770 |
author_facet | Sze, Vivienne Chen, Yuhsin Yang, Tien-Ju Emer, Joel S. |
author_role | aut aut aut aut |
author_sort | Sze, Vivienne |
author_variant | v s vs y c yc t j y tjy j s e js jse |
building | Verbundindex |
bvnumber | BV046957178 |
classification_rvk | ST 301 |
ctrlnum | (OCoLC)1224011472 (DE-599)OBVAC15635508 |
discipline | Informatik |
discipline_str_mv | Informatik |
format | Book |
id | DE-604.BV046957178 |
illustrated | Illustrated |
index_date | 2024-07-03T15:42:54Z |
indexdate | 2024-07-10T08:58:34Z |
institution | BVB |
isbn | 9781681738314 |
language | English |
oai_aleph_id | oai:aleph.bib-bvb.de:BVB01-032365598 |
oclc_num | 1224011472 |
open_access_boolean | |
owner | DE-83 DE-739 DE-20 |
owner_facet | DE-83 DE-739 DE-20 |
physical | xxi, 319 Seiten Illustrationen, Diagramme |
publishDate | 2020 |
publishDateSearch | 2020 |
publishDateSort | 2020 |
publisher | Morgan & Claypool Publishers |
record_format | marc |
series | Synthesis lectures on computer architecture |
series2 | Synthesis lectures on computer architecture |
spelling | Sze, Vivienne Verfasser (DE-588)1062747607 aut Efficient processing of deep neural networks Vivienne Sze, Yu-Hsin Chen, and Tien-Ju Yang (Massachusetts Institute of Technology), Joel S. Emer (Massachusetts Institute of Technology and Nvidia Research) San Rafael Morgan & Claypool Publishers [2020] © 2020 xxi, 319 Seiten Illustrationen, Diagramme txt rdacontent n rdamedia nc rdacarrier Synthesis lectures on computer architecture 50 Literaturverzeichnis Seite 283-316 Maschinelles Lernen (DE-588)4193754-5 gnd rswk-swf Deep learning (DE-588)1135597375 gnd rswk-swf Neuronales Netz (DE-588)4226127-2 gnd rswk-swf Lernendes System (DE-588)4120666-6 gnd rswk-swf Neuronales Netz (DE-588)4226127-2 s Deep learning (DE-588)1135597375 s Lernendes System (DE-588)4120666-6 s Maschinelles Lernen (DE-588)4193754-5 s DE-604 Chen, Yuhsin Verfasser (DE-588)1173171770 aut Yang, Tien-Ju Verfasser aut Emer, Joel S. Verfasser aut Erscheint auch als Online-Ausgabe 9781681738321 Synthesis lectures on computer architecture 50 (DE-604)BV023068349 50 Digitalisierung UB Passau - ADAM Catalogue Enrichment application/pdf http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=032365598&sequence=000001&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA Inhaltsverzeichnis |
spellingShingle | Sze, Vivienne Chen, Yuhsin Yang, Tien-Ju Emer, Joel S. Efficient processing of deep neural networks Synthesis lectures on computer architecture Maschinelles Lernen (DE-588)4193754-5 gnd Deep learning (DE-588)1135597375 gnd Neuronales Netz (DE-588)4226127-2 gnd Lernendes System (DE-588)4120666-6 gnd |
subject_GND | (DE-588)4193754-5 (DE-588)1135597375 (DE-588)4226127-2 (DE-588)4120666-6 |
title | Efficient processing of deep neural networks |
title_auth | Efficient processing of deep neural networks |
title_exact_search | Efficient processing of deep neural networks |
title_exact_search_txtP | Efficient processing of deep neural networks |
title_full | Efficient processing of deep neural networks Vivienne Sze, Yu-Hsin Chen, and Tien-Ju Yang (Massachusetts Institute of Technology), Joel S. Emer (Massachusetts Institute of Technology and Nvidia Research) |
title_fullStr | Efficient processing of deep neural networks Vivienne Sze, Yu-Hsin Chen, and Tien-Ju Yang (Massachusetts Institute of Technology), Joel S. Emer (Massachusetts Institute of Technology and Nvidia Research) |
title_full_unstemmed | Efficient processing of deep neural networks Vivienne Sze, Yu-Hsin Chen, and Tien-Ju Yang (Massachusetts Institute of Technology), Joel S. Emer (Massachusetts Institute of Technology and Nvidia Research) |
title_short | Efficient processing of deep neural networks |
title_sort | efficient processing of deep neural networks |
topic | Maschinelles Lernen (DE-588)4193754-5 gnd Deep learning (DE-588)1135597375 gnd Neuronales Netz (DE-588)4226127-2 gnd Lernendes System (DE-588)4120666-6 gnd |
topic_facet | Maschinelles Lernen Deep learning Neuronales Netz Lernendes System |
url | http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=032365598&sequence=000001&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA |
volume_link | (DE-604)BV023068349 |
work_keys_str_mv | AT szevivienne efficientprocessingofdeepneuralnetworks AT chenyuhsin efficientprocessingofdeepneuralnetworks AT yangtienju efficientprocessingofdeepneuralnetworks AT emerjoels efficientprocessingofdeepneuralnetworks |