Accelerators for convolutional neural networks:
Saved in:
Main authors: | Munir, Arslan; Kong, Joonho; Qureshi, Mahmood Azhar |
---|---|
Format: | Electronic eBook |
Language: | English |
Published: | Hoboken, NJ: John Wiley & Sons, [2024]; New Jersey: IEEE Press |
Online access: | DE-573 ZDB-35-WIC Full text |
Summary: | Accelerators for Convolutional Neural Networks is a comprehensive and thorough resource exploring different types of convolutional neural networks and complementary accelerators. It provides basic deep learning knowledge and instructive content for building convolutional neural network (CNN) accelerators for Internet of things (IoT) and edge computing practitioners. It elucidates compressive coding for CNNs, presents a two-step lossless compression method for input feature maps, discusses an arithmetic coding-based lossless weights compression method and the design of an associated decoding method, describes contemporary sparse CNNs that consider sparsity in both weights and activation maps, and discusses hardware/software co-design and co-scheduling techniques that can lead to better optimization and utilization of the available hardware resources for CNN acceleration. The first part of the book provides an overview of CNNs along with the composition and parameters of different contemporary CNN models. Later chapters focus on compressive coding for CNNs and the design of dense CNN accelerators. The book also provides directions for future research and development for CNN accelerators.
Other sample topics covered in Accelerators for Convolutional Neural Networks include: how to apply arithmetic coding and decoding with range scaling for lossless compression of 5-bit CNN weights, enabling CNN deployment in extremely resource-constrained systems; state-of-the-art research on dense CNN accelerators, which are mostly based on systolic arrays or parallel multiply-accumulate (MAC) arrays; the iMAC dense CNN accelerator, which combines image-to-column (im2col) and general matrix multiplication (GEMM) hardware acceleration; a multi-threaded, low-cost, log-based processing element (PE) core, instances of which are stacked in a spatial grid to form the NeuroMAX dense accelerator; and Sparse-PE, a multi-threaded and flexible CNN PE core that exploits sparsity in both weights and activation maps, instances of which can be stacked in a spatial grid to form sparse CNN accelerators. For researchers in AI, computer vision, computer architecture, and embedded systems, |
Description: | 1 online resource (xvi, 288 pages), illustrations, diagrams |
ISBN: | 9781394171910 9781394171897 |
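Among the topics listed in the summary, the iMAC accelerator lowers convolution to image-to-column (im2col) unrolling followed by a general matrix multiplication (GEMM). As a minimal software illustration of that general lowering (a sketch only, not the book's hardware design; a single input channel, stride 1, and no padding are assumed):

```python
import numpy as np

def im2col(x, kh, kw):
    """Unroll each kh x kw sliding window of a 2D image into one column."""
    h, w = x.shape
    out_h, out_w = h - kh + 1, w - kw + 1
    cols = np.empty((kh * kw, out_h * out_w))
    idx = 0
    for i in range(out_h):
        for j in range(out_w):
            cols[:, idx] = x[i:i + kh, j:j + kw].ravel()
            idx += 1
    return cols

def conv2d_gemm(x, k):
    """Valid 2D convolution (as cross-correlation) via im2col + one GEMM."""
    kh, kw = k.shape
    cols = im2col(x, kh, kw)   # shape: (kh*kw, out_h*out_w)
    out = k.ravel() @ cols     # flattened kernel times unrolled windows
    out_h, out_w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    return out.reshape(out_h, out_w)
```

The point of the lowering is that all multiply-accumulate work lands in a single dense matrix product, which maps directly onto MAC or systolic arrays.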
Internal format
MARC
LEADER | 00000nmm a22000001c 4500 | ||
---|---|---|---|
001 | BV049579153 | ||
003 | DE-604 | ||
005 | 20240716 | ||
007 | cr|uuu---uuuuu | ||
008 | 240221s2024 |||| o||u| ||||||eng d | ||
020 | |a 9781394171910 |c obook |9 9781394171910 | ||
020 | |a 9781394171897 |c pdf |9 9781394171897 | ||
024 | 7 | |a 10.1002/9781394171910 |2 doi | |
035 | |a (ZDB-35-WIC)9781394171910 | ||
035 | |a (OCoLC)1427329861 | ||
035 | |a (DE-599)HEB514419962 | ||
040 | |a DE-604 |b ger |e rda | ||
041 | 0 | |a eng | |
049 | |a DE-92 |a DE-573 | ||
100 | 1 | |a Munir, Arslan |e Verfasser |4 aut | |
245 | 1 | 0 | |a Accelerators for convolutional neural networks |c Arslan Munir, Joonho Kong, Mahmood Azhar Qureshi |
264 | 1 | |a Hoboken, NJ |b John Wiley & Sons |c [2024] | |
264 | 1 | |a New Jersey |b IEEE Press | |
264 | 2 | |a Hoboken, NJ |b Wiley | |
300 | |a 1 Online-Ressource (xvi, 288 Seiten) |b Illustrationen, Diagramme | ||
336 | |b txt |2 rdacontent | ||
337 | |b c |2 rdamedia | ||
338 | |b cr |2 rdacarrier | ||
520 | 3 | |a Accelerators for Convolutional Neural Networks Comprehensive and thorough resource exploring different types of convolutional neural networks and complementary accelerators Accelerators for Convolutional Neural Networks provides basic deep learning knowledge and instructive content to build up convolutional neural network (CNN) accelerators for the Internet of things (IoT) and edge computing practitioners, elucidating compressive coding for CNNs, presenting a two-step lossless input feature maps compression method, discussing arithmetic coding -based lossless weights compression method and the design of an associated decoding method, describing contemporary sparse CNNs that consider sparsity in both weights and activation maps, and discussing hardware/software co-design and co-scheduling techniques that can lead to better optimization and utilization of the available hardware resources for CNN acceleration. | |
520 | 3 | |a The first part of the book provides an overview of CNNs along with the composition and parameters of different contemporary CNN models. Later chapters focus on compressive coding for CNNs and the design of dense CNN accelerators. The book also provides directions for future research and development for CNN accelerators. | |
520 | 3 | |a Other sample topics covered in Accelerators for Convolutional Neural Networks include: How to apply arithmetic coding and decoding with range scaling for lossless weight compression for 5-bit CNN weights to deploy CNNs in extremely resource-constrained systems State-of-the-art research surrounding dense CNN accelerators, which are mostly based on systolic arrays or parallel multiply-accumulate (MAC) arrays iMAC dense CNN accelerator, which combines image-to-column (im2col) and general matrix multiplication (GEMM) hardware acceleration Multi-threaded, low-cost, log-based processing element (PE) core, instances of which are stacked in a spatial grid to engender NeuroMAX dense accelerator Sparse-PE, a multi-threaded and flexible CNN PE core that exploits sparsity in both weights and activation maps, instances of which can be stacked in a spatial grid for engendering sparse CNN accelerators For researchers in AI, computer vision, computer architecture, and embedded systems, | |
700 | 1 | |a Kong, Joonho |e Verfasser |4 aut | |
700 | 1 | |a Qureshi, Mahmood Azhar |e Verfasser |4 aut | |
776 | 0 | |z 9781394171880 | |
776 | 0 | |z 1394171889 | |
856 | 4 | 0 | |u https://onlinelibrary.wiley.com/doi/book/10.1002/9781394171910 |x Verlag |z URL des Erstveröffentlichers |3 Volltext |
912 | |a ZDB-35-WIC | ||
940 | 1 | |q ZDB-35-WIC_2024 | |
943 | 1 | |a oai:aleph.bib-bvb.de:BVB01-034924111 | |
966 | e | |u https://onlinelibrary.wiley.com/doi/book/10.1002/9781394171910 |l DE-573 |p ZDB-35-WIC |x Verlag |3 Volltext | |
966 | e | |u https://onlinelibrary.wiley.com/doi/book/10.1002/9781394171910 |l ZDB-35-WIC |p 9781394171910 |x Verlag |3 Volltext |