Strengthening deep neural networks: making AI less susceptible to adversarial trickery
Saved in:

| Main author | Warr, Katy |
|---|---|
| Format | Book |
| Language | English |
| Published | Beijing : O'Reilly, July 2019 |
| Edition | First edition |
| Subjects | Neural networks (Computer science); Neuronales Netz; Robustheit |
| Online access | Blurb; Table of contents |
| Summary | As Deep Neural Networks (DNNs) become increasingly common in real-world applications, the potential to "fool" them presents a new attack vector. In this book, author Katy Warr examines the security implications of how DNNs interpret audio and images very differently to humans. You'll learn about the motivations attackers have for exploiting flaws in DNN algorithms and how to assess the threat to systems incorporating neural network technology. Through practical code examples, this book shows you how DNNs can be fooled and demonstrates the ways they can be hardened against trickery. Readers will learn the basic principles of how DNNs "think" and why this differs from our human understanding of the world; understand adversarial motivations for fooling DNNs and the threat posed to real-world systems; explore approaches for making software systems that incorporate DNNs less susceptible to trickery; and peer into the future of Artificial Neural Networks to learn how these algorithms may evolve to become more robust. |
| Description | xiii, 227 pages; illustrations, diagrams |
| ISBN | 9781492044956 |
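The summary above describes how DNNs can be "fooled" by adversarial inputs. As a minimal illustrative sketch (not taken from the book, and using a toy linear classifier instead of a deep network), the fast-gradient-sign idea behind many such attacks can be shown in a few lines: each input feature is nudged by a small amount `eps` in whichever direction increases the classifier's loss. The function names `predict` and `fgsm_perturb` are our own, chosen for illustration.

```python
# Toy fast-gradient-sign (FGSM-style) attack on a linear classifier.
# Illustrative only: real attacks target deep networks, where the
# gradient is computed by backpropagation rather than read off the weights.

def sign(v):
    """Sign of a scalar: +1, -1, or 0."""
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def predict(w, b, x):
    """Classify x as +1 or -1 using the linear score w.x + b."""
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if s >= 0 else -1

def fgsm_perturb(w, x, y, eps):
    """Move each feature of x by eps in the direction that increases
    the loss for true label y. For a linear score, the gradient of the
    loss with respect to x_i has the sign of -y * w_i."""
    return [xi + eps * sign(-y * wi) for wi, xi in zip(w, x)]

w = [0.6, -0.4, 0.2]          # classifier weights
b = 0.0
x = [0.5, 0.1, 0.3]           # a correctly classified input
y = predict(w, b, x)          # true label: +1
x_adv = fgsm_perturb(w, x, y, eps=0.5)

print(predict(w, b, x), predict(w, b, x_adv))  # → 1 -1
```

Here `eps = 0.5` is deliberately large so the toy example visibly flips the prediction; against real DNNs the perturbation can often be small enough to be imperceptible to humans, which is exactly the threat the book examines.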
Internal format (MARC)
LEADER 00000nam a2200000 c 4500
001     BV046228050
003     DE-604
005     20200824
007     t
008     191104s2019 a||| |||| 00||| eng d
020     |a 9781492044956 |9 978-1-492-04495-6
035     |a (OCoLC)1127305476
035     |a (DE-599)OBVAC15486270
040     |a DE-604 |b ger |e rda
041 0   |a eng
049     |a DE-29T |a DE-898
082 0   |a 006.32
084     |a ST 301 |0 (DE-625)143651: |2 rvk
084     |a 54.72 |2 bkl
084     |a 54.38 |2 bkl
100 1   |a Warr, Katy |e Verfasser |4 aut
245 1 0 |a Strengthening deep neural networks |b making AI less susceptible to adversarial trickery |c Katy Warr
250     |a First edition
264 1   |a Beijing |b O'Reilly |c July 2019
300     |a xiii, 227 Seiten |b Illustrationen, Diagramme
336     |b txt |2 rdacontent
337     |b n |2 rdamedia
338     |b nc |2 rdacarrier
520     |a As Deep Neural Networks (DNNs) become increasingly common in real-world applications, the potential to "fool" them presents a new attack vector. In this book, author Katy Warr examines the security implications of how DNNs interpret audio and images very differently to humans. You'll learn about the motivations attackers have for exploiting flaws in DNN algorithms and how to assess the threat to systems incorporating neural network technology. Through practical code examples, this book shows you how DNNs can be fooled and demonstrates the ways they can be hardened against trickery. Learn the basic principles of how DNNs "think" and why this differs from our human understanding of the world Understand adversarial motivations for fooling DNNs and the threat posed to real-world systems Explore approaches for making software systems that incorporate DNNs less susceptible to trickery Peer into the future of Artificial Neural Networks to learn how these algorithms may evolve to become more robust
650   4 |a Neural networks (Computer science)
650 0 7 |a Neuronales Netz |0 (DE-588)4226127-2 |2 gnd |9 rswk-swf
650 0 7 |a Robustheit |0 (DE-588)4126481-2 |2 gnd |9 rswk-swf
689 0 0 |a Neuronales Netz |0 (DE-588)4226127-2 |D s
689 0 1 |a Robustheit |0 (DE-588)4126481-2 |D s
689 0   |5 DE-604
856 4 2 |m V:AT-OBV;B:AT-UBTUW |q text/plain |u http://media.obvsg.at/AC15486270-3401 |x TUW |3 Klappentext
856 4 2 |m V:AT-OBV;B:AT-UBTUW |q application/pdf |u http://media.obvsg.at/AC15486270-1001 |x TUW |3 Inhaltsverzeichnis
999     |a oai:aleph.bib-bvb.de:BVB01-031606573