Adversarial machine learning:
Saved in:
Main Author: Vorobeychik, Yevgeniy
Format: Electronic eBook
Language: English
Published: [San Rafael] : Morgan & Claypool, 2018
Series: Synthesis lectures on artificial intelligence and machine learning ; #38
Subjects: Machine learning
Online Access: UBR01 FHI01 Full text
Summary: The increasing abundance of large, high-quality datasets, combined with significant technical advances over the last several decades, has made machine learning a major tool employed across a broad array of tasks including vision, language, finance, and security. However, this success has been accompanied by important new challenges: many applications of machine learning are adversarial in nature. Some are adversarial because they are safety critical, such as autonomous driving. An adversary in these applications can be a malicious party aiming to cause congestion or accidents, or may even model unusual situations that expose vulnerabilities in the prediction engine. Other applications are adversarial because the task and/or the data they use are. For example, an important class of problems in security involves detection, such as malware, spam, and intrusion detection. The use of machine learning for detecting malicious entities creates an incentive among adversaries to evade detection by changing their behavior or the content of the malicious objects they develop. The field of adversarial machine learning has emerged to study vulnerabilities of machine learning approaches in adversarial settings and to develop techniques to make learning robust to adversarial manipulation. This book provides a technical overview of the field. After reviewing machine learning concepts and approaches, as well as common use cases of these in adversarial settings, we present a general categorization of attacks on machine learning. We then address two major categories of attacks and associated defenses: decision-time attacks, in which an adversary changes the nature of instances seen by a learned model at prediction time in order to cause errors, and poisoning or training-time attacks, in which the actual training dataset is maliciously modified.
In our final chapter devoted to technical content, we discuss recent techniques for attacks on deep learning, as well as approaches for improving the robustness of deep neural networks. We conclude with a discussion of several important issues in the area of adversarial learning that in our view warrant further research. Given the increasing interest in the area of adversarial machine learning, we hope this book provides readers with the tools necessary to successfully engage in research and practice of machine learning in adversarial settings.
Description: Part of: Synthesis digital library of engineering and computer science. Title from PDF title page (viewed on August 29, 2018)
Description: 1 online resource (xvii, 152 pages), illustrations
ISBN: 9781681733968, 9783031015809
DOI: 10.2200/S00861ED1V01Y201806AIM039
Internal format: MARC
LEADER | 00000nmm a2200000zcb4500 | ||
001 | BV046427621 | ||
003 | DE-604 | ||
005 | 20220727 | ||
007 | cr|uuu---uuuuu | ||
008 | 200217s2018 |||| o||u| ||||||eng d | ||
020 | |a 9781681733968 |9 978-1-68173-396-8 | ||
020 | |a 9783031015809 |c PDF Springer |9 978-3-031-01580-9 | ||
024 | 7 | |a 10.2200/S00861ED1V01Y201806AIM039 |2 doi | |
024 | 7 | |a 10.1007/978-3-031-01580-9 |2 doi | |
035 | |a (ZDB-105-MCS)8436571 | ||
035 | |a (OCoLC)1141128242 | ||
035 | |a (DE-599)BVBBV046427621 | ||
040 | |a DE-604 |b ger |e rda | ||
041 | 0 | |a eng | |
049 | |a DE-355 |a DE-573 | ||
082 | 0 | |a 006.31 |2 23 | |
084 | |a ST 300 |0 (DE-625)143650: |2 rvk | ||
100 | 1 | |a Vorobeychik, Yevgeniy |e Verfasser |0 (DE-588)134050258 |4 aut | |
245 | 1 | 0 | |a Adversarial machine learning |c Yevgeniy Vorobeychik (Vanderbilt University), Murat Kantarcioglu (University of Texas, Dallas)
264 | 1 | |a [San Rafael] |b Morgan & Claypool |c 2018 | |
300 | |a 1 Online-Resource (xvii, 152 Seiten) |b Illustrationen | ||
336 | |b txt |2 rdacontent | ||
337 | |b c |2 rdamedia | ||
338 | |b cr |2 rdacarrier | ||
490 | 1 | |a Synthesis lectures on artificial intelligence and machine learning |v #38 | |
500 | |a Part of: Synthesis digital library of engineering and computer science | ||
500 | |a Title from PDF title page (viewed on August 29, 2018) | ||
520 | |a The increasing abundance of large high-quality datasets, combined with significant technical advances over the last several decades have made machine learning into a major tool employed across a broad array of tasks including vision, language, finance, and security. However, success has been accompanied with important new challenges: many applications of machine learning are adversarial in nature. Some are adversarial because they are safety critical, such as autonomous driving. An adversary in these applications can be a malicious party aimed at causing congestion or accidents, or may even model unusual situations that expose vulnerabilities in the prediction engine. Other applications are adversarial because their task and/or the data they use are. For example, an important class of problems in security involves detection, such as malware, spam, and intrusion detection. | ||
520 | |a The use of machine learning for detecting malicious entities creates an incentive among adversaries to evade detection by changing their behavior or the content of malicious objects they develop. The field of adversarial machine learning has emerged to study vulnerabilities of machine learning approaches in adversarial settings and to develop techniques to make learning robust to adversarial manipulation. This book provides a technical overview of this field. After reviewing machine learning concepts and approaches, as well as common use cases of these in adversarial settings, we present a general categorization of attacks on machine learning. We then address two major categories of attacks and associated defenses: decision-time attacks, in which an adversary changes the nature of instances seen by a learned model at the time of prediction in order to cause errors, and poisoning or training time attacks, in which the actual training dataset is maliciously modified. | ||
520 | |a In our final chapter devoted to technical content, we discuss recent techniques for attacks on deep learning, as well as approaches for improving robustness of deep neural networks. We conclude with a discussion of several important issues in the area of adversarial learning that in our view warrant further research. Given the increasing interest in the area of adversarial machine learning, we hope this book provides readers with the tools necessary to successfully engage in research and practice of machine learning in adversarial settings | ||
650 | 4 | |a Machine learning | |
700 | 1 | |a Kantarcioglu, Murat |e Sonstige |0 (DE-588)1207569208 |4 oth | |
776 | 0 | 8 | |i Erscheint auch als |n Druck-Ausgabe |z 9781681733951 |z 9781681733975 |z 978-3-031-00452-0 |z 978-3-031-00025-6 |
776 | 0 | 8 | |i Erscheint auch als |n Online-Ausgabe, EPUB |z 978-3-031-02708-6 |
830 | 0 | |a Synthesis lectures on artificial intelligence and machine learning |v #38 |w (DE-604)BV043983076 |9 38 | |
856 | 4 | 0 | |u https://doi.org/10.2200/S00861ED1V01Y201806AIM039 |x Verlag |z URL des Erstveröffentlichers |3 Volltext |
912 | |a ZDB-105-MCB |a ZDB-105-MCS |a ZDB-2-SXSC | ||
999 | |a oai:aleph.bib-bvb.de:BVB01-031839924 | ||
966 | e | |u https://www.doi.org/10.1007/978-3-031-01580-9 |l UBR01 |p ZDB-105-MCB |q UBR_Pick&Choose 2022 |x Verlag |3 Volltext | |
966 | e | |u https://doi.org/10.1007/978-3-031-01580-9 |l FHI01 |p ZDB-2-SXSC |x Verlag |3 Volltext |