Explainable AI: Interpreting, Explaining and Visualizing Deep Learning
Saved in:
Corporate author: | Interpreting, Explaining and Visualizing Deep Learning (Workshop) |
---|---|
Format: | Book |
Language: | English |
Published: | Cham, Switzerland : Springer, [2019] |
Series: | Lecture notes in artificial intelligence ; 11700 |
Subjects: | Künstliche Intelligenz ; Deep learning ; Visualisierung |
Online access: | 1850-9999 |
Summary: | The development of "intelligent" systems that can take decisions and perform autonomously might lead to faster and more consistent decisions. A limiting factor for a broader adoption of AI technology is the inherent risks that come with giving up human control and oversight to "intelligent" machines. For sensitive tasks involving critical infrastructures and affecting human well-being or health, it is crucial to limit the possibility of improper, non-robust and unsafe decisions and actions. Before deploying an AI system, we see a strong need to validate its behavior, and thus establish guarantees that it will continue to perform as expected when deployed in a real-world environment. In pursuit of that objective, ways for humans to verify the agreement between the AI decision structure and their own ground-truth knowledge have been explored. Explainable AI (XAI) has developed as a subfield of AI focused on exposing complex AI models to humans in a systematic and interpretable manner. The 22 chapters included in this book provide a timely snapshot of algorithms, theory, and applications of interpretable and explainable AI and AI techniques that have been proposed recently, reflecting the current discourse in this field and providing directions of future development. The book is organized in six parts: towards AI transparency; methods for interpreting AI systems; explaining the decisions of AI systems; evaluating interpretability and explanations; applications of explainable AI; and software for explainable AI. |
Description: | "ISSN 1611-3349 (electronic)"--t.p. verso |
Description: | xi, 438 pages illustrations (black and white, and colour) 24 cm |
ISBN: | 9783030289539 3030289532 |
Internal format
MARC
LEADER | 00000nam a2200000 cb4500 | ||
---|---|---|---|
001 | BV046261469 | ||
003 | DE-604 | ||
005 | 20200812 | ||
007 | t | ||
008 | 191119s2019 a||| |||| 10||| eng d | ||
020 | |a 9783030289539 |9 978-3-030-28953-9 | ||
020 | |a 3030289532 |9 3-030-28953-2 | ||
035 | |a (OCoLC)1121483582 | ||
035 | |a (DE-599)BVBBV046261469 | ||
040 | |a DE-604 |b ger |e rda | ||
041 | 0 | |a eng | |
049 | |a DE-19 | ||
084 | |a ST 300 |0 (DE-625)143650: |2 rvk | ||
110 | 2 | |a Interpreting, Explaining and Visualizing Deep Learning (Workshop) |d (2017 |c Long Beach, California) |e Verfasser |4 aut | |
245 | 1 | 0 | |a Explainable AI |b Interpreting, Explaining and Visualizing Deep Learning |c Wojciech Samek [and four others], (Eds.) |
264 | 1 | |a Cham, Switzerland |b Springer |c [2019] | |
300 | |a xi, 438 pages |b illustrations (black and white, and colour) |c 24 cm | ||
336 | |b txt |2 rdacontent | ||
337 | |b n |2 rdamedia | ||
338 | |b nc |2 rdacarrier | ||
490 | 1 | |a Lecture notes in artificial intelligence |v 11700 | |
490 | 0 | |a State-of-the-art survey | |
490 | 0 | |a LNCS Sublibrary: SL:7 - Artificial Intelligence | |
500 | |a "ISSN 1611-3349 (electronic)"--t.p. verso | ||
520 | 3 | |a The development of "intelligent" systems that can take decisions and perform autonomously might lead to faster and more consistent decisions. A limiting factor for a broader adoption of AI technology is the inherent risks that come with giving up human control and oversight to "intelligent" machines. For sensitive tasks involving critical infrastructures and affecting human well-being or health, it is crucial to limit the possibility of improper, non-robust and unsafe decisions and actions. Before deploying an AI system, we see a strong need to validate its behavior, and thus establish guarantees that it will continue to perform as expected when deployed in a real-world environment. In pursuit of that objective, ways for humans to verify the agreement between the AI decision structure and their own ground-truth knowledge have been explored. Explainable AI (XAI) has developed as a subfield of AI, focused on exposing complex AI models to humans in a systematic and interpretable manner. The 22 chapters included in this book provide a timely snapshot of algorithms, theory, and applications of interpretable and explainable AI and AI techniques that have been proposed recently reflecting the current discourse in this field and providing directions of future development. The book is organized in six parts: towards AI transparency; methods for interpreting AI systems; explaining the decisions of AI systems; evaluating interpretability and explanations; applications of explainable AI; and software for explainable AI. -- | |
650 | 0 | 7 | |a Visualisierung |0 (DE-588)4188417-6 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Künstliche Intelligenz |0 (DE-588)4033447-8 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Deep learning |0 (DE-588)1135597375 |2 gnd |9 rswk-swf |
653 | 0 | |a Neural networks (Computer science) / Congresses | |
653 | 0 | |a Artificial intelligence / Congresses | |
653 | 0 | |a Artificial intelligence | |
653 | 0 | |a Neural networks (Computer science) | |
653 | 6 | |a Conference papers and proceedings | |
655 | 7 | |8 1\p |0 (DE-588)1071861417 |a Konferenzschrift |2 gnd-content | |
689 | 0 | 0 | |a Künstliche Intelligenz |0 (DE-588)4033447-8 |D s |
689 | 0 | 1 | |a Deep learning |0 (DE-588)1135597375 |D s |
689 | 0 | 2 | |a Visualisierung |0 (DE-588)4188417-6 |D s |
689 | 0 | |8 2\p |5 DE-604 | |
700 | 1 | |a Samek, Wojciech |e Sonstige |0 (DE-588)1056981113 |4 oth | |
710 | 2 | |a NIPS (Conference) |d (2017 |c Long Beach, California) |e Sonstige |4 oth | |
830 | 0 | |a Lecture notes in artificial intelligence |v 11700 |w (DE-604)BV025243459 |9 11700 | |
856 | 4 | 2 | |u http://link.springer.com |x BLDSS |3 1850-9999 |
999 | |a oai:aleph.bib-bvb.de:BVB01-031639682 | ||
883 | 1 | |8 1\p |a cgwrk |d 20201028 |q DE-101 |u https://d-nb.info/provenance/plan#cgwrk | |
883 | 1 | |8 2\p |a cgwrk |d 20201028 |q DE-101 |u https://d-nb.info/provenance/plan#cgwrk |
Record in the search index
_version_ | 1804180697126010880 |
---|---|
any_adam_object | |
author_GND | (DE-588)1056981113 |
author_corporate | Interpreting, Explaining and Visualizing Deep Learning (Workshop) |
author_corporate_role | aut |
author_facet | Interpreting, Explaining and Visualizing Deep Learning (Workshop) |
author_sort | Interpreting, Explaining and Visualizing Deep Learning (Workshop) |
building | Verbundindex |
bvnumber | BV046261469 |
classification_rvk | ST 300 |
ctrlnum | (OCoLC)1121483582 (DE-599)BVBBV046261469 |
discipline | Informatik |
format | Book |
genre | 1\p (DE-588)1071861417 Konferenzschrift gnd-content |
genre_facet | Konferenzschrift |
id | DE-604.BV046261469 |
illustrated | Illustrated |
indexdate | 2024-07-10T08:39:52Z |
institution | BVB |
isbn | 9783030289539 3030289532 |
language | English |
oai_aleph_id | oai:aleph.bib-bvb.de:BVB01-031639682 |
oclc_num | 1121483582 |
open_access_boolean | |
owner | DE-19 DE-BY-UBM |
owner_facet | DE-19 DE-BY-UBM |
physical | xi, 438 pages illustrations (black and white, and colour) 24 cm |
publishDate | 2019 |
publishDateSearch | 2019 |
publishDateSort | 2019 |
publisher | Springer |
record_format | marc |
series | Lecture notes in artificial intelligence |
series2 | Lecture notes in artificial intelligence State-of-the-art survey LNCS Sublibrary: SL:7 - Artificial Intelligence |
spelling | Interpreting, Explaining and Visualizing Deep Learning (Workshop) (2017 Long Beach, California) Verfasser aut Explainable AI Interpreting, Explaining and Visualizing Deep Learning Wojciech Samek [and four others], (Eds.) Cham, Switzerland Springer [2019] xi, 438 pages illustrations (black and white, and colour) 24 cm txt rdacontent n rdamedia nc rdacarrier Lecture notes in artificial intelligence 11700 State-of-the-art survey LNCS Sublibrary: SL:7 - Artificial Intelligence "ISSN 1611-3349 (electronic)"--t.p. verso The development of "intelligent" systems that can take decisions and perform autonomously might lead to faster and more consistent decisions. A limiting factor for a broader adoption of AI technology is the inherent risks that come with giving up human control and oversight to "intelligent" machines. For sensitive tasks involving critical infrastructures and affecting human well-being or health, it is crucial to limit the possibility of improper, non-robust and unsafe decisions and actions. Before deploying an AI system, we see a strong need to validate its behavior, and thus establish guarantees that it will continue to perform as expected when deployed in a real-world environment. In pursuit of that objective, ways for humans to verify the agreement between the AI decision structure and their own ground-truth knowledge have been explored. Explainable AI (XAI) has developed as a subfield of AI, focused on exposing complex AI models to humans in a systematic and interpretable manner. The 22 chapters included in this book provide a timely snapshot of algorithms, theory, and applications of interpretable and explainable AI and AI techniques that have been proposed recently reflecting the current discourse in this field and providing directions of future development. The book is organized in six parts: towards AI transparency; methods for interpreting AI systems; explaining the decisions of AI systems; evaluating interpretability and explanations; applications of explainable AI; and software for explainable AI. -- Visualisierung (DE-588)4188417-6 gnd rswk-swf Künstliche Intelligenz (DE-588)4033447-8 gnd rswk-swf Deep learning (DE-588)1135597375 gnd rswk-swf Neural networks (Computer science) / Congresses Artificial intelligence / Congresses Artificial intelligence Neural networks (Computer science) Conference papers and proceedings 1\p (DE-588)1071861417 Konferenzschrift gnd-content Künstliche Intelligenz (DE-588)4033447-8 s Deep learning (DE-588)1135597375 s Visualisierung (DE-588)4188417-6 s 2\p DE-604 Samek, Wojciech Sonstige (DE-588)1056981113 oth NIPS (Conference) 2017 Long Beach, California) Sonstige oth Lecture notes in artificial intelligence 11700 (DE-604)BV025243459 11700 http://link.springer.com BLDSS 1850-9999 1\p cgwrk 20201028 DE-101 https://d-nb.info/provenance/plan#cgwrk 2\p cgwrk 20201028 DE-101 https://d-nb.info/provenance/plan#cgwrk |
spellingShingle | Explainable AI Interpreting, Explaining and Visualizing Deep Learning Lecture notes in artificial intelligence Visualisierung (DE-588)4188417-6 gnd Künstliche Intelligenz (DE-588)4033447-8 gnd Deep learning (DE-588)1135597375 gnd |
subject_GND | (DE-588)4188417-6 (DE-588)4033447-8 (DE-588)1135597375 (DE-588)1071861417 |
title | Explainable AI Interpreting, Explaining and Visualizing Deep Learning |
title_auth | Explainable AI Interpreting, Explaining and Visualizing Deep Learning |
title_exact_search | Explainable AI Interpreting, Explaining and Visualizing Deep Learning |
title_full | Explainable AI Interpreting, Explaining and Visualizing Deep Learning Wojciech Samek [and four others], (Eds.) |
title_fullStr | Explainable AI Interpreting, Explaining and Visualizing Deep Learning Wojciech Samek [and four others], (Eds.) |
title_full_unstemmed | Explainable AI Interpreting, Explaining and Visualizing Deep Learning Wojciech Samek [and four others], (Eds.) |
title_short | Explainable AI |
title_sort | explainable ai interpreting explaining and visualizing deep learning |
title_sub | Interpreting, Explaining and Visualizing Deep Learning |
topic | Visualisierung (DE-588)4188417-6 gnd Künstliche Intelligenz (DE-588)4033447-8 gnd Deep learning (DE-588)1135597375 gnd |
topic_facet | Visualisierung Künstliche Intelligenz Deep learning Konferenzschrift |
url | http://link.springer.com |
volume_link | (DE-604)BV025243459 |
work_keys_str_mv | AT interpretingexplainingandvisualizingdeeplearningworkshop explainableaiinterpretingexplainingandvisualizingdeeplearning AT samekwojciech explainableaiinterpretingexplainingandvisualizingdeeplearning AT nipsconference explainableaiinterpretingexplainingandvisualizingdeeplearning |