Transparency and interpretability for learned representations of artificial neural networks
Saved in:
Main author: | Meyes, Richard 1989- |
---|---|
Format: | Book |
Language: | English |
Published: | Wiesbaden : Springer Vieweg, [2022] |
Edition: | 1st edition 2023 |
Series: | Research |
Subjects: | Erklärbare künstliche Intelligenz ; Neuronales Netz |
Online access: | Publisher's description (Inhaltstext) ; http://www.springer.com/ ; Table of contents (Inhaltsverzeichnis) |
Physical description: | Illustrations, diagrams ; 21 cm x 14.8 cm |
ISBN: | 9783658400033 ; 365840003X |
Internal format (MARC)
LEADER | | | 00000nam a22000008c 4500
---|---|---|---|
001 | | | BV049452636
003 | | | DE-604
005 | | | 20240716
007 | | | t|
008 | | | 231205s2022 gw a||| |||| 00||| eng d
016 | 7 | | |a 1271554054 |2 DE-101
020 | | | |a 9783658400033 |c Festeinband : circa EUR 80.24 (DE) (freier Preis), circa EUR 82.49 (AT) (freier Preis), circa CHF 88.50 (freier Preis), circa EUR 74.99 |9 978-3-658-40003-3
020 | | | |a 365840003X |9 3-658-40003-X
035 | | | |a (OCoLC)1349705092
035 | | | |a (DE-599)DNB1271554054
040 | | | |a DE-604 |b ger |e rda
041 | 0 | | |a eng
044 | | | |a gw |c XA-DE-HE
049 | | | |a DE-83
084 | | | |a ST 301 |0 (DE-625)143651: |2 rvk
084 | | | |8 1\p |a 004 |2 23sdnb
100 | 1 | | |a Meyes, Richard |d 1989- |e Verfasser |0 (DE-588)1276449453 |4 aut
245 | 1 | 0 | |a Transparency and interpretability for learned representations of artificial neural networks |c Richard Meyes
263 | | | |a 202304
264 | | 1 | |a Wiesbaden |b Springer Vieweg |c [2022]
264 | | 4 | |c © 2022
300 | | | |b Illustrationen, Diagramme |c 21 cm x 14.8 cm
336 | | | |b txt |2 rdacontent
337 | | | |b n |2 rdamedia
338 | | | |b nc |2 rdacarrier
490 | 0 | | |a Research
650 | 0 | 7 | |a Erklärbare künstliche Intelligenz |0 (DE-588)1263068472 |2 gnd |9 rswk-swf
650 | 0 | 7 | |a Neuronales Netz |0 (DE-588)4226127-2 |2 gnd |9 rswk-swf
653 | | | |a Transparency
653 | | | |a Interpretability
653 | | | |a Explainability
653 | | | |a Learned Representation
653 | | | |a XAI
653 | | | |a Explainable AI
653 | | | |a Artificial Neural Networks
653 | | | |a Deep Learning
653 | | | |a Digital Transformation
653 | | | |a Neuroscience
689 | 0 | 0 | |a Neuronales Netz |0 (DE-588)4226127-2 |D s
689 | 0 | 1 | |a Erklärbare künstliche Intelligenz |0 (DE-588)1263068472 |D s
689 | 0 | | |5 DE-604
710 | 2 | | |a Springer Fachmedien Wiesbaden |0 (DE-588)1043386068 |4 pbl
776 | 0 | 8 | |i Erscheint auch als |n Online-Ausgabe |z 9783658400040
856 | 4 | 2 | |m X:MVB |q text/html |u http://deposit.dnb.de/cgi-bin/dokserv?id=810da808ab594ca7819a0d5c28ee39a4&prov=M&dok_var=1&dok_ext=htm |3 Inhaltstext
856 | 4 | 2 | |m X:MVB |u http://www.springer.com/
856 | 4 | 2 | |m DNB Datenaustausch |q application/pdf |u http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=034798497&sequence=000001&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA |3 Inhaltsverzeichnis
883 | 1 | | |8 1\p |a vlb |d 20221029 |q DE-101 |u https://d-nb.info/provenance/plan#vlb
943 | 1 | | |a oai:aleph.bib-bvb.de:BVB01-034798497
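The fields above follow the standard MARC 21 bibliographic layout (tag, indicators, subfields). As a rough illustration only, and not part of the catalogue record itself, the sketch below shows how such a record could be read with the Python library pymarc; the file name "BV049452636.mrc" is a hypothetical local export of this record in binary MARC format.

# Illustrative sketch, assuming a local binary MARC export of this record.
from pymarc import MARCReader

with open("BV049452636.mrc", "rb") as fh:
    for record in MARCReader(fh):
        if record is None:
            # pymarc yields None for records it could not parse
            continue
        # 245: title statement; 100: main entry (personal name)
        title = record["245"]["a"] if record["245"] else None
        author = record["100"]["a"] if record["100"] else None
        # 020 $a: ISBNs (repeatable field)
        isbns = [sf for f in record.get_fields("020") for sf in f.get_subfields("a")]
        # 650 $a: GND subject headings, e.g. "Erklärbare künstliche Intelligenz"
        subjects = [sf for f in record.get_fields("650") for sf in f.get_subfields("a")]
        print(title, author, isbns, subjects)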
Record in the search index
_version_ | 1818330312556412928 |
adam_text | Contents
1 Introduction . 1
1.1 Object of Investigation . 2
1.2 Research Questions . 3
1.3 Structure of the Thesis . 6
2 Background & Foundations . 9
2.1 A Short History of AI Research . 9
2.1.1 The Early Years . 10
2.1.2 The Golden Ages . 11
2.1.3 The AI Winter . 13
2.1.4 The AI Renaissance . 13
2.2 The Modern Era of AI Research . 14
2.2.1 Deep Learning for Computer Vision . 15
2.2.2 Computer Vision beyond the ILSVRC . 17
2.2.3 From Supervised Learning to Reinforcement Learning . 18
2.2.4 Deep Reinforcement Learning Breakthroughs for Video and Board Games . 19
2.2.5 Tackling Games with Imperfect Information . 23
2.3 Towards Research on Transparency & Interpretability . 24
2.3.1 A Shift of Paradigm: From Optimizing to Understanding . 25
2.3.2 Inspirations from Neuroscience Research . 27
3 Methods and Terminology . 31
3.1 Learned Representations . 31
3.1.1 Investigating Learned Representations . 33
3.1.1.1 Statistical Measures . 34
3.1.1.2 Transformations & Embeddings . 35
3.1.2 Visualizing Structure of the Learned Representations . 46
3.1.2.1 Magnitude of Neuron Activations . 47
3.1.2.2 Selectivity of Neuron Activations . 47
3.1.2.3 Ablations of Individual Neurons . 48
3.1.2.4 Gini Importance . 51
3.2 Delimitation of the Object of Investigation . 51
3.2.1 Transparency for Computer Vision Models . 52
3.2.2 Transparency for Motion Control Models . 53
3.2.3 Transfer to an Industrial Application Scenario . 55
4 Related Work . 57
4.1 Relationship between a Network's Input Features and Its Output . 59
4.2 Visualization of Network Properties and Graphical User Interfaces . 61
4.3 Investigating the Importance of Individual Network Components . 64
4.3.1 Miscellaneous Contributions . 65
4.3.2 Ablation Studies . 67
4.3.3 Reverse Engineering of Neural Networks . 69
5 Research Studies . 71
5.1 Investigating Learned Representations in Computer Vision . 75
5.1.1 Research Study 1: Characterizing Single Neurons in a Shallow MLP . 75
5.1.1.1 Key Contributions of the Study . 75
5.1.1.2 Methods and Experimental Design . 77
5.1.1.3 Results . 78
5.1.1.4 Summary and Contribution of the Results to the Research Questions . 88
5.1.2 Research Study 2: Network Ablations in a Deep Neural Network . 91
5.1.2.1 Key Contributions of the Study . 91
5.1.2.2 Methods and Experimental Design . 92
5.1.2.3 Results . 93
5.1.2.4 Summary and Contribution of the Results to the Research Questions . 98
5.1.3 Research Study 3: Functional Neuron Populations in Custom-Made CNNs . 100
5.1.3.1 Key Contributions of the Study . 100
5.1.3.2 Methods and Experimental Design . 101
5.1.3.3 Results . 105
5.1.3.4 Summary and Contribution of the Results to the Research Questions . 113
5.2 Investigating Learned Representations in Motor Control . 114
5.2.1 Research Study 4: Influence of Network Ablations on Activation Patterns . 115
5.2.1.1 Key Contributions of the Study . 116
5.2.1.2 Methods and Experimental Design . 117
5.2.1.3 Results . 119
5.2.1.4 Summary and Contribution of the Results to the Research Questions . 128
5.2.2 Research Study 5: Relation between Neural Activations and Agent Behavior . 129
5.2.2.1 Key Contributions of the Study . 130
5.2.2.2 Methods and Experimental Design . 131
5.2.2.3 Results . 135
5.2.2.4 Summary and Contribution of the Results to the Research Questions . 147
6 Transfer Studies . 149
6.1 Transfer Study 1: Network Ablations for Deep Drawing . 150
6.1.1 Key Contributions of the Study . 151
6.1.2 Methods and Experimental Design . 151
6.1.3 Results . 159
6.1.4 Summary and Contribution of the Results to the Research Questions . 164
6.2 Transfer Study 2: Attention Mechanisms for Deep Drawing . 166
6.2.1 Key Contributions of the Study . 167
6.2.2 Methods and Experimental Design . 167
6.2.3 Results . 173
6.2.4 Summary and Contribution of the Results to the Research Questions . 181
7 Critical Reflection & Outlook . 183
7.1 Reflection of Results & Contribution to Research Questions . 183
7.1.1 Research Question 1 . 184
7.1.2 Research Question 2 . 185
7.1.3 Research Question 3 . 187
7.1.4 Research Question 4 . 189
7.2 Future Research Directions . 191
8 Summary . 195
References . 197 |
any_adam_object | 1 |
any_adam_object_boolean | 1 |
author | Meyes, Richard 1989- |
author_GND | (DE-588)1276449453 |
author_facet | Meyes, Richard 1989- |
author_role | aut |
author_sort | Meyes, Richard 1989- |
author_variant | r m rm |
building | Verbundindex |
bvnumber | BV049452636 |
classification_rvk | ST 301 |
ctrlnum | (OCoLC)1349705092 (DE-599)DNB1271554054 |
discipline | Informatik |
discipline_str_mv | Informatik |
edition | 1st edition 2023 |
format | Book |
id | DE-604.BV049452636 |
illustrated | Illustrated |
index_date | 2024-07-03T23:13:17Z |
indexdate | 2024-12-13T13:01:57Z |
institution | BVB |
institution_GND | (DE-588)1043386068 |
isbn | 9783658400033 365840003X |
language | English |
oai_aleph_id | oai:aleph.bib-bvb.de:BVB01-034798497 |
oclc_num | 1349705092 |
open_access_boolean | |
owner | DE-83 |
owner_facet | DE-83 |
physical | Illustrationen, Diagramme 21 cm x 14.8 cm |
publishDate | 2022 |
publishDateSearch | 2022 |
publishDateSort | 2022 |
publisher | Springer Vieweg |
record_format | marc |
series2 | Research |
spelling | Meyes, Richard 1989- Verfasser (DE-588)1276449453 aut Transparency and interpretability for learned representations of artificial neural networks Richard Meyes 202304 Wiesbaden Springer Vieweg [2022] © 2022 Illustrationen, Diagramme 21 cm x 14.8 cm txt rdacontent n rdamedia nc rdacarrier Research Erklärbare künstliche Intelligenz (DE-588)1263068472 gnd rswk-swf Neuronales Netz (DE-588)4226127-2 gnd rswk-swf Transparency Interpretability Explainability Learned Representation XAI Explainable AI Artificial Neural Networks Deep Learning Digital Transformation Neuroscience Neuronales Netz (DE-588)4226127-2 s Erklärbare künstliche Intelligenz (DE-588)1263068472 s DE-604 Springer Fachmedien Wiesbaden (DE-588)1043386068 pbl Erscheint auch als Online-Ausgabe 9783658400040 X:MVB text/html http://deposit.dnb.de/cgi-bin/dokserv?id=810da808ab594ca7819a0d5c28ee39a4&prov=M&dok_var=1&dok_ext=htm Inhaltstext X:MVB http://www.springer.com/ DNB Datenaustausch application/pdf http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=034798497&sequence=000001&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA Inhaltsverzeichnis 1\p vlb 20221029 DE-101 https://d-nb.info/provenance/plan#vlb |
spellingShingle | Meyes, Richard 1989- Transparency and interpretability for learned representations of artificial neural networks Erklärbare künstliche Intelligenz (DE-588)1263068472 gnd Neuronales Netz (DE-588)4226127-2 gnd |
subject_GND | (DE-588)1263068472 (DE-588)4226127-2 |
title | Transparency and interpretability for learned representations of artificial neural networks |
title_auth | Transparency and interpretability for learned representations of artificial neural networks |
title_exact_search | Transparency and interpretability for learned representations of artificial neural networks |
title_exact_search_txtP | Transparency and Interpretability for Learned Representations of Artificial Neural Networks |
title_full | Transparency and interpretability for learned representations of artificial neural networks Richard Meyes |
title_fullStr | Transparency and interpretability for learned representations of artificial neural networks Richard Meyes |
title_full_unstemmed | Transparency and interpretability for learned representations of artificial neural networks Richard Meyes |
title_short | Transparency and interpretability for learned representations of artificial neural networks |
title_sort | transparency and interpretability for learned representations of artificial neural networks |
topic | Erklärbare künstliche Intelligenz (DE-588)1263068472 gnd Neuronales Netz (DE-588)4226127-2 gnd |
topic_facet | Erklärbare künstliche Intelligenz Neuronales Netz |
url | http://deposit.dnb.de/cgi-bin/dokserv?id=810da808ab594ca7819a0d5c28ee39a4&prov=M&dok_var=1&dok_ext=htm http://www.springer.com/ http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=034798497&sequence=000001&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA |
work_keys_str_mv | AT meyesrichard transparencyandinterpretabilityforlearnedrepresentationsofartificialneuralnetworks AT springerfachmedienwiesbaden transparencyandinterpretabilityforlearnedrepresentationsofartificialneuralnetworks |