Neural networks and learning machines
Saved in:
Previous title: | Haykin, Simon S.: Neural networks |
Main author: | Haykin, Simon S. 1931- |
Format: | Book |
Language: | English |
Published: | Upper Saddle River [u.a.]: Pearson, 2009 |
Edition: | 3. ed. |
Series: | Pearson International Edition |
Subjects: | Neural networks (Computer science); Lernendes System; Neuronales Netz; Maschinelles Lernen |
Online access: | Table of contents |
Description: | 934 p., ill., diagrams |
ISBN: | 0131293761; 9780131293762 |
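The ISBN field above gives the same number in both its ISBN-10 and ISBN-13 forms. As a quick plausibility check on the record, the check digits of both forms can be verified with the standard weighted-sum rules; the following is a minimal sketch in Python (standard library only, not part of the catalogue data):

```python
def isbn10_valid(isbn: str) -> bool:
    """ISBN-10: digits weighted 10..1 must sum to a multiple of 11 ('X' counts as 10)."""
    digits = [10 if c in "Xx" else int(c) for c in isbn if c not in "- "]
    return len(digits) == 10 and sum(w * d for w, d in zip(range(10, 0, -1), digits)) % 11 == 0


def isbn13_valid(isbn: str) -> bool:
    """ISBN-13: digits weighted 1,3,1,3,... must sum to a multiple of 10."""
    digits = [int(c) for c in isbn if c not in "- "]
    return len(digits) == 13 and sum(d * (3 if i % 2 else 1) for i, d in enumerate(digits)) % 10 == 0


# Both forms recorded for this title pass their respective checks.
assert isbn10_valid("0-13-129376-1")
assert isbn13_valid("978-0-13-129376-2")
```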
Internal format
MARC
LEADER | 00000nam a2200000 c 4500 | ||
---|---|---|---|
001 | BV035380361 | ||
003 | DE-604 | ||
005 | 20200924 | ||
007 | t | ||
008 | 090319s2009 ad|| |||| 00||| eng d | ||
020 | |a 0131293761 |9 0-13-129376-1 | ||
020 | |a 9780131293762 |9 978-0-13-129376-2 | ||
035 | |a (OCoLC)318031115 | ||
035 | |a (DE-599)BVBBV035380361 | ||
040 | |a DE-604 |b ger |e rakwb | ||
041 | 0 | |a eng | |
049 | |a DE-945 |a DE-824 |a DE-91G |a DE-706 |a DE-703 |a DE-92 |a DE-473 |a DE-83 |a DE-11 |a DE-525 |a DE-2070s |a DE-19 |a DE-573 |a DE-384 |a DE-634 | ||
050 | 0 | |a QA76.87 | |
082 | 0 | |a 006.32 |2 22 | |
084 | |a ST 200 |0 (DE-625)143611: |2 rvk | ||
084 | |a ST 300 |0 (DE-625)143650: |2 rvk | ||
084 | |a ST 301 |0 (DE-625)143651: |2 rvk | ||
084 | |a DAT 717f |2 stub | ||
100 | 1 | |a Haykin, Simon S. |d 1931- |e Verfasser |0 (DE-588)128698497 |4 aut | |
245 | 1 | 0 | |a Neural networks and learning machines |c Simon Haykin |
250 | |a 3. ed. | ||
264 | 1 | |a Upper Saddle River [u.a.] |b Pearson |c 2009 | |
300 | |a 934 S. |b Ill., graph. Darst. | ||
336 | |b txt |2 rdacontent | ||
337 | |b n |2 rdamedia | ||
338 | |b nc |2 rdacarrier | ||
490 | 0 | |a Pearson International Edition | |
650 | 4 | |a Neural networks (Computer science) | |
650 | 4 | |a Neural networks (Computer science) |v Problems, exercises, etc | |
650 | 0 | 7 | |a Lernendes System |0 (DE-588)4120666-6 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Neuronales Netz |0 (DE-588)4226127-2 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Maschinelles Lernen |0 (DE-588)4193754-5 |2 gnd |9 rswk-swf |
655 | 7 | |8 1\p |0 (DE-588)4151278-9 |a Einführung |2 gnd-content | |
689 | 0 | 0 | |a Neuronales Netz |0 (DE-588)4226127-2 |D s |
689 | 0 | 1 | |a Lernendes System |0 (DE-588)4120666-6 |D s |
689 | 0 | 2 | |a Maschinelles Lernen |0 (DE-588)4193754-5 |D s |
689 | 0 | |5 DE-604 | |
780 | 0 | 0 | |i 2. Auflage |a Haykin, Simon S. |t Neural networks |
856 | 4 | 2 | |m Digitalisierung UB Bayreuth |q application/pdf |u http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=017184625&sequence=000002&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA |3 Inhaltsverzeichnis |
999 | |a oai:aleph.bib-bvb.de:BVB01-017184625 | ||
883 | 1 | |8 1\p |a cgwrk |d 20201028 |q DE-101 |u https://d-nb.info/provenance/plan#cgwrk |
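The tagged fields above are also available in machine-readable form: the fullrecord field of the search index entry below carries the same data as MARCXML (namespace http://www.loc.gov/MARC21/slim). As a minimal sketch, assuming that XML has been saved to a local file named record.xml (a hypothetical path), the title, author, and ISBNs can be pulled out with the Python standard library:

```python
import xml.etree.ElementTree as ET

NS = {"marc": "http://www.loc.gov/MARC21/slim"}  # namespace declared in the fullrecord field

# record.xml is assumed to hold the MARCXML <collection> shown in the fullrecord field.
record = ET.parse("record.xml").find(".//marc:record", NS)

def subfields(tag: str, code: str):
    """Yield every $code value of the datafields with the given MARC tag."""
    for field in record.findall(f"marc:datafield[@tag='{tag}']", NS):
        for sf in field.findall(f"marc:subfield[@code='{code}']", NS):
            yield sf.text

print("Title :", next(subfields("245", "a")))   # Neural networks and learning machines
print("Author:", next(subfields("100", "a")))   # Haykin, Simon S.
print("ISBNs :", list(subfields("020", "a")))   # ['0131293761', '9780131293762']
```

The same pattern extends to any other tag/subfield pair, for example the GND subject headings in the 650 fields.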
Record in the search index
_version_ | 1804138709658894336 |
---|---|
adam_text | Contents
Preface 10
Introduction 1
  1. What Is a Neural Network? 31
  2. The Human Brain 36
  3. Models of a Neuron 40
  4. Neural Networks Viewed As Directed Graphs 45
  5. Feedback 48
  6. Network Architectures 51
  7. Knowledge Representation 54
  8. Learning Processes 64
  9. Learning Tasks 68
  10. Concluding Remarks 75
  Notes and References 76
Chapter 1 Rosenblatt's Perceptron 77
  1.1 Introduction 77
  1.2 Perceptron 78
  1.3 The Perceptron Convergence Theorem 80
  1.4 Relation Between the Perceptron and Bayes Classifier for a Gaussian Environment 85
  1.5 Computer Experiment: Pattern Classification 90
  1.6 The Batch Perceptron Algorithm 92
  1.7 Summary and Discussion 95
  Notes and References 96
  Problems 96
Chapter 2 Model Building through Regression 98
  2.1 Introduction 98
  2.2 Linear Regression Model: Preliminary Considerations 99
  2.3 Maximum a Posteriori Estimation of the Parameter Vector 101
  2.4 Relationship Between Regularized Least-Squares Estimation and MAP Estimation 106
  2.5 Computer Experiment: Pattern Classification 107
  2.6 The Minimum-Description-Length Principle 109
  2.7 Finite Sample-Size Considerations 112
  2.8 The Instrumental-Variables Method 116
  2.9 Summary and Discussion 118
  Notes and References 119
  Problems 119
Chapter 3 The Least-Mean-Square Algorithm 121
  3.1 Introduction 121
  3.2 Filtering Structure of the LMS Algorithm 122
  3.3 Unconstrained Optimization: A Review 124
  3.4 The Wiener Filter 130
  3.5 The Least-Mean-Square Algorithm 132
  3.6 Markov Model Portraying the Deviation of the LMS Algorithm from the Wiener Filter 134
  3.7 The Langevin Equation: Characterization of Brownian Motion 136
  3.8 Kushner's Direct-Averaging Method 137
  3.9 Statistical LMS Learning Theory for Small Learning-Rate Parameter 138
  3.10 Computer Experiment I: Linear Prediction 140
  3.11 Computer Experiment II: Pattern Classification 142
  3.12 Virtues and Limitations of the LMS Algorithm 143
  3.13 Learning-Rate Annealing Schedules 145
  3.14 Summary and Discussion 147
  Notes and References 148
  Problems 149
Chapter 4 Multilayer Perceptrons 152
  4.1 Introduction 153
  4.2 Some Preliminaries 154
  4.3 Batch Learning and On-Line Learning 156
  4.4 The Back-Propagation Algorithm 159
  4.5 XOR Problem 171
  4.6 Heuristics for Making the Back-Propagation Algorithm Perform Better 174
  4.7 Computer Experiment: Pattern Classification 180
  4.8 Back Propagation and Differentiation 183
  4.9 The Hessian and Its Role in On-Line Learning 185
  4.10 Optimal Annealing and Adaptive Control of the Learning Rate 187
  4.11 Generalization 194
  4.12 Approximations of Functions 196
  4.13 Cross-Validation 201
  4.14 Complexity Regularization and Network Pruning 205
  4.15 Virtues and Limitations of Back-Propagation Learning 210
  4.16 Supervised Learning Viewed as an Optimization Problem 216
  4.17 Convolutional Networks 231
  4.18 Nonlinear Filtering 233
  4.19 Small-Scale Versus Large-Scale Learning Problems 239
  4.20 Summary and Discussion 247
  Notes and References 249
  Problems 251
Chapter 5 Kernel Methods and Radial-Basis Function Networks 258
  5.1 Introduction 258
  5.2 Cover's Theorem on the Separability of Patterns 259
  5.3 The Interpolation Problem 264
  5.4 Radial-Basis-Function Networks 267
  5.5 K-Means Clustering 270
  5.6 Recursive Least-Squares Estimation of the Weight Vector 273
  5.7 Hybrid Learning Procedure for RBF Networks 277
  5.8 Computer Experiment: Pattern Classification 278
  5.9 Interpretations of the Gaussian Hidden Units 280
  5.10 Kernel Regression and Its Relation to RBF Networks 283
  5.11 Summary and Discussion 287
  Notes and References 289
  Problems 291
Chapter 6 Support Vector Machines 296
  6.1 Introduction 296
  6.2 Optimal Hyperplane for Linearly Separable Patterns 297
  6.3 Optimal Hyperplane for Nonseparable Patterns 304
  6.4 The Support Vector Machine Viewed as a Kernel Machine 309
  6.5 Design of Support Vector Machines 312
  6.6 XOR Problem 314
  6.7 Computer Experiment: Pattern Classification 317
  6.8 Regression: Robustness Considerations 317
  6.9 Optimal Solution of the Linear Regression Problem 321
  6.10 The Representer Theorem and Related Issues 324
  6.11 Summary and Discussion 330
  Notes and References 332
  Problems 335
Chapter 7 Regularization Theory 341
  7.1 Introduction 341
  7.2 Hadamard's Conditions for Well-Posedness 342
  7.3 Tikhonov's Regularization Theory 343
  7.4 Regularization Networks 354
  7.5 Generalized Radial-Basis-Function Networks 355
  7.6 The Regularized Least-Squares Estimator: Revisited 359
  7.7 Additional Notes of Interest on Regularization 363
  7.8 Estimation of the Regularization Parameter 364
  7.9 Semisupervised Learning 370
  7.10 Manifold Regularization: Preliminary Considerations 371
  7.11 Differentiable Manifolds 373
  7.12 Generalized Regularization Theory 376
  7.13 Spectral Graph Theory 378
  7.14 Generalized Representer Theorem 380
  7.15 Laplacian Regularized Least-Squares Algorithm 382
  7.16 Experiments on Pattern Classification Using Semisupervised Learning 384
  7.17 Summary and Discussion 387
  Notes and References 389
  Problems 391
Chapter 8 Principal-Components Analysis 395
  8.1 Introduction 395
  8.2 Principles of Self-Organization 396
  8.3 Self-Organized Feature Analysis 400
  8.4 Principal-Components Analysis: Perturbation Theory 401
  8.5 Hebbian-Based Maximum Eigenfilter 411
  8.6 Hebbian-Based Principal-Components Analysis 420
  8.7 Case Study: Image Coding 426
  8.8 Kernel Principal-Components Analysis 429
  8.9 Basic Issues Involved in the Coding of Natural Images 434
  8.10 Kernel Hebbian Algorithm 435
  8.11 Summary and Discussion 440
  Notes and References 443
  Problems 446
Chapter 9 Self-Organizing Maps 453
  9.1 Introduction 453
  9.2 Two Basic Feature-Mapping Models 454
  9.3 Self-Organizing Map 456
  9.4 Properties of the Feature Map 465
  9.5 Computer Experiments I: Disentangling Lattice Dynamics Using SOM 473
  9.6 Contextual Maps 475
  9.7 Hierarchical Vector Quantization 478
  9.8 Kernel Self-Organizing Map 482
  9.9 Computer Experiment II: Disentangling Lattice Dynamics Using Kernel SOM 490
  9.10 Relationship Between Kernel SOM and Kullback-Leibler Divergence 492
  9.11 Summary and Discussion 494
  Notes and References 496
  Problems 498
Chapter 10 Information-Theoretic Learning Models 503
  10.1 Introduction 504
  10.2 Entropy 505
  10.3 Maximum-Entropy Principle 509
  10.4 Mutual Information 512
  10.5 Kullback-Leibler Divergence 514
  10.6 Copulas 517
  10.7 Mutual Information as an Objective Function to Be Optimized 521
  10.8 Maximum Mutual Information Principle 522
  10.9 Infomax and Redundancy Reduction 527
  10.10 Spatially Coherent Features 529
  10.11 Spatially Incoherent Features 532
  10.12 Independent-Components Analysis 536
  10.13 Sparse Coding of Natural Images and Comparison with ICA Coding 542
  10.14 Natural-Gradient Learning for Independent-Components Analysis 544
  10.15 Maximum-Likelihood Estimation for Independent-Components Analysis 554
  10.16 Maximum-Entropy Learning for Blind Source Separation 557
  10.17 Maximization of Negentropy for Independent-Components Analysis 562
  10.18 Coherent Independent-Components Analysis 569
  10.19 Rate Distortion Theory and Information Bottleneck 577
  10.20 Optimal Manifold Representation of Data 581
  10.21 Computer Experiment: Pattern Classification 588
  10.22 Summary and Discussion 589
  Notes and References 592
  Problems 600
Chapter 11 Stochastic Methods Rooted in Statistical Mechanics 607
  11.1 Introduction 608
  11.2 Statistical Mechanics 608
  11.3 Markov Chains 610
  11.4 Metropolis Algorithm 619
  11.5 Simulated Annealing 622
  11.6 Gibbs Sampling 624
  11.7 Boltzmann Machine 626
  11.8 Logistic Belief Nets 632
  11.9 Deep Belief Nets 634
  11.10 Deterministic Annealing 638
  11.11 Analogy of Deterministic Annealing with Expectation-Maximization Algorithm 644
  11.12 Summary and Discussion 645
  Notes and References 647
  Problems 649
Chapter 12 Dynamic Programming 655
  12.1 Introduction 655
  12.2 Markov Decision Process 657
  12.3 Bellman's Optimality Criterion 659
  12.4 Policy Iteration 663
  12.5 Value Iteration 665
  12.6 Approximate Dynamic Programming: Direct Methods 670
  12.7 Temporal-Difference Learning 671
  12.8 Q-Learning 676
  12.9 Approximate Dynamic Programming: Indirect Methods 680
  12.10 Least-Squares Policy Evaluation 683
  12.11 Approximate Policy Iteration 688
  12.12 Summary and Discussion 691
  Notes and References 693
  Problems 696
Chapter 13 Neurodynamics 700
  13.1 Introduction 700
  13.2 Dynamic Systems 702
  13.3 Stability of Equilibrium States 706
  13.4 Attractors 712
  13.5 Neurodynamic Models 714
  13.6 Manipulation of Attractors as a Recurrent Network Paradigm 717
  13.7 Hopfield Model 718
  13.8 The Cohen-Grossberg Theorem 731
  13.9 Brain-State-In-A-Box Model 733
  13.10 Strange Attractors and Chaos 739
  13.11 Dynamic Reconstruction of a Chaotic Process 744
  13.12 Summary and Discussion 750
  Notes and References 752
  Problems 755
Chapter 14 Bayesian Filtering for State Estimation of Dynamic Systems 759
  14.1 Introduction 759
  14.2 State-Space Models 760
  14.3 Kalman Filters 764
  14.4 The Divergence Phenomenon and Square-Root Filtering 772
  14.5 The Extended Kalman Filter 778
  14.6 The Bayesian Filter 783
  14.7 Cubature Kalman Filter: Building on the Kalman Filter 787
  14.8 Particle Filters 793
  14.9 Computer Experiment: Comparative Evaluation of Extended Kalman and Particle Filters 803
  14.10 Kalman Filtering in Modeling of Brain Functions 805
  14.11 Summary and Discussion 808
  Notes and References 810
  Problems 812
Chapter 15 Dynamically Driven Recurrent Networks 818
  15.1 Introduction 818
  15.2 Recurrent Network Architectures 819
  15.3 Universal Approximation Theorem 825
  15.4 Controllability and Observability 827
  15.5 Computational Power of Recurrent Networks 832
  15.6 Learning Algorithms 834
  15.7 Back Propagation Through Time 836
  15.8 Real-Time Recurrent Learning 840
  15.9 Vanishing Gradients in Recurrent Networks 846
  15.10 Supervised Training Framework for Recurrent Networks Using Nonlinear Sequential State Estimators 850
  15.11 Computer Experiment: Dynamic Reconstruction of Mackey-Glass Attractor 857
  15.12 Adaptivity Considerations 859
  15.13 Case Study: Model Reference Applied to Neurocontrol 861
  15.14 Summary and Discussion 863
  Notes and References 867
  Problems 870
Bibliography 875
Index 916
|
any_adam_object | 1 |
author | Haykin, Simon S. 1931- |
author_GND | (DE-588)128698497 |
author_facet | Haykin, Simon S. 1931- |
author_role | aut |
author_sort | Haykin, Simon S. 1931- |
author_variant | s s h ss ssh |
building | Verbundindex |
bvnumber | BV035380361 |
callnumber-first | Q - Science |
callnumber-label | QA76 |
callnumber-raw | QA76.87 |
callnumber-search | QA76.87 |
callnumber-sort | QA 276.87 |
callnumber-subject | QA - Mathematics |
classification_rvk | ST 200 ST 300 ST 301 |
classification_tum | DAT 717f |
ctrlnum | (OCoLC)318031115 (DE-599)BVBBV035380361 |
dewey-full | 006.32 |
dewey-hundreds | 000 - Computer science, information, general works |
dewey-ones | 006 - Special computer methods |
dewey-raw | 006.32 |
dewey-search | 006.32 |
dewey-sort | 16.32 |
dewey-tens | 000 - Computer science, information, general works |
discipline | Informatik |
edition | 3. ed. |
format | Book |
fullrecord | <?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>02178nam a2200517 c 4500</leader><controlfield tag="001">BV035380361</controlfield><controlfield tag="003">DE-604</controlfield><controlfield tag="005">20200924 </controlfield><controlfield tag="007">t</controlfield><controlfield tag="008">090319s2009 ad|| |||| 00||| eng d</controlfield><datafield tag="020" ind1=" " ind2=" "><subfield code="a">0131293761</subfield><subfield code="9">0-13-129376-1</subfield></datafield><datafield tag="020" ind1=" " ind2=" "><subfield code="a">9780131293762</subfield><subfield code="9">978-0-13-129376-2</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(OCoLC)318031115</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-599)BVBBV035380361</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-604</subfield><subfield code="b">ger</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1="0" ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="049" ind1=" " ind2=" "><subfield code="a">DE-945</subfield><subfield code="a">DE-824</subfield><subfield code="a">DE-91G</subfield><subfield code="a">DE-706</subfield><subfield code="a">DE-703</subfield><subfield code="a">DE-92</subfield><subfield code="a">DE-473</subfield><subfield code="a">DE-83</subfield><subfield code="a">DE-11</subfield><subfield code="a">DE-525</subfield><subfield code="a">DE-2070s</subfield><subfield code="a">DE-19</subfield><subfield code="a">DE-573</subfield><subfield code="a">DE-384</subfield><subfield code="a">DE-634</subfield></datafield><datafield tag="050" ind1=" " ind2="0"><subfield code="a">QA76.87</subfield></datafield><datafield tag="082" ind1="0" ind2=" "><subfield code="a">006.32</subfield><subfield code="2">22</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">ST 200</subfield><subfield code="0">(DE-625)143611:</subfield><subfield code="2">rvk</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">ST 300</subfield><subfield code="0">(DE-625)143650:</subfield><subfield code="2">rvk</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">ST 301</subfield><subfield code="0">(DE-625)143651:</subfield><subfield code="2">rvk</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">DAT 717f</subfield><subfield code="2">stub</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Haykin, Simon S.</subfield><subfield code="d">1931-</subfield><subfield code="e">Verfasser</subfield><subfield code="0">(DE-588)128698497</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Neural networks and learning machines</subfield><subfield code="c">Simon Haykin</subfield></datafield><datafield tag="250" ind1=" " ind2=" "><subfield code="a">3. ed.</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="a">Upper Saddle River [u.a.]</subfield><subfield code="b">Pearson</subfield><subfield code="c">2009</subfield></datafield><datafield tag="300" ind1=" " ind2=" "><subfield code="a">934 S.</subfield><subfield code="b">Ill., graph. 
Darst.</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="b">n</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="b">nc</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="490" ind1="0" ind2=" "><subfield code="a">Pearson International Edition</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Neural networks (Computer science)</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Neural networks (Computer science)</subfield><subfield code="v">Problems, exercises, etc</subfield></datafield><datafield tag="650" ind1="0" ind2="7"><subfield code="a">Lernendes System</subfield><subfield code="0">(DE-588)4120666-6</subfield><subfield code="2">gnd</subfield><subfield code="9">rswk-swf</subfield></datafield><datafield tag="650" ind1="0" ind2="7"><subfield code="a">Neuronales Netz</subfield><subfield code="0">(DE-588)4226127-2</subfield><subfield code="2">gnd</subfield><subfield code="9">rswk-swf</subfield></datafield><datafield tag="650" ind1="0" ind2="7"><subfield code="a">Maschinelles Lernen</subfield><subfield code="0">(DE-588)4193754-5</subfield><subfield code="2">gnd</subfield><subfield code="9">rswk-swf</subfield></datafield><datafield tag="655" ind1=" " ind2="7"><subfield code="8">1\p</subfield><subfield code="0">(DE-588)4151278-9</subfield><subfield code="a">Einführung</subfield><subfield code="2">gnd-content</subfield></datafield><datafield tag="689" ind1="0" ind2="0"><subfield code="a">Neuronales Netz</subfield><subfield code="0">(DE-588)4226127-2</subfield><subfield code="D">s</subfield></datafield><datafield tag="689" ind1="0" ind2="1"><subfield code="a">Lernendes System</subfield><subfield code="0">(DE-588)4120666-6</subfield><subfield code="D">s</subfield></datafield><datafield tag="689" ind1="0" ind2="2"><subfield code="a">Maschinelles Lernen</subfield><subfield code="0">(DE-588)4193754-5</subfield><subfield code="D">s</subfield></datafield><datafield tag="689" ind1="0" ind2=" "><subfield code="5">DE-604</subfield></datafield><datafield tag="780" ind1="0" ind2="0"><subfield code="i">2. Auflage</subfield><subfield code="a">Haykin, Simon S.</subfield><subfield code="t">Neural networks</subfield></datafield><datafield tag="856" ind1="4" ind2="2"><subfield code="m">Digitalisierung UB Bayreuth</subfield><subfield code="q">application/pdf</subfield><subfield code="u">http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=017184625&sequence=000002&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA</subfield><subfield code="3">Inhaltsverzeichnis</subfield></datafield><datafield tag="999" ind1=" " ind2=" "><subfield code="a">oai:aleph.bib-bvb.de:BVB01-017184625</subfield></datafield><datafield tag="883" ind1="1" ind2=" "><subfield code="8">1\p</subfield><subfield code="a">cgwrk</subfield><subfield code="d">20201028</subfield><subfield code="q">DE-101</subfield><subfield code="u">https://d-nb.info/provenance/plan#cgwrk</subfield></datafield></record></collection> |
genre | 1\p (DE-588)4151278-9 Einführung gnd-content |
genre_facet | Einführung |
id | DE-604.BV035380361 |
illustrated | Illustrated |
indexdate | 2024-07-09T21:32:30Z |
institution | BVB |
isbn | 0131293761 9780131293762 |
language | English |
oai_aleph_id | oai:aleph.bib-bvb.de:BVB01-017184625 |
oclc_num | 318031115 |
open_access_boolean | |
owner | DE-945 DE-824 DE-91G DE-BY-TUM DE-706 DE-703 DE-92 DE-473 DE-BY-UBG DE-83 DE-11 DE-525 DE-2070s DE-19 DE-BY-UBM DE-573 DE-384 DE-634 |
owner_facet | DE-945 DE-824 DE-91G DE-BY-TUM DE-706 DE-703 DE-92 DE-473 DE-BY-UBG DE-83 DE-11 DE-525 DE-2070s DE-19 DE-BY-UBM DE-573 DE-384 DE-634 |
physical | 934 S. Ill., graph. Darst. |
publishDate | 2009 |
publishDateSearch | 2009 |
publishDateSort | 2009 |
publisher | Pearson |
record_format | marc |
series2 | Pearson International Edition |
spelling | Haykin, Simon S. 1931- Verfasser (DE-588)128698497 aut Neural networks and learning machines Simon Haykin 3. ed. Upper Saddle River [u.a.] Pearson 2009 934 S. Ill., graph. Darst. txt rdacontent n rdamedia nc rdacarrier Pearson International Edition Neural networks (Computer science) Neural networks (Computer science) Problems, exercises, etc Lernendes System (DE-588)4120666-6 gnd rswk-swf Neuronales Netz (DE-588)4226127-2 gnd rswk-swf Maschinelles Lernen (DE-588)4193754-5 gnd rswk-swf 1\p (DE-588)4151278-9 Einführung gnd-content Neuronales Netz (DE-588)4226127-2 s Lernendes System (DE-588)4120666-6 s Maschinelles Lernen (DE-588)4193754-5 s DE-604 2. Auflage Haykin, Simon S. Neural networks Digitalisierung UB Bayreuth application/pdf http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=017184625&sequence=000002&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA Inhaltsverzeichnis 1\p cgwrk 20201028 DE-101 https://d-nb.info/provenance/plan#cgwrk |
spellingShingle | Haykin, Simon S. 1931- Neural networks and learning machines Neural networks (Computer science) Neural networks (Computer science) Problems, exercises, etc Lernendes System (DE-588)4120666-6 gnd Neuronales Netz (DE-588)4226127-2 gnd Maschinelles Lernen (DE-588)4193754-5 gnd |
subject_GND | (DE-588)4120666-6 (DE-588)4226127-2 (DE-588)4193754-5 (DE-588)4151278-9 |
title | Neural networks and learning machines |
title_auth | Neural networks and learning machines |
title_exact_search | Neural networks and learning machines |
title_full | Neural networks and learning machines Simon Haykin |
title_fullStr | Neural networks and learning machines Simon Haykin |
title_full_unstemmed | Neural networks and learning machines Simon Haykin |
title_old | Haykin, Simon S. Neural networks |
title_short | Neural networks and learning machines |
title_sort | neural networks and learning machines |
topic | Neural networks (Computer science) Neural networks (Computer science) Problems, exercises, etc Lernendes System (DE-588)4120666-6 gnd Neuronales Netz (DE-588)4226127-2 gnd Maschinelles Lernen (DE-588)4193754-5 gnd |
topic_facet | Neural networks (Computer science) Neural networks (Computer science) Problems, exercises, etc Lernendes System Neuronales Netz Maschinelles Lernen Einführung |
url | http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=017184625&sequence=000002&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA |
work_keys_str_mv | AT haykinsimons neuralnetworksandlearningmachines |