Learning from data: concepts, theory, and methods
Saved in:
Main authors: | Cherkassky, Vladimir; Mulier, Filip |
Format: | Book |
Language: | English |
Published: | Hoboken, NJ : Wiley, [2007] |
Edition: | Second edition |
Subjects: | Methode (method) | Maschinelles Lernen (machine learning) | Datenauswertung (data analysis) |
Online access: | Table of contents only | Table of contents | Blurb |
Description: | XVIII, 538 pages, diagrams |
ISBN: | 9780471681823 0471681822 |
Internal format
MARC
LEADER | 00000nam a2200000zc 4500 | ||
001 | BV022473505 | ||
003 | DE-604 | ||
005 | 20240221 | ||
007 | t | ||
008 | 070620s2007 xxu|||| |||| 00||| eng d | ||
010 | |a 2006038736 | ||
020 | |a 9780471681823 |9 978-0-471-68182-3 | ||
020 | |a 0471681822 |c cloth |9 0-471-68182-2 | ||
035 | |a (OCoLC)76481553 | ||
035 | |a (DE-599)BVBBV022473505 | ||
040 | |a DE-604 |b ger |e aacr | ||
041 | 0 | |a eng | |
044 | |a xxu |c US | ||
049 | |a DE-355 |a DE-473 |a DE-703 |a DE-706 |a DE-29T |a DE-945 | ||
050 | 0 | |a TK5102.9 | |
082 | 0 | |a 006.3/1 | |
084 | |a ST 301 |0 (DE-625)143651: |2 rvk | ||
084 | |a SK 850 |0 (DE-625)143263: |2 rvk | ||
084 | |a CM 4000 |0 (DE-625)18951: |2 rvk | ||
100 | 1 | |a Cherkassky, Vladimir |e Verfasser |0 (DE-588)1102410721 |4 aut | |
245 | 1 | 0 | |a Learning from data |b concepts, theory, and methods |c Vladimir Cherkassky ; Filip Mulier |
250 | |a Second edition | ||
264 | 1 | |a Hoboken, NJ |b Wiley |c [2007] | |
264 | 4 | |c © 2007 | |
300 | |a XVIII, 538 Seiten |b Diagramme | ||
336 | |b txt |2 rdacontent | ||
337 | |b n |2 rdamedia | ||
338 | |b nc |2 rdacarrier | ||
650 | 0 | 7 | |a Methode |0 (DE-588)4038971-6 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Maschinelles Lernen |0 (DE-588)4193754-5 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Datenauswertung |0 (DE-588)4131193-0 |2 gnd |9 rswk-swf |
689 | 0 | 0 | |a Datenauswertung |0 (DE-588)4131193-0 |D s |
689 | 0 | 1 | |a Methode |0 (DE-588)4038971-6 |D s |
689 | 0 | |5 DE-604 | |
689 | 1 | 0 | |a Maschinelles Lernen |0 (DE-588)4193754-5 |D s |
689 | 1 | |5 DE-604 | |
700 | 1 | |a Mulier, Filip |e Verfasser |4 aut | |
856 | 4 | |u http://www.loc.gov/catdir/toc/ecip075/2006038736.html |3 Table of contents only | |
856 | 4 | 2 | |m Digitalisierung UB Regensburg |q application/pdf |u http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=015680951&sequence=000003&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA |3 Inhaltsverzeichnis |
856 | 4 | 2 | |m Digitalisierung UB Regensburg |q application/pdf |u http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=015680951&sequence=000004&line_number=0002&func_code=DB_RECORDS&service_type=MEDIA |3 Klappentext |
999 | |a oai:aleph.bib-bvb.de:BVB01-015680951 |
Record in the search index
_version_ | 1804136561747427328 |
adam_text | CONTENTS

PREFACE xi
NOTATION xvii

1 Introduction 1
1.1 Learning and Statistical Estimation, 2
1.2 Statistical Dependency and Causality, 7
1.3 Characterization of Variables, 10
1.4 Characterization of Uncertainty, 11
1.5 Predictive Learning versus Other Data Analytical Methodologies, 14

2 Problem Statement, Classical Approaches, and Adaptive Learning 19
2.1 Formulation of the Learning Problem, 21
2.1.1 Objective of Learning, 24
2.1.2 Common Learning Tasks, 25
2.1.3 Scope of the Learning Problem Formulation, 29
2.2 Classical Approaches, 30
2.2.1 Density Estimation, 30
2.2.2 Classification, 32
2.2.3 Regression, 34
2.2.4 Solving Problems with Finite Data, 34
2.2.5 Nonparametric Methods, 36
2.2.6 Stochastic Approximation, 39
2.3 Adaptive Learning: Concepts and Inductive Principles, 40
2.3.1 Philosophy, Major Concepts, and Issues, 40
2.3.2 A Priori Knowledge and Model Complexity, 43
2.3.3 Inductive Principles, 45
2.3.4 Alternative Learning Formulations, 55
2.4 Summary, 58

3 Regularization Framework 61
3.1 Curse and Complexity of Dimensionality, 62
3.2 Function Approximation and Characterization of Complexity, 66
3.3 Penalization, 70
3.3.1 Parametric Penalties, 72
3.3.2 Nonparametric Penalties, 73
3.4 Model Selection (Complexity Control), 73
3.4.1 Analytical Model Selection Criteria, 75
3.4.2 Model Selection via Resampling, 78
3.4.3 Bias-Variance Tradeoff, 80
3.4.4 Example of Model Selection, 85
3.4.5 Function Approximation versus Predictive Learning, 88
3.5 Summary, 96

4 Statistical Learning Theory 99
4.1 Conditions for Consistency and Convergence of ERM, 101
4.2 Growth Function and VC Dimension, 107
4.2.1 VC Dimension for Classification and Regression Problems, 110
4.2.2 Examples of Calculating VC Dimension, 111
4.3 Bounds on the Generalization, 115
4.3.1 Classification, 116
4.3.2 Regression, 118
4.3.3 Generalization Bounds and Sampling Theorem, 120
4.4 Structural Risk Minimization, 122
4.4.1 Dictionary Representation, 124
4.4.2 Feature Selection, 125
4.4.3 Penalization Formulation, 126
4.4.4 Input Preprocessing, 126
4.4.5 Initial Conditions for Training Algorithm, 127
4.5 Comparisons of Model Selection for Regression, 128
4.5.1 Model Selection for Linear Estimators, 134
4.5.2 Model Selection for k-Nearest-Neighbor Regression, 137
4.5.3 Model Selection for Linear Subset Regression, 140
4.5.4 Discussion, 141
4.6 Measuring the VC Dimension, 143
4.7 VC Dimension, Occam's Razor, and Popper's Falsifiability, 146
4.8 Summary and Discussion, 149

5 Nonlinear Optimization Strategies 151
5.1 Stochastic Approximation Methods, 154
5.1.1 Linear Parameter Estimation, 155
5.1.2 Backpropagation Training of MLP Networks, 156
5.2 Iterative Methods, 161
5.2.1 EM Methods for Density Estimation, 161
5.2.2 Generalized Inverse Training of MLP Networks, 164
5.3 Greedy Optimization, 169
5.3.1 Neural Network Construction Algorithms, 169
5.3.2 Classification and Regression Trees, 170
5.4 Feature Selection, Optimization, and Statistical Learning Theory, 173
5.5 Summary, 175

6 Methods for Data Reduction and Dimensionality Reduction 177
6.1 Vector Quantization and Clustering, 183
6.1.1 Optimal Source Coding in Vector Quantization, 184
6.1.2 Generalized Lloyd Algorithm, 187
6.1.3 Clustering, 191
6.1.4 EM Algorithm for VQ and Clustering, 192
6.1.5 Fuzzy Clustering, 195
6.2 Dimensionality Reduction: Statistical Methods, 201
6.2.1 Linear Principal Components, 202
6.2.2 Principal Curves and Surfaces, 205
6.2.3 Multidimensional Scaling, 209
6.3 Dimensionality Reduction: Neural Network Methods, 214
6.3.1 Discrete Principal Curves and Self-Organizing Map Algorithm, 215
6.3.2 Statistical Interpretation of the SOM Method, 218
6.3.3 Flow-Through Version of the SOM and Learning Rate Schedules, 222
6.3.4 SOM Applications and Modifications, 224
6.3.5 Self-Supervised MLP, 230
6.4 Methods for Multivariate Data Analysis, 232
6.4.1 Factor Analysis, 233
6.4.2 Independent Component Analysis, 242
6.5 Summary, 247

7 Methods for Regression 249
7.1 Taxonomy: Dictionary versus Kernel Representation, 252
7.2 Linear Estimators, 256
7.2.1 Estimation of Linear Models and Equivalence of Representations, 258
7.2.2 Analytic Form of Cross-Validation, 262
7.2.3 Estimating Complexity of Penalized Linear Models, 263
7.2.4 Nonadaptive Methods, 269
7.3 Adaptive Dictionary Methods, 277
7.3.1 Additive Methods and Projection Pursuit Regression, 279
7.3.2 Multilayer Perceptrons and Backpropagation, 284
7.3.3 Multivariate Adaptive Regression Splines, 293
7.3.4 Orthogonal Basis Functions and Wavelet Signal Denoising, 298
7.4 Adaptive Kernel Methods and Local Risk Minimization, 309
7.4.1 Generalized Memory-Based Learning, 313
7.4.2 Constrained Topological Mapping, 314
7.5 Empirical Studies, 319
7.5.1 Predicting Net Asset Value (NAV) of Mutual Funds, 320
7.5.2 Comparison of Adaptive Methods for Regression, 326
7.6 Combining Predictive Models, 332
7.7 Summary, 337

8 Classification 340
8.1 Statistical Learning Theory Formulation, 343
8.2 Classical Formulation, 348
8.2.1 Statistical Decision Theory, 348
8.2.2 Fisher's Linear Discriminant Analysis, 362
8.3 Methods for Classification, 366
8.3.1 Regression-Based Methods, 368
8.3.2 Tree-Based Methods, 378
8.3.3 Nearest-Neighbor and Prototype Methods, 382
8.3.4 Empirical Comparisons, 385
8.4 Combining Methods and Boosting, 390
8.4.1 Boosting as an Additive Model, 395
8.4.2 Boosting for Regression Problems, 400
8.5 Summary, 401

9 Support Vector Machines 404
9.1 Motivation for Margin-Based Loss, 408
9.2 Margin-Based Loss, Robustness, and Complexity Control, 414
9.3 Optimal Separating Hyperplane, 418
9.4 High-Dimensional Mapping and Inner Product Kernels, 426
9.5 Support Vector Machine for Classification, 430
9.6 Support Vector Implementations, 438
9.7 Support Vector Regression, 439
9.8 SVM Model Selection, 445
9.9 Support Vector Machines and Regularization, 453
9.10 Single-Class SVM and Novelty Detection, 460
9.11 Summary and Discussion, 464

10 Noninductive Inference and Alternative Learning Formulations 467
10.1 Sparse High-Dimensional Data, 470
10.2 Transduction, 474
10.3 Inference Through Contradictions, 481
10.4 Multiple-Model Estimation, 486
10.5 Summary, 496

11 Concluding Remarks 499

Appendix A: Review of Nonlinear Optimization 507
Appendix B: Eigenvalues and Singular Value Decomposition 514

References 519
Index 533
Learning from Data provides a unified treatment of the principles and methods for learning dependencies from data. It establishes a general conceptual framework in which various learning methods from statistics, neural networks, and pattern recognition can be applied, showing that a few fundamental principles underlie most new methods being proposed today in statistics, engineering, and computer science.

Since the first edition was published, the field of data-driven learning has experienced rapid growth. This Second Edition covers these developments with a completely revised chapter on support vector machines, a new chapter on noninductive inference and alternative learning formulations, and an in-depth discussion of the VC theoretical approach as it relates to other paradigms.

Complete with over one hundred illustrations, case studies, examples, and chapter summaries, Learning from Data accommodates both beginning and advanced graduate students in engineering, computer science, and statistics. It is also indispensable for researchers and practitioners in these areas who must understand the principles and methods for learning dependencies from data.
|
any_adam_object | 1 |
any_adam_object_boolean | 1 |
author | Cherkassky, Vladimir Mulier, Filip |
author_GND | (DE-588)1102410721 |
author_facet | Cherkassky, Vladimir Mulier, Filip |
author_role | aut aut |
author_sort | Cherkassky, Vladimir |
author_variant | v c vc f m fm |
building | Verbundindex |
bvnumber | BV022473505 |
callnumber-first | T - Technology |
callnumber-label | TK5102 |
callnumber-raw | TK5102.9 |
callnumber-search | TK5102.9 |
callnumber-sort | TK 45102.9 |
callnumber-subject | TK - Electrical and Nuclear Engineering |
classification_rvk | ST 301 SK 850 CM 4000 |
ctrlnum | (OCoLC)76481553 (DE-599)BVBBV022473505 |
dewey-full | 006.3/1 |
dewey-hundreds | 000 - Computer science, information, general works |
dewey-ones | 006 - Special computer methods |
dewey-raw | 006.3/1 |
dewey-search | 006.3/1 |
dewey-sort | 16.3 11 |
dewey-tens | 000 - Computer science, information, general works |
discipline | Informatik Psychologie Mathematik |
discipline_str_mv | Informatik Psychologie Mathematik |
edition | Second edition |
format | Book |
id | DE-604.BV022473505 |
illustrated | Not Illustrated |
index_date | 2024-07-02T17:45:29Z |
indexdate | 2024-07-09T20:58:22Z |
institution | BVB |
isbn | 9780471681823 0471681822 |
language | English |
lccn | 2006038736 |
oai_aleph_id | oai:aleph.bib-bvb.de:BVB01-015680951 |
oclc_num | 76481553 |
open_access_boolean | |
owner | DE-355 DE-BY-UBR DE-473 DE-BY-UBG DE-703 DE-706 DE-29T DE-945 |
owner_facet | DE-355 DE-BY-UBR DE-473 DE-BY-UBG DE-703 DE-706 DE-29T DE-945 |
physical | XVIII, 538 Seiten Diagramme |
publishDate | 2007 |
publishDateSearch | 2007 |
publishDateSort | 2007 |
publisher | Wiley |
record_format | marc |
spelling | Cherkassky, Vladimir Verfasser (DE-588)1102410721 aut Learning from data concepts, theory, and methods Vladimir Cherkassky ; Filip Mulier Second edition Hoboken, NJ Wiley [2007] © 2007 XVIII, 538 Seiten Diagramme txt rdacontent n rdamedia nc rdacarrier Methode (DE-588)4038971-6 gnd rswk-swf Maschinelles Lernen (DE-588)4193754-5 gnd rswk-swf Datenauswertung (DE-588)4131193-0 gnd rswk-swf Datenauswertung (DE-588)4131193-0 s Methode (DE-588)4038971-6 s DE-604 Maschinelles Lernen (DE-588)4193754-5 s Mulier, Filip Verfasser aut http://www.loc.gov/catdir/toc/ecip075/2006038736.html Table of contents only Digitalisierung UB Regensburg application/pdf http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=015680951&sequence=000003&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA Inhaltsverzeichnis Digitalisierung UB Regensburg application/pdf http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=015680951&sequence=000004&line_number=0002&func_code=DB_RECORDS&service_type=MEDIA Klappentext |
spellingShingle | Cherkassky, Vladimir Mulier, Filip Learning from data concepts, theory, and methods Methode (DE-588)4038971-6 gnd Maschinelles Lernen (DE-588)4193754-5 gnd Datenauswertung (DE-588)4131193-0 gnd |
subject_GND | (DE-588)4038971-6 (DE-588)4193754-5 (DE-588)4131193-0 |
title | Learning from data concepts, theory, and methods |
title_auth | Learning from data concepts, theory, and methods |
title_exact_search | Learning from data concepts, theory, and methods |
title_exact_search_txtP | Learning from data concepts, theory, and methods |
title_full | Learning from data concepts, theory, and methods Vladimir Cherkassky ; Filip Mulier |
title_fullStr | Learning from data concepts, theory, and methods Vladimir Cherkassky ; Filip Mulier |
title_full_unstemmed | Learning from data concepts, theory, and methods Vladimir Cherkassky ; Filip Mulier |
title_short | Learning from data |
title_sort | learning from data concepts theory and methods |
title_sub | concepts, theory, and methods |
topic | Methode (DE-588)4038971-6 gnd Maschinelles Lernen (DE-588)4193754-5 gnd Datenauswertung (DE-588)4131193-0 gnd |
topic_facet | Methode Maschinelles Lernen Datenauswertung |
url | http://www.loc.gov/catdir/toc/ecip075/2006038736.html http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=015680951&sequence=000003&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=015680951&sequence=000004&line_number=0002&func_code=DB_RECORDS&service_type=MEDIA |
work_keys_str_mv | AT cherkasskyvladimir learningfromdataconceptstheoryandmethods AT mulierfilip learningfromdataconceptstheoryandmethods |