The elements of statistical learning: data mining, inference, and prediction
Saved in:
Main authors: | Hastie, Trevor 1953-; Tibshirani, Robert 1956-; Friedman, Jerome H. 1939- |
---|---|
Format: | Book |
Language: | English |
Published: | New York, NY [etc.]: Springer, ©2001 |
Edition: | Corrected as of the 4th printing, [reprint] |
Series: | Springer series in statistics |
Subjects: | Maschinelles Lernen; Statistik; Datenanalyse |
Online access: | Table of contents |
Description: | XVI, 533 p.: ill., graphs |
ISBN: | 9780387952840 0387952845 |
Internal format
MARC
LEADER | 00000nam a2200000 c 4500 | ||
---|---|---|---|
001 | BV022511798 | ||
003 | DE-604 | ||
005 | 20090402 | ||
007 | t | ||
008 | 070716s2001 xxuad|| |||| 00||| eng d | ||
016 | 7 | |a 962876348 |2 DE-101 | |
020 | |a 9780387952840 |9 978-0-387-95284-0 | ||
020 | |a 0387952845 |9 0-387-95284-5 | ||
035 | |a (OCoLC)51681104 | ||
035 | |a (DE-599)BVBBV022511798 | ||
040 | |a DE-604 |b ger |e rakwb | ||
041 | 0 | |a eng | |
044 | |a xxu |c XD-US | ||
049 | |a DE-703 |a DE-19 |a DE-91G |a DE-521 | ||
050 | 0 | |a Q325.75H37 2001 | |
082 | 0 | |a 006.3/1 |2 21 | |
082 | 0 | |a 006.3/1 21 | |
084 | |a CW 5000 |0 (DE-625)19182: |2 rvk | ||
084 | |a QH 231 |0 (DE-625)141546: |2 rvk | ||
084 | |a SK 830 |0 (DE-625)143259: |2 rvk | ||
084 | |a 15 |2 sdnb | ||
084 | |a MAT 620f |2 stub | ||
084 | |a 27 |2 sdnb | ||
084 | |a 28 |2 sdnb | ||
084 | |a DAT 708f |2 stub | ||
100 | 1 | |a Hastie, Trevor |d 1953- |e Verfasser |0 (DE-588)172128242 |4 aut | |
245 | 1 | 0 | |a The elements of statistical learning |b data mining, inference, and prediction |c Trevor Hastie ; Robert Tibshirani ; Jerome Friedman |
250 | |a Corr. as of the 4. printing, [Nachdr.] | ||
264 | 1 | |a New York, NY [u.a.] |b Springer |c ©2001 | |
300 | |a XVI, 533 S. |b Ill., graph. Darst. | ||
336 | |b txt |2 rdacontent | ||
337 | |b n |2 rdamedia | ||
338 | |b nc |2 rdacarrier | ||
490 | 0 | |a Springer series in statistics | |
650 | 4 | |a Apprentissage supervisé (Intelligence artificielle) | |
650 | 7 | |a Data mining |2 gtt | |
650 | 7 | |a Machine-learning |2 gtt | |
650 | 7 | |a Prognoses |2 gtt | |
650 | 4 | |a Supervised learning (Machine learning) | |
650 | 0 | 7 | |a Maschinelles Lernen |0 (DE-588)4193754-5 |2 gnd |9 rswk-swf |
650 | 0 | 4 | |a Statistik |9 rswk-swf |
650 | 0 | 7 | |a Statistik |0 (DE-588)4056995-0 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Datenanalyse |0 (DE-588)4123037-1 |2 gnd |9 rswk-swf |
689 | 0 | 0 | |a Statistik |0 (DE-588)4056995-0 |D s |
689 | 0 | 1 | |a Maschinelles Lernen |0 (DE-588)4193754-5 |D s |
689 | 0 | |5 DE-604 | |
689 | 1 | 0 | |a Statistik |A s |
689 | 1 | 1 | |a Datenanalyse |0 (DE-588)4123037-1 |D s |
689 | 1 | |8 1\p |5 DE-604 | |
700 | 1 | |a Tibshirani, Robert |d 1956- |e Verfasser |0 (DE-588)172417740 |4 aut | |
700 | 1 | |a Friedman, Jerome H. |d 1939- |e Verfasser |0 (DE-588)134071484 |4 aut | |
856 | 4 | 2 | |m DNB Datenaustausch |q application/pdf |u http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=015718680&sequence=000001&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA |3 Inhaltsverzeichnis |
999 | |a oai:aleph.bib-bvb.de:BVB01-015718680 | ||
883 | 1 | |8 1\p |a cgwrk |d 20201028 |q DE-101 |u https://d-nb.info/provenance/plan#cgwrk |
Record in the search index
_version_ | 1804136614927007744 |
---|---|
adam_text | TREVOR HASTIE
ROBERT TIBSHIRANI
JEROME FRIEDMAN
THE ELEMENTS OF
STATISTICAL LEARNING
DATA MINING, INFERENCE, AND PREDICTION
WITH 200 FULL-COLOR ILLUSTRATIONS
SPRINGER
CONTENTS
PREFACE VII
1 INTRODUCTION 1
2 OVERVIEW OF SUPERVISED LEARNING 9
2.1 INTRODUCTION 9
2.2 VARIABLE TYPES AND TERMINOLOGY 9
2.3 TWO SIMPLE APPROACHES TO PREDICTION: LEAST SQUARES AND
NEAREST NEIGHBORS 11
2.3.1 LINEAR MODELS AND LEAST SQUARES 11
2.3.2 NEAREST-NEIGHBOR METHODS 14
2.3.3 FROM LEAST SQUARES TO NEAREST NEIGHBORS 16
2.4 STATISTICAL DECISION THEORY 18
2.5 LOCAL METHODS IN HIGH DIMENSIONS 22
2.6 STATISTICAL MODELS, SUPERVISED LEARNING AND FUNCTION
APPROXIMATION 28
2.6.1 A STATISTICAL MODEL FOR THE JOINT DISTRIBUTION
PR(X,Y) 28
2.6.2 SUPERVISED LEARNING 29
2.6.3 FUNCTION APPROXIMATION 29
2.7 STRUCTURED REGRESSION MODELS 32
2.7.1 DIFFICULTY OF THE PROBLEM 32
2.8 CLASSES OF RESTRICTED ESTIMATORS 33
2.8.1 ROUGHNESS PENALTY AND BAYESIAN METHODS 34
2.8.2 KERNEL METHODS AND LOCAL REGRESSION 34
2.8.3 BASIS FUNCTIONS AND DICTIONARY METHODS 35
2.9 MODEL SELECTION AND THE BIAS-VARIANCE TRADEOFF 37
BIBLIOGRAPHIC NOTES 39
EXERCISES 39
3 LINEAR METHODS FOR REGRESSION 41
3.1 INTRODUCTION 41
3.2 LINEAR REGRESSION MODELS AND LEAST SQUARES 42
3.2.1 EXAMPLE: PROSTATE CANCER 47
3.2.2 THE GAUSS-MARKOV THEOREM 49
3.3 MULTIPLE REGRESSION FROM SIMPLE UNIVARIATE REGRESSION ... 50
3.3.1 MULTIPLE OUTPUTS 54
3.4 SUBSET SELECTION AND COEFFICIENT SHRINKAGE 55
3.4.1 SUBSET SELECTION 55
3.4.2 PROSTATE CANCER DATA EXAMPLE (CONTINUED) 57
3.4.3 SHRINKAGE METHODS 59
3.4.4 METHODS USING DERIVED INPUT DIRECTIONS 66
3.4.5 DISCUSSION: A COMPARISON OF THE SELECTION AND
SHRINKAGE METHODS 68
3.4.6 MULTIPLE OUTCOME SHRINKAGE AND SELECTION 73
3.5 COMPUTATIONAL CONSIDERATIONS 75
BIBLIOGRAPHIC NOTES 75
EXERCISES 75
4 LINEAR METHODS FOR CLASSIFICATION 79
4.1 INTRODUCTION 79
4.2 LINEAR REGRESSION OF AN INDICATOR MATRIX 81
4.3 LINEAR DISCRIMINANT ANALYSIS 84
4.3.1 REGULARIZED DISCRIMINANT ANALYSIS 90
4.3.2 COMPUTATIONS FOR LDA 91
4.3.3 REDUCED-RANK LINEAR DISCRIMINANT ANALYSIS .... 91
4.4 LOGISTIC REGRESSION 95
4.4.1 FITTING LOGISTIC REGRESSION MODELS 98
4.4.2 EXAMPLE: SOUTH AFRICAN HEART DISEASE 100
4.4.3 QUADRATIC APPROXIMATIONS AND INFERENCE 102
4.4.4 LOGISTIC REGRESSION OR LDA? 103
4.5 SEPARATING HYPERPLANES 105
4.5.1 ROSENBLATT'S PERCEPTRON LEARNING ALGORITHM .... 107
4.5.2 OPTIMAL SEPARATING HYPERPLANES 108
BIBLIOGRAPHIC NOTES 111
EXERCISES 111
5 BASIS EXPANSIONS AND REGULARIZATION 115
5.1 INTRODUCTION 115
5.2 PIECEWISE POLYNOMIALS AND SPLINES 117
5.2.1 NATURAL CUBIC SPLINES 120
5.2.2 EXAMPLE: SOUTH AFRICAN HEART DISEASE (CONTINUED) . 122
5.2.3 EXAMPLE: PHONEME RECOGNITION 124
5.3 FILTERING AND FEATURE EXTRACTION 126
5.4 SMOOTHING SPLINES 127
5.4.1 DEGREES OF FREEDOM AND SMOOTHER MATRICES 129
5.5 AUTOMATIC SELECTION OF THE SMOOTHING PARAMETERS 134
5.5.1 FIXING THE DEGREES OF FREEDOM 134
5.5.2 THE BIAS-VARIANCE TRADEOFF 134
5.6 NONPARAMETRIC LOGISTIC REGRESSION 137
5.7 MULTIDIMENSIONAL SPLINES 138
5.8 REGULARIZATION AND REPRODUCING KERNEL HILBERT SPACES . . . 144
5.8.1 SPACES OF FUNCTIONS GENERATED BY KERNELS 144
5.8.2 EXAMPLES OF RKHS 146
5.9 WAVELET SMOOTHING 148
5.9.1 WAVELET BASES AND THE WAVELET TRANSFORM 150
5.9.2 ADAPTIVE WAVELET FILTERING 153
BIBLIOGRAPHIC NOTES 155
EXERCISES 155
APPENDIX: COMPUTATIONAL CONSIDERATIONS FOR SPLINES 160
APPENDIX: B-SPLINES 160
APPENDIX: COMPUTATIONS FOR SMOOTHING SPLINES 163
6 KERNEL METHODS 165
6.1 ONE-DIMENSIONAL KERNEL SMOOTHERS 165
6.1.1 LOCAL LINEAR REGRESSION 168
6.1.2 LOCAL POLYNOMIAL REGRESSION 171
6.2 SELECTING THE WIDTH OF THE KERNEL 172
6.3 LOCAL REGRESSION IN IR^P 174
6.4 STRUCTURED LOCAL REGRESSION MODELS IN IR^P 175
6.4.1 STRUCTURED KERNELS 177
6.4.2 STRUCTURED REGRESSION FUNCTIONS 177
6.5 LOCAL LIKELIHOOD AND OTHER MODELS 179
6.6 KERNEL DENSITY ESTIMATION AND CLASSIFICATION 182
6.6.1 KERNEL DENSITY ESTIMATION 182
6.6.2 KERNEL DENSITY CLASSIFICATION 184
6.6.3 THE NAIVE BAYES CLASSIFIER 184
6.7 RADIAL BASIS FUNCTIONS AND KERNELS 186
6.8 MIXTURE MODELS FOR DENSITY ESTIMATION AND CLASSIFICATION . 188
6.9 COMPUTATIONAL CONSIDERATIONS 190
BIBLIOGRAPHIC NOTES 190
EXERCISES 190
7 MODEL ASSESSMENT AND SELECTION 193
7.1 INTRODUCTION 193
7.2 BIAS, VARIANCE AND MODEL COMPLEXITY 193
7.3 THE BIAS-VARIANCE DECOMPOSITION 196
7.3.1 EXAMPLE: BIAS-VARIANCE TRADEOFF 198
7.4 OPTIMISM OF THE TRAINING ERROR RATE 200
7.5 ESTIMATES OF IN-SAMPLE PREDICTION ERROR 203
7.6 THE EFFECTIVE NUMBER OF PARAMETERS 205
7.7 THE BAYESIAN APPROACH AND BIC 206
7.8 MINIMUM DESCRIPTION LENGTH 208
7.9 VAPNIK-CHERVONENKIS DIMENSION 210
7.9.1 EXAMPLE (CONTINUED) 212
7.10 CROSS-VALIDATION 214
7.11 BOOTSTRAP METHODS 217
7.11.1 EXAMPLE (CONTINUED) 220
BIBLIOGRAPHIC NOTES 222
EXERCISES 222
8 MODEL INFERENCE AND AVERAGING 225
8.1 INTRODUCTION 225
8.2 THE BOOTSTRAP AND MAXIMUM LIKELIHOOD METHODS 225
8.2.1 A SMOOTHING EXAMPLE 225
8.2.2 MAXIMUM LIKELIHOOD INFERENCE 229
8.2.3 BOOTSTRAP VERSUS MAXIMUM LIKELIHOOD 231
8.3 BAYESIAN METHODS 231
8.4 RELATIONSHIP BETWEEN THE BOOTSTRAP
AND BAYESIAN INFERENCE 235
8.5 THE EM ALGORITHM 236
8.5.1 TWO-COMPONENT MIXTURE MODEL 236
8.5.2 THE EM ALGORITHM IN GENERAL 240
8.5.3 EM AS A MAXIMIZATION-MAXIMIZATION PROCEDURE . . 241
8.6 MCMC FOR SAMPLING FROM THE POSTERIOR 243
8.7 BAGGING 246
8.7.1 EXAMPLE: TREES WITH SIMULATED DATA 247
8.8 MODEL AVERAGING AND STACKING 250
8.9 STOCHASTIC SEARCH: BUMPING 253
BIBLIOGRAPHIC NOTES 254
EXERCISES 255
9 ADDITIVE MODELS, TREES, AND RELATED METHODS 257
9.1 GENERALIZED ADDITIVE MODELS 257
9.1.1 FITTING ADDITIVE MODELS 259
9.1.2 EXAMPLE: ADDITIVE LOGISTIC REGRESSION 261
9.1.3 SUMMARY 266
9.2 TREE-BASED METHODS 266
9.2.1 BACKGROUND 266
9.2.2 REGRESSION TREES 267
9.2.3 CLASSIFICATION TREES 270
9.2.4 OTHER ISSUES 272
9.2.5 SPAM EXAMPLE (CONTINUED) 275
9.3 PRIM: BUMP HUNTING 279
9.3.1 SPAM EXAMPLE (CONTINUED) 282
9.4 MARS: MULTIVARIATE ADAPTIVE REGRESSION SPLINES 283
9.4.1 SPAM EXAMPLE (CONTINUED) 287
9.4.2 EXAMPLE (SIMULATED DATA) 288
9.4.3 OTHER ISSUES 289
9.5 HIERARCHICAL MIXTURES OF EXPERTS 290
9.6 MISSING DATA 293
9.7 COMPUTATIONAL CONSIDERATIONS 295
BIBLIOGRAPHIC NOTES 295
EXERCISES 296
10 BOOSTING AND ADDITIVE TREES 299
10.1 BOOSTING METHODS 299
10.1.1 OUTLINE OF THIS CHAPTER 302
10.2 BOOSTING FITS AN ADDITIVE MODEL 303
10.3 FORWARD STAGEWISE ADDITIVE MODELING 304
10.4 EXPONENTIAL LOSS AND ADABOOST 305
10.5 WHY EXPONENTIAL LOSS? 306
10.6 LOSS FUNCTIONS AND ROBUSTNESS 308
10.7 "OFF-THE-SHELF" PROCEDURES FOR DATA MINING 312
10.8 EXAMPLE: SPAM DATA 314
10.9 BOOSTING TREES 316
10.10 NUMERICAL OPTIMIZATION 319
10.10.1 STEEPEST DESCENT 320
10.10.2 GRADIENT BOOSTING 320
10.10.3 MART 322
10.11 RIGHT-SIZED TREES FOR BOOSTING 323
10.12 REGULARIZATION 324
10.12.1 SHRINKAGE 326
10.12.2 PENALIZED REGRESSION 328
10.12.3 VIRTUES OF THE L1 PENALTY (LASSO) OVER L2 330
10.13 INTERPRETATION 331
10.13.1 RELATIVE IMPORTANCE OF PREDICTOR VARIABLES .... 331
10.13.2 PARTIAL DEPENDENCE PLOTS 333
10.14 ILLUSTRATIONS 335
10.14.1 CALIFORNIA HOUSING 335
10.14.2 DEMOGRAPHICS DATA 339
BIBLIOGRAPHIC NOTES 340
EXERCISES 344
11 NEURAL NETWORKS 347
11.1 INTRODUCTION 347
11.2 PROJECTION PURSUIT REGRESSION 347
11.3 NEURAL NETWORKS 350
11.4 FITTING NEURAL NETWORKS 353
11.5 SOME ISSUES IN TRAINING NEURAL NETWORKS 355
11.5.1 STARTING VALUES 355
11.5.2 OVERFITTING 356
11.5.3 SCALING OF THE INPUTS 358
11.5.4 NUMBER OF HIDDEN UNITS AND LAYERS 358
11.5.5 MULTIPLE MINIMA 359
11.6 EXAMPLE: SIMULATED DATA 359
11.7 EXAMPLE: ZIP CODE DATA 362
11.8 DISCUSSION 366
11.9 COMPUTATIONAL CONSIDERATIONS 367
BIBLIOGRAPHIC NOTES 367
EXERCISES 369
12 SUPPORT VECTOR MACHINES AND
FLEXIBLE DISCRIMINANTS 371
12.1 INTRODUCTION 371
12.2 THE SUPPORT VECTOR CLASSIFIER 371
12.2.1 COMPUTING THE SUPPORT VECTOR CLASSIFIER 373
12.2.2 MIXTURE EXAMPLE (CONTINUED) 375
12.3 SUPPORT VECTOR MACHINES 377
12.3.1 COMPUTING THE SVM FOR CLASSIFICATION 377
12.3.2 THE SVM AS A PENALIZATION METHOD 380
12.3.3 FUNCTION ESTIMATION AND REPRODUCING KERNELS . . . 381
12.3.4 SVMS AND THE CURSE OF DIMENSIONALITY 384
12.3.5 SUPPORT VECTOR MACHINES FOR REGRESSION 385
12.3.6 REGRESSION AND KERNELS 387
12.3.7 DISCUSSION 389
12.4 GENERALIZING LINEAR DISCRIMINANT ANALYSIS 390
12.5 FLEXIBLE DISCRIMINANT ANALYSIS 391
12.5.1 COMPUTING THE FDA ESTIMATES 394
12.6 PENALIZED DISCRIMINANT ANALYSIS 397
12.7 MIXTURE DISCRIMINANT ANALYSIS 399
12.7.1 EXAMPLE: WAVEFORM DATA 402
BIBLIOGRAPHIC NOTES 406
EXERCISES 406
13 PROTOTYPE METHODS AND NEAREST-NEIGHBORS 411
13.1 INTRODUCTION 411
13.2 PROTOTYPE METHODS 411
13.2.1 K-MEANS CLUSTERING 412
13.2.2 LEARNING VECTOR QUANTIZATION 414
13.2.3 GAUSSIAN MIXTURES 415
13.3 K-NEAREST-NEIGHBOR CLASSIFIERS 415
13.3.1 EXAMPLE: A COMPARATIVE STUDY 420
13.3.2 EXAMPLE: K-NEAREST-NEIGHBORS AND IMAGE SCENE
CLASSIFICATION 422
13.3.3 INVARIANT METRICS AND TANGENT DISTANCE 423
13.4 ADAPTIVE NEAREST-NEIGHBOR METHODS 427
13.4.1 EXAMPLE 430
13.4.2 GLOBAL DIMENSION REDUCTION FOR NEAREST-NEIGHBORS . 431
13.5 COMPUTATIONAL CONSIDERATIONS 432
BIBLIOGRAPHIC NOTES 433
EXERCISES 433
14 UNSUPERVISED LEARNING 437
14.1 INTRODUCTION 437
14.2 ASSOCIATION RULES 439
14.2.1 MARKET BASKET ANALYSIS 440
14.2.2 THE APRIORI ALGORITHM 441
14.2.3 EXAMPLE: MARKET BASKET ANALYSIS 444
14.2.4 UNSUPERVISED AS SUPERVISED LEARNING 447
14.2.5 GENERALIZED ASSOCIATION RULES 449
14.2.6 CHOICE OF SUPERVISED LEARNING METHOD 451
14.2.7 EXAMPLE: MARKET BASKET ANALYSIS (CONTINUED) . . . 451
14.3 CLUSTER ANALYSIS 453
14.3.1 PROXIMITY MATRICES 455
14.3.2 DISSIMILARITIES BASED ON ATTRIBUTES 455
14.3.3 OBJECT DISSIMILARITY 457
14.3.4 CLUSTERING ALGORITHMS 459
14.3.5 COMBINATORIAL ALGORITHMS 460
14.3.6 K-MEANS 461
14.3.7 GAUSSIAN MIXTURES AS SOFT K-MEANS CLUSTERING . . . 463
14.3.8 EXAMPLE: HUMAN TUMOR MICROARRAY DATA 463
14.3.9 VECTOR QUANTIZATION 466
14.3.10 K-MEDOIDS 468
14.3.11 PRACTICAL ISSUES 470
14.3.12 HIERARCHICAL CLUSTERING 472
14.4 SELF-ORGANIZING MAPS 480
14.5 PRINCIPAL COMPONENTS, CURVES AND SURFACES 485
14.5.1 PRINCIPAL COMPONENTS 485
14.5.2 PRINCIPAL CURVES AND SURFACES 491
14.6 INDEPENDENT COMPONENT ANALYSIS AND EXPLORATORY
PROJECTION PURSUIT 494
14.6.1 LATENT VARIABLES AND FACTOR ANALYSIS 494
14.6.2 INDEPENDENT COMPONENT ANALYSIS 496
14.6.3 EXPLORATORY PROJECTION PURSUIT 500
14.6.4 A DIFFERENT APPROACH TO ICA 500
14.7 MULTIDIMENSIONAL SCALING 502
BIBLIOGRAPHIC NOTES 503
EXERCISES 504
REFERENCES 509
AUTHOR INDEX 523
INDEX 527
|
any_adam_object | 1 |
any_adam_object_boolean | 1 |
author | Hastie, Trevor 1953- Tibshirani, Robert 1956- Friedman, Jerome H. 1939- |
author_GND | (DE-588)172128242 (DE-588)172417740 (DE-588)134071484 |
author_facet | Hastie, Trevor 1953- Tibshirani, Robert 1956- Friedman, Jerome H. 1939- |
author_role | aut aut aut |
author_sort | Hastie, Trevor 1953- |
author_variant | t h th r t rt j h f jh jhf |
building | Verbundindex |
bvnumber | BV022511798 |
callnumber-first | Q - Science |
callnumber-label | Q325 |
callnumber-raw | Q325.75H37 2001 |
callnumber-search | Q325.75H37 2001 |
callnumber-sort | Q 3325.75 H37 42001 |
callnumber-subject | Q - General Science |
classification_rvk | CW 5000 QH 231 SK 830 |
classification_tum | MAT 620f DAT 708f |
ctrlnum | (OCoLC)51681104 (DE-599)BVBBV022511798 |
dewey-full | 006.3/1 006.3/121 |
dewey-hundreds | 000 - Computer science, information, general works |
dewey-ones | 006 - Special computer methods |
dewey-raw | 006.3/1 006.3/1 21 |
dewey-search | 006.3/1 006.3/1 21 |
dewey-sort | 16.3 11 |
dewey-tens | 000 - Computer science, information, general works |
discipline | Informatik Psychologie Mathematik Wirtschaftswissenschaften |
discipline_str_mv | Informatik Psychologie Mathematik Wirtschaftswissenschaften |
edition | Corr. as of the 4. printing, [Nachdr.] |
format | Book |
fullrecord | <?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>02597nam a2200673 c 4500</leader><controlfield tag="001">BV022511798</controlfield><controlfield tag="003">DE-604</controlfield><controlfield tag="005">20090402 </controlfield><controlfield tag="007">t</controlfield><controlfield tag="008">070716s2001 xxuad|| |||| 00||| eng d</controlfield><datafield tag="016" ind1="7" ind2=" "><subfield code="a">962876348</subfield><subfield code="2">DE-101</subfield></datafield><datafield tag="020" ind1=" " ind2=" "><subfield code="a">9780387952840</subfield><subfield code="9">978-0-387-95284-0</subfield></datafield><datafield tag="020" ind1=" " ind2=" "><subfield code="a">0387952845</subfield><subfield code="9">0-387-95284-5</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(OCoLC)51681104</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-599)BVBBV022511798</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-604</subfield><subfield code="b">ger</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1="0" ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="044" ind1=" " ind2=" "><subfield code="a">xxu</subfield><subfield code="c">XD-US</subfield></datafield><datafield tag="049" ind1=" " ind2=" "><subfield code="a">DE-703</subfield><subfield code="a">DE-19</subfield><subfield code="a">DE-91G</subfield><subfield code="a">DE-521</subfield></datafield><datafield tag="050" ind1=" " ind2="0"><subfield code="a">Q325.75H37 2001</subfield></datafield><datafield tag="082" ind1="0" ind2=" "><subfield code="a">006.3/1</subfield><subfield code="2">21</subfield></datafield><datafield tag="082" ind1="0" ind2=" "><subfield code="a">006.3/1 21</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">CW 5000</subfield><subfield code="0">(DE-625)19182:</subfield><subfield code="2">rvk</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">QH 231</subfield><subfield code="0">(DE-625)141546:</subfield><subfield code="2">rvk</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">SK 830</subfield><subfield code="0">(DE-625)143259:</subfield><subfield code="2">rvk</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">15</subfield><subfield code="2">sdnb</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">MAT 620f</subfield><subfield code="2">stub</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">27</subfield><subfield code="2">sdnb</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">28</subfield><subfield code="2">sdnb</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">DAT 708f</subfield><subfield code="2">stub</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Hastie, Trevor</subfield><subfield code="d">1953-</subfield><subfield code="e">Verfasser</subfield><subfield code="0">(DE-588)172128242</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">The elements of statistical learning</subfield><subfield code="b">data mining, inference, and prediction</subfield><subfield code="c">Trevor Hastie ; Robert Tibshirani ; Jerome Friedman</subfield></datafield><datafield tag="250" ind1=" " ind2=" "><subfield code="a">Corr. as of the 4. 
printing, [Nachdr.]</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="a">New York, NY [u.a.]</subfield><subfield code="b">Springer</subfield><subfield code="c">©2001</subfield></datafield><datafield tag="300" ind1=" " ind2=" "><subfield code="a">XVI, 533 S.</subfield><subfield code="b">Ill., graph. Darst.</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="b">n</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="b">nc</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="490" ind1="0" ind2=" "><subfield code="a">Springer series in statistics</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Apprentissage supervisé (Intelligence artificielle)</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Data mining</subfield><subfield code="2">gtt</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Machine-learning</subfield><subfield code="2">gtt</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Prognoses</subfield><subfield code="2">gtt</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Supervised learning (Machine learning)</subfield></datafield><datafield tag="650" ind1="0" ind2="7"><subfield code="a">Maschinelles Lernen</subfield><subfield code="0">(DE-588)4193754-5</subfield><subfield code="2">gnd</subfield><subfield code="9">rswk-swf</subfield></datafield><datafield tag="650" ind1="0" ind2="4"><subfield code="a">Statistik</subfield><subfield code="9">rswk-swf</subfield></datafield><datafield tag="650" ind1="0" ind2="7"><subfield code="a">Statistik</subfield><subfield code="0">(DE-588)4056995-0</subfield><subfield code="2">gnd</subfield><subfield code="9">rswk-swf</subfield></datafield><datafield tag="650" ind1="0" ind2="7"><subfield code="a">Datenanalyse</subfield><subfield code="0">(DE-588)4123037-1</subfield><subfield code="2">gnd</subfield><subfield code="9">rswk-swf</subfield></datafield><datafield tag="689" ind1="0" ind2="0"><subfield code="a">Statistik</subfield><subfield code="0">(DE-588)4056995-0</subfield><subfield code="D">s</subfield></datafield><datafield tag="689" ind1="0" ind2="1"><subfield code="a">Maschinelles Lernen</subfield><subfield code="0">(DE-588)4193754-5</subfield><subfield code="D">s</subfield></datafield><datafield tag="689" ind1="0" ind2=" "><subfield code="5">DE-604</subfield></datafield><datafield tag="689" ind1="1" ind2="0"><subfield code="a">Statistik</subfield><subfield code="A">s</subfield></datafield><datafield tag="689" ind1="1" ind2="1"><subfield code="a">Datenanalyse</subfield><subfield code="0">(DE-588)4123037-1</subfield><subfield code="D">s</subfield></datafield><datafield tag="689" ind1="1" ind2=" "><subfield code="8">1\p</subfield><subfield code="5">DE-604</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Tibshirani, Robert</subfield><subfield code="d">1956-</subfield><subfield code="e">Verfasser</subfield><subfield code="0">(DE-588)172417740</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Friedman, Jerome H.</subfield><subfield code="d">1939-</subfield><subfield code="e">Verfasser</subfield><subfield code="0">(DE-588)134071484</subfield><subfield 
code="4">aut</subfield></datafield><datafield tag="856" ind1="4" ind2="2"><subfield code="m">DNB Datenaustausch</subfield><subfield code="q">application/pdf</subfield><subfield code="u">http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=015718680&sequence=000001&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA</subfield><subfield code="3">Inhaltsverzeichnis</subfield></datafield><datafield tag="999" ind1=" " ind2=" "><subfield code="a">oai:aleph.bib-bvb.de:BVB01-015718680</subfield></datafield><datafield tag="883" ind1="1" ind2=" "><subfield code="8">1\p</subfield><subfield code="a">cgwrk</subfield><subfield code="d">20201028</subfield><subfield code="q">DE-101</subfield><subfield code="u">https://d-nb.info/provenance/plan#cgwrk</subfield></datafield></record></collection> |
id | DE-604.BV022511798 |
illustrated | Illustrated |
index_date | 2024-07-02T17:59:39Z |
indexdate | 2024-07-09T20:59:12Z |
institution | BVB |
isbn | 9780387952840 0387952845 |
language | English |
oai_aleph_id | oai:aleph.bib-bvb.de:BVB01-015718680 |
oclc_num | 51681104 |
open_access_boolean | |
owner | DE-703 DE-19 DE-BY-UBM DE-91G DE-BY-TUM DE-521 |
owner_facet | DE-703 DE-19 DE-BY-UBM DE-91G DE-BY-TUM DE-521 |
physical | XVI, 533 S. Ill., graph. Darst. |
publishDate | 2001 |
publishDateSearch | 2001 |
publishDateSort | 2001 |
publisher | Springer |
record_format | marc |
series2 | Springer series in statistics |
spelling | Hastie, Trevor 1953- Verfasser (DE-588)172128242 aut The elements of statistical learning data mining, inference, and prediction Trevor Hastie ; Robert Tibshirani ; Jerome Friedman Corr. as of the 4. printing, [Nachdr.] New York, NY [u.a.] Springer ©2001 XVI, 533 S. Ill., graph. Darst. txt rdacontent n rdamedia nc rdacarrier Springer series in statistics Apprentissage supervisé (Intelligence artificielle) Data mining gtt Machine-learning gtt Prognoses gtt Supervised learning (Machine learning) Maschinelles Lernen (DE-588)4193754-5 gnd rswk-swf Statistik rswk-swf Statistik (DE-588)4056995-0 gnd rswk-swf Datenanalyse (DE-588)4123037-1 gnd rswk-swf Statistik (DE-588)4056995-0 s Maschinelles Lernen (DE-588)4193754-5 s DE-604 Statistik s Datenanalyse (DE-588)4123037-1 s 1\p DE-604 Tibshirani, Robert 1956- Verfasser (DE-588)172417740 aut Friedman, Jerome H. 1939- Verfasser (DE-588)134071484 aut DNB Datenaustausch application/pdf http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=015718680&sequence=000001&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA Inhaltsverzeichnis 1\p cgwrk 20201028 DE-101 https://d-nb.info/provenance/plan#cgwrk |
spellingShingle | Hastie, Trevor 1953- Tibshirani, Robert 1956- Friedman, Jerome H. 1939- The elements of statistical learning data mining, inference, and prediction Apprentissage supervisé (Intelligence artificielle) Data mining gtt Machine-learning gtt Prognoses gtt Supervised learning (Machine learning) Maschinelles Lernen (DE-588)4193754-5 gnd Statistik Statistik (DE-588)4056995-0 gnd Datenanalyse (DE-588)4123037-1 gnd |
subject_GND | (DE-588)4193754-5 (DE-588)4056995-0 (DE-588)4123037-1 |
title | The elements of statistical learning data mining, inference, and prediction |
title_auth | The elements of statistical learning data mining, inference, and prediction |
title_exact_search | The elements of statistical learning data mining, inference, and prediction |
title_exact_search_txtP | The elements of statistical learning data mining, inference, and prediction |
title_full | The elements of statistical learning data mining, inference, and prediction Trevor Hastie ; Robert Tibshirani ; Jerome Friedman |
title_fullStr | The elements of statistical learning data mining, inference, and prediction Trevor Hastie ; Robert Tibshirani ; Jerome Friedman |
title_full_unstemmed | The elements of statistical learning data mining, inference, and prediction Trevor Hastie ; Robert Tibshirani ; Jerome Friedman |
title_short | The elements of statistical learning |
title_sort | the elements of statistical learning data mining inference and prediction |
title_sub | data mining, inference, and prediction |
topic | Apprentissage supervisé (Intelligence artificielle) Data mining gtt Machine-learning gtt Prognoses gtt Supervised learning (Machine learning) Maschinelles Lernen (DE-588)4193754-5 gnd Statistik Statistik (DE-588)4056995-0 gnd Datenanalyse (DE-588)4123037-1 gnd |
topic_facet | Apprentissage supervisé (Intelligence artificielle) Data mining Machine-learning Prognoses Supervised learning (Machine learning) Maschinelles Lernen Statistik Datenanalyse |
url | http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=015718680&sequence=000001&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA |
work_keys_str_mv | AT hastietrevor theelementsofstatisticallearningdatamininginferenceandprediction AT tibshiranirobert theelementsofstatisticallearningdatamininginferenceandprediction AT friedmanjeromeh theelementsofstatisticallearningdatamininginferenceandprediction |