An introduction to statistical learning: with applications in R
Saved in:
Main Authors: | James, Gareth; Witten, Daniela; Hastie, Trevor; Tibshirani, Robert
---|---
Format: | Book
Language: | English
Published: | New York, NY, USA : Springer, [2021]
Edition: | Second edition
Series: | Springer texts in statistics
Subjects: | Statistics; Machine learning; R (program)
Online Access: | Table of contents
Summary: | An Introduction to Statistical Learning provides an accessible overview of the field of statistical learning, an essential toolset for making sense of the vast and complex data sets that have emerged in fields ranging from biology to finance to marketing to astrophysics in the past twenty years. This book presents some of the most important modeling and prediction techniques, along with relevant applications. Topics include linear regression, classification, resampling methods, shrinkage approaches, tree-based methods, support vector machines, clustering, deep learning, survival analysis, multiple testing, and more. Color graphics and real-world examples are used to illustrate the methods presented. Since the goal of this textbook is to facilitate the use of these statistical learning techniques by practitioners in science, industry, and other fields, each chapter contains a tutorial on implementing the analyses and methods presented in R, an extremely popular open source statistical software platform. (An illustrative R sketch follows below.)
Description: | xv, 607 pages : illustrations, diagrams
ISBN: | 9781071614174; 9781071614204; 1071614177
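The summary above notes that each chapter closes with a hands-on R lab. The following minimal sketch is not taken from the book: it uses R's built-in mtcars data rather than the datasets shipped with the book's companion ISLR2 package, and is meant only to show the flavor of such a lab, in the spirit of the Chapter 3 linear regression lab:

```r
# Illustrative sketch only: the book's labs use its own companion
# datasets; the built-in mtcars data is substituted here.
fit <- lm(mpg ~ wt + hp, data = mtcars)  # multiple linear regression
summary(fit)                             # coefficients, std. errors, R^2

# Scatter plot with a simple regression line, as the labs do for
# their own data.
plot(mtcars$wt, mtcars$mpg,
     xlab = "Weight (1000 lbs)", ylab = "Miles per gallon")
abline(lm(mpg ~ wt, data = mtcars), col = "red")
```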
Internal format
MARC
LEADER | 00000nam a2200000 c 4500 | ||
---|---|---|---|
001 | BV047210622 | ||
003 | DE-604 | ||
005 | 20240110 | ||
007 | t | ||
008 | 210324s2021 a||| |||| 00||| eng d | ||
020 | |a 9781071614174 |c hbk. : ca. EUR 90.94 (DE) |9 978-1-0716-1417-4 | ||
020 | |a 9781071614204 |q pbk. |9 978-1-0716-1420-4 | ||
020 | |a 1071614177 |9 1-0716-1417-7 | ||
035 | |a (OCoLC)1257808451 | ||
035 | |a (DE-599)HBZHT020705326 | ||
040 | |a DE-604 |b ger |e rda | ||
041 | 0 | |a eng | |
049 | |a DE-473 |a DE-N2 |a DE-188 |a DE-83 |a DE-521 |a DE-860 |a DE-573 |a DE-634 |a DE-1028 |a DE-824 |a DE-20 |a DE-19 |a DE-1050 |a DE-91G |a DE-355 |a DE-1043 |a DE-11 |a DE-703 |a DE-Aug4 |a DE-945 | ||
050 | 0 | |a QA276 .I585 | |
050 | 0 | |a QA276 | |
082 | 0 | |a 519.5 | |
084 | |a SK 840 |0 (DE-625)143261: |2 rvk | ||
084 | |a SK 830 |0 (DE-625)143259: |2 rvk | ||
084 | |a QH 231 |0 (DE-625)141546: |2 rvk | ||
084 | |a ST 601 |0 (DE-625)143682: |2 rvk | ||
084 | |a CM 4000 |0 (DE-625)18951: |2 rvk | ||
084 | |a MR 2100 |0 (DE-625)123488: |2 rvk | ||
084 | |a ST 250 |0 (DE-625)143626: |2 rvk | ||
084 | |a 62-04 |2 msc | ||
084 | |a 62H30 |2 msc | ||
084 | |a DAT 307 |2 stub | ||
084 | |a MAT 620 |2 stub | ||
084 | |a 68T05 |2 msc | ||
100 | 1 | |a James, Gareth |e Verfasser |0 (DE-588)1038457327 |4 aut | |
245 | 1 | 0 | |a An introduction to statistical learning |b with applications in R |c Gareth James, Daniela Witten, Trevor Hastie, Robert Tibshirani |
250 | |a Second edition | ||
259 | |a 2 | ||
264 | 1 | |a New York, NY, USA |b Springer |c [2021] | |
264 | 4 | |c © 2021 | |
300 | |a xv, 607 Seiten |b Illustrationen, Diagramme | ||
336 | |b txt |2 rdacontent | ||
337 | |b n |2 rdamedia | ||
338 | |b nc |2 rdacarrier | ||
490 | 0 | |a Springer texts in statistics | |
520 | 3 | |a An Introduction to Statistical Learning provides an accessible overview of the field of statistical learning, an essential toolset for making sense of the vast and complex data sets that have emerged in fields ranging from biology to finance to marketing to astrophysics in the past twenty years. This book presents some of the most important modeling and prediction techniques, along with relevant applications. Topics include linear regression, classification, resampling methods, shrinkage approaches, tree-based methods, support vector machines, clustering, deep learning, survival analysis, multiple testing, and more. Color graphics and real-world examples are used to illustrate the methods presented. Since the goal of this textbook is to facilitate the use of these statistical learning techniques by practitioners in science, industry, and other fields, each chapter contains a tutorial on implementing the analyses and methods presented in R, an extremely popular open source statistical software platform. | |
650 | 0 | 7 | |a R |g Programm |0 (DE-588)4705956-4 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Maschinelles Lernen |0 (DE-588)4193754-5 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Statistik |0 (DE-588)4056995-0 |2 gnd |9 rswk-swf |
689 | 0 | 0 | |a Statistik |0 (DE-588)4056995-0 |D s |
689 | 0 | 1 | |a R |g Programm |0 (DE-588)4705956-4 |D s |
689 | 0 | |5 DE-604 | |
689 | 1 | 0 | |a Statistik |0 (DE-588)4056995-0 |D s |
689 | 1 | 1 | |a Maschinelles Lernen |0 (DE-588)4193754-5 |D s |
689 | 1 | |5 DE-604 | |
700 | 1 | |a Witten, Daniela |e Verfasser |0 (DE-588)108120849X |4 aut | |
700 | 1 | |a Hastie, Trevor |d 1953- |e Verfasser |0 (DE-588)172128242 |4 aut | |
700 | 1 | |a Tibshirani, Robert |d 1956- |e Verfasser |0 (DE-588)172417740 |4 aut | |
776 | 0 | 8 | |i Erscheint auch als |n Online-Ausgabe |z 978-1-4614-7138-7 |
776 | 0 | 8 | |i Erscheint auch als |n Online-Ausgabe |z 978-1-0716-1418-1 |w (DE-604)BV047420621 |
856 | 4 | 2 | |m Digitalisierung UB Regensburg - ADAM Catalogue Enrichment |q application/pdf |u http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=032615432&sequence=000001&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA |3 Inhaltsverzeichnis |
943 | 1 | |a oai:aleph.bib-bvb.de:BVB01-032615432 |
Record in the search index
_version_ | 1805087622603735040 |
---|---|
adam_text |
Contents

Preface vii

1 Introduction 1

2 Statistical Learning 15
2.1 What Is Statistical Learning? 15
2.1.1 Why Estimate f? 17
2.1.2 How Do We Estimate f? 21
2.1.3 The Trade-Off Between Prediction Accuracy and Model Interpretability 24
2.1.4 Supervised Versus Unsupervised Learning 26
2.1.5 Regression Versus Classification Problems 28
2.2 Assessing Model Accuracy 29
2.2.1 Measuring the Quality of Fit 29
2.2.2 The Bias-Variance Trade-Off 33
2.2.3 The Classification Setting 37
2.3 Lab: Introduction to R 42
2.3.1 Basic Commands 43
2.3.2 Graphics 45
2.3.3 Indexing Data 47
2.3.4 Loading Data 48
2.3.5 Additional Graphical and Numerical Summaries 50
2.4 Exercises 52

3 Linear Regression 59
3.1 Simple Linear Regression 60
3.1.1 Estimating the Coefficients 61
3.1.2 Assessing the Accuracy of the Coefficient Estimates 63
3.1.3 Assessing the Accuracy of the Model 68
3.2 Multiple Linear Regression 71
3.2.1 Estimating the Regression Coefficients 72
3.2.2 Some Important Questions 75
3.3 Other Considerations in the Regression Model 83
3.3.1 Qualitative Predictors 83
3.3.2 Extensions of the Linear Model 87
3.3.3 Potential Problems 92
3.4 The Marketing Plan 103
3.5 Comparison of Linear Regression with K-Nearest Neighbors 105
3.6 Lab: Linear Regression 110
3.6.1 Libraries 110
3.6.2 Simple Linear Regression 111
3.6.3 Multiple Linear Regression 114
3.6.4 Interaction Terms 116
3.6.5 Non-linear Transformations of the Predictors 116
3.6.6 Qualitative Predictors 119
3.6.7 Writing Functions 120
3.7 Exercises 121

4 Classification 129
4.1 An Overview of Classification 130
4.2 Why Not Linear Regression? 131
4.3 Logistic Regression 133
4.3.1 The Logistic Model 133
4.3.2 Estimating the Regression Coefficients 135
4.3.3 Making Predictions 136
4.3.4 Multiple Logistic Regression 137
4.3.5 Multinomial Logistic Regression 140
4.4 Generative Models for Classification 141
4.4.1 Linear Discriminant Analysis for p = 1 142
4.4.2 Linear Discriminant Analysis for p > 1 145
4.4.3 Quadratic Discriminant Analysis 152
4.4.4 Naive Bayes 153
4.5 A Comparison of Classification Methods 158
4.5.1 An Analytical Comparison 158
4.5.2 An Empirical Comparison 161
4.6 Generalized Linear Models 164
4.6.1 Linear Regression on the Bikeshare Data 164
4.6.2 Poisson Regression on the Bikeshare Data 167
4.6.3 Generalized Linear Models in Greater Generality 170
4.7 Lab: Classification Methods 171
4.7.1 The Stock Market Data 171
4.7.2 Logistic Regression 172
4.7.3 Linear Discriminant Analysis 177
4.7.4 Quadratic Discriminant Analysis 179
4.7.5 Naive Bayes 180
4.7.6 K-Nearest Neighbors 181
4.7.7 Poisson Regression 185
4.8 Exercises 189

5 Resampling Methods 197
5.1 Cross-Validation 198
5.1.1 The Validation Set Approach 198
5.1.2 Leave-One-Out Cross-Validation 200
5.1.3 k-Fold Cross-Validation 203
5.1.4 Bias-Variance Trade-Off for k-Fold Cross-Validation 205
5.1.5 Cross-Validation on Classification Problems 206
5.2 The Bootstrap 209
5.3 Lab: Cross-Validation and the Bootstrap 212
5.3.1 The Validation Set Approach 213
5.3.2 Leave-One-Out Cross-Validation 214
5.3.3 k-Fold Cross-Validation 215
5.3.4 The Bootstrap 216
5.4 Exercises 219

6 Linear Model Selection and Regularization 225
6.1 Subset Selection 227
6.1.1 Best Subset Selection 227
6.1.2 Stepwise Selection 229
6.1.3 Choosing the Optimal Model 232
6.2 Shrinkage Methods 237
6.2.1 Ridge Regression 237
6.2.2 The Lasso 241
6.2.3 Selecting the Tuning Parameter 250
6.3 Dimension Reduction Methods 251
6.3.1 Principal Components Regression 252
6.3.2 Partial Least Squares 259
6.4 Considerations in High Dimensions 261
6.4.1 High-Dimensional Data 261
6.4.2 What Goes Wrong in High Dimensions? 262
6.4.3 Regression in High Dimensions 264
6.4.4 Interpreting Results in High Dimensions 266
6.5 Lab: Linear Models and Regularization Methods 267
6.5.1 Subset Selection Methods 267
6.5.2 Ridge Regression and the Lasso 274
6.5.3 PCR and PLS Regression 279
6.6 Exercises 282

7 Moving Beyond Linearity 289
7.1 Polynomial Regression 290
7.2 Step Functions 292
7.3 Basis Functions 294
7.4 Regression Splines 295
7.4.1 Piecewise Polynomials 295
7.4.2 Constraints and Splines 295
7.4.3 The Spline Basis Representation 297
7.4.4 Choosing the Number and Locations of the Knots 298
7.4.5 Comparison to Polynomial Regression 300
7.5 Smoothing Splines 301
7.5.1 An Overview of Smoothing Splines 301
7.5.2 Choosing the Smoothing Parameter λ 302
7.6 Local Regression 304
7.7 Generalized Additive Models 306
7.7.1 GAMs for Regression Problems 307
7.7.2 GAMs for Classification Problems 310
7.8 Lab: Non-linear Modeling 311
7.8.1 Polynomial Regression and Step Functions 312
7.8.2 Splines 317
7.8.3 GAMs 318
7.9 Exercises 321

8 Tree-Based Methods 327
8.1 The Basics of Decision Trees 327
8.1.1 Regression Trees 328
8.1.2 Classification Trees 335
8.1.3 Trees Versus Linear Models 338
8.1.4 Advantages and Disadvantages of Trees 339
8.2 Bagging, Random Forests, Boosting, and Bayesian Additive Regression Trees 340
8.2.1 Bagging 340
8.2.2 Random Forests 343
8.2.3 Boosting 345
8.2.4 Bayesian Additive Regression Trees 348
8.2.5 Summary of Tree Ensemble Methods 351
8.3 Lab: Decision Trees 353
8.3.1 Fitting Classification Trees 353
8.3.2 Fitting Regression Trees 356
8.3.3 Bagging and Random Forests 357
8.3.4 Boosting 359
8.3.5 Bayesian Additive Regression Trees 360
8.4 Exercises 361

9 Support Vector Machines 367
9.1 Maximal Margin Classifier 368
9.1.1 What Is a Hyperplane? 368
9.1.2 Classification Using a Separating Hyperplane 369
9.1.3 The Maximal Margin Classifier 371
9.1.4 Construction of the Maximal Margin Classifier 372
9.1.5 The Non-separable Case 373
9.2 Support Vector Classifiers 373
9.2.1 Overview of the Support Vector Classifier 373
9.2.2 Details of the Support Vector Classifier 375
9.3 Support Vector Machines 379
9.3.1 Classification with Non-Linear Decision Boundaries 379
9.3.2 The Support Vector Machine 380
9.3.3 An Application to the Heart Disease Data 383
9.4 SVMs with More than Two Classes 385
9.4.1 One-Versus-One Classification 385
9.4.2 One-Versus-All Classification 385
9.5 Relationship to Logistic Regression 386
9.6 Lab: Support Vector Machines 388
9.6.1 Support Vector Classifier 389
9.6.2 Support Vector Machine 392
9.6.3 ROC Curves 394
9.6.4 SVM with Multiple Classes 396
9.6.5 Application to Gene Expression Data 396
9.7 Exercises 398

10 Deep Learning 403
10.1 Single Layer Neural Networks 404
10.2 Multilayer Neural Networks 407
10.3 Convolutional Neural Networks 411
10.3.1 Convolution Layers 412
10.3.2 Pooling Layers 415
10.3.3 Architecture of a Convolutional Neural Network 415
10.3.4 Data Augmentation 417
10.3.5 Results Using a Pretrained Classifier 417
10.4 Document Classification 419
10.5 Recurrent Neural Networks 421
10.5.1 Sequential Models for Document Classification 424
10.5.2 Time Series Forecasting 427
10.5.3 Summary of RNNs 431
10.6 When to Use Deep Learning 432
10.7 Fitting a Neural Network 434
10.7.1 Backpropagation 435
10.7.2 Regularization and Stochastic Gradient Descent 436
10.7.3 Dropout Learning 438
10.7.4 Network Tuning 438
10.8 Interpolation and Double Descent 439
10.9 Lab: Deep Learning 443
10.9.1 A Single Layer Network on the Hitters Data 443
10.9.2 A Multilayer Network on the MNIST Digit Data 445
10.9.3 Convolutional Neural Networks 448
10.9.4 Using Pretrained CNN Models 451
10.9.5 IMDb Document Classification 452
10.9.6 Recurrent Neural Networks 454
10.10 Exercises 458

11 Survival Analysis and Censored Data 461
11.1 Survival and Censoring Times 462
11.2 A Closer Look at Censoring 463
11.3 The Kaplan-Meier Survival Curve 464
11.4 The Log-Rank Test 466
11.5 Regression Models With a Survival Response 469
11.5.1 The Hazard Function 469
11.5.2 Proportional Hazards 471
11.5.3 Example: Brain Cancer Data 475
11.5.4 Example: Publication Data 475
11.6 Shrinkage for the Cox Model 478
11.7 Additional Topics 480
11.7.1 Area Under the Curve for Survival Analysis 480
11.7.2 Choice of Time Scale 481
11.7.3 Time-Dependent Covariates 481
11.7.4 Checking the Proportional Hazards Assumption 482
11.7.5 Survival Trees 482
11.8 Lab: Survival Analysis 483
11.8.1 Brain Cancer Data 483
11.8.2 Publication Data 486
11.8.3 Call Center Data 487
11.9 Exercises 490

12 Unsupervised Learning 497
12.1 The Challenge of Unsupervised Learning 497
12.2 Principal Components Analysis 498
12.2.1 What Are Principal Components? 499
12.2.2 Another Interpretation of Principal Components 503
12.2.3 The Proportion of Variance Explained 505
12.2.4 More on PCA 507
12.2.5 Other Uses for Principal Components 510
12.3 Missing Values and Matrix Completion 510
12.4 Clustering Methods 516
12.4.1 K-Means Clustering 517
12.4.2 Hierarchical Clustering 521
12.4.3 Practical Issues in Clustering 530
12.5 Lab: Unsupervised Learning 532
12.5.1 Principal Components Analysis 532
12.5.2 Matrix Completion 535
12.5.3 Clustering 538
12.5.4 NCI60 Data Example 542
12.6 Exercises 548

13 Multiple Testing 553
13.1 A Quick Review of Hypothesis Testing 554
13.1.1 Testing a Hypothesis 555
13.1.2 Type I and Type II Errors 559
13.2 The Challenge of Multiple Testing 560
13.3 The Family-Wise Error Rate 561
13.3.1 What is the Family-Wise Error Rate? 562
13.3.2 Approaches to Control the Family-Wise Error Rate 564
13.3.3 Trade-Off Between the FWER and Power 570
13.4 The False Discovery Rate 571
13.4.1 Intuition for the False Discovery Rate 571
13.4.2 The Benjamini-Hochberg Procedure 573
13.5 A Re-Sampling Approach to p-Values and False Discovery Rates 575
13.5.1 A Re-Sampling Approach to the p-Value 576
13.5.2 A Re-Sampling Approach to the False Discovery Rate 578
13.5.3 When Are Re-Sampling Approaches Useful? 581
13.6 Lab: Multiple Testing 582
13.6.1 Review of Hypothesis Tests 582
13.6.2 The Family-Wise Error Rate 583
13.6.3 The False Discovery Rate 586
13.6.4 A Re-Sampling Approach 588
13.7 Exercises 591

Index 597 |
any_adam_object | 1 |
any_adam_object_boolean | 1 |
author | James, Gareth Witten, Daniela Hastie, Trevor 1953- Tibshirani, Robert 1956- |
author_GND | (DE-588)1038457327 (DE-588)108120849X (DE-588)172128242 (DE-588)172417740 |
author_facet | James, Gareth Witten, Daniela Hastie, Trevor 1953- Tibshirani, Robert 1956- |
author_role | aut aut aut aut |
author_sort | James, Gareth |
author_variant | g j gj d w dw t h th r t rt |
building | Verbundindex |
bvnumber | BV047210622 |
callnumber-first | Q - Science |
callnumber-label | QA276 |
callnumber-raw | QA276 .I585 QA276 |
callnumber-search | QA276 .I585 QA276 |
callnumber-sort | QA 3276 I585 |
callnumber-subject | QA - Mathematics |
classification_rvk | SK 840 SK 830 QH 231 ST 601 CM 4000 MR 2100 ST 250 |
classification_tum | DAT 307 MAT 620 |
ctrlnum | (OCoLC)1257808451 (DE-599)HBZHT020705326 |
dewey-full | 519.5 |
dewey-hundreds | 500 - Natural sciences and mathematics |
dewey-ones | 519 - Probabilities and applied mathematics |
dewey-raw | 519.5 |
dewey-search | 519.5 |
dewey-sort | 3519.5 |
dewey-tens | 510 - Mathematics |
discipline | Informatik Soziologie Psychologie Mathematik Wirtschaftswissenschaften |
discipline_str_mv | Informatik Soziologie Psychologie Mathematik Wirtschaftswissenschaften |
edition | Second edition |
format | Book |
id | DE-604.BV047210622 |
illustrated | Illustrated |
index_date | 2024-07-03T16:54:15Z |
indexdate | 2024-07-20T08:55:04Z |
institution | BVB |
isbn | 9781071614174 9781071614204 1071614177 |
language | English |
oai_aleph_id | oai:aleph.bib-bvb.de:BVB01-032615432 |
oclc_num | 1257808451 |
open_access_boolean | |
owner | DE-473 DE-BY-UBG DE-N2 DE-188 DE-83 DE-521 DE-860 DE-573 DE-634 DE-1028 DE-824 DE-20 DE-19 DE-BY-UBM DE-1050 DE-91G DE-BY-TUM DE-355 DE-BY-UBR DE-1043 DE-11 DE-703 DE-Aug4 DE-945 |
owner_facet | DE-473 DE-BY-UBG DE-N2 DE-188 DE-83 DE-521 DE-860 DE-573 DE-634 DE-1028 DE-824 DE-20 DE-19 DE-BY-UBM DE-1050 DE-91G DE-BY-TUM DE-355 DE-BY-UBR DE-1043 DE-11 DE-703 DE-Aug4 DE-945 |
physical | xv, 607 Seiten Illustrationen, Diagramme |
publishDate | 2021 |
publishDateSearch | 2021 |
publishDateSort | 2021 |
publisher | Springer |
record_format | marc |
series2 | Springer texts in statistics |
subject_GND | (DE-588)4705956-4 (DE-588)4193754-5 (DE-588)4056995-0 |
title | An introduction to statistical learning with applications in R |
title_auth | An introduction to statistical learning with applications in R |
title_exact_search | An introduction to statistical learning with applications in R |
title_exact_search_txtP | An introduction to statistical learning with applications in R |
title_full | An introduction to statistical learning with applications in R Gareth James, Daniela Witten, Trevor Hastie, Robert Tibshirani |
title_fullStr | An introduction to statistical learning with applications in R Gareth James, Daniela Witten, Trevor Hastie, Robert Tibshirani |
title_full_unstemmed | An introduction to statistical learning with applications in R Gareth James, Daniela Witten, Trevor Hastie, Robert Tibshirani |
title_short | An introduction to statistical learning |
title_sort | an introduction to statistical learning with applications in r |
title_sub | with applications in R |
topic | R Programm (DE-588)4705956-4 gnd Maschinelles Lernen (DE-588)4193754-5 gnd Statistik (DE-588)4056995-0 gnd |
topic_facet | R Programm Maschinelles Lernen Statistik |
url | http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=032615432&sequence=000001&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA |
work_keys_str_mv | AT jamesgareth anintroductiontostatisticallearningwithapplicationsinr AT wittendaniela anintroductiontostatisticallearningwithapplicationsinr AT hastietrevor anintroductiontostatisticallearningwithapplicationsinr AT tibshiranirobert anintroductiontostatisticallearningwithapplicationsinr |