An introduction to machine learning

Main Author: | Kubat, Miroslav 1958- |
---|---|
Format: | Book |
Language: | English |
Published: | Cham: Springer, [2021] |
Edition: | Third edition |
Subjects: | Informatik; Maschinelles Lernen; Künstliche Intelligenz; Informationssystem; Computersimulation |
Online Access: | Table of contents |
Physical Description: | xviii, 458 pages, illustrations, diagrams |
ISBN: | 9783030819347 |

Internal format (MARC)
(Fields are shown as tag, indicators, subfields; "_" marks a blank indicator.)

LEADER  00000nam a2200000 c 4500
001     BV047515158
003     DE-604
005     20220224
007     t
008     211018s2021 a||| |||| 00||| eng d
020 __  |a 9783030819347 |9 978-3-030-81934-7
035 __  |a (OCoLC)1286864403
035 __  |a (DE-599)BVBBV047515158
040 __  |a DE-604 |b ger |e rda
041 0_  |a eng
049 __  |a DE-355 |a DE-11 |a DE-945
084 __  |a ST 304 |0 (DE-625)143653: |2 rvk
084 __  |a ST 300 |0 (DE-625)143650: |2 rvk
084 __  |a DAT 708f |2 stub
100 1_  |a Kubat, Miroslav |d 1958- |e Verfasser |0 (DE-588)1140076752 |4 aut
245 10  |a An introduction to machine learning |c Miroslav Kubat
250 __  |a Third edition
264 _1  |a Cham |b Springer |c [2021]
264 _4  |c © 2021
300 __  |a xviii, 458 Seiten |b Illustrationen, Diagramme
336 __  |b txt |2 rdacontent
337 __  |b n |2 rdamedia
338 __  |b nc |2 rdacarrier
650 07  |a Informatik |0 (DE-588)4026894-9 |2 gnd |9 rswk-swf
650 07  |a Maschinelles Lernen |0 (DE-588)4193754-5 |2 gnd |9 rswk-swf
650 07  |a Künstliche Intelligenz |0 (DE-588)4033447-8 |2 gnd |9 rswk-swf
650 07  |a Informationssystem |0 (DE-588)4072806-7 |2 gnd |9 rswk-swf
650 07  |a Computersimulation |0 (DE-588)4148259-1 |2 gnd |9 rswk-swf
689 00  |a Informatik |0 (DE-588)4026894-9 |D s
689 01  |a Informationssystem |0 (DE-588)4072806-7 |D s
689 02  |a Künstliche Intelligenz |0 (DE-588)4033447-8 |D s
689 03  |a Computersimulation |0 (DE-588)4148259-1 |D s
689 04  |a Maschinelles Lernen |0 (DE-588)4193754-5 |D s
689 0_  |8 1\p |5 DE-604
776 08  |i Erscheint auch als |n Online-Ausgabe |z 978-3-030-81935-4
780 00  |i Vorangegangen ist |z 978-3-319-20009-5 |b First edition |d 2015 |w (DE-604)BV042755689
780 00  |i Vorangegangen ist |z 978-3-319-63912-3 |b Second edition |d 2017 |w (DE-604)BV044451102
856 42  |m Digitalisierung UB Regensburg - ADAM Catalogue Enrichment |q application/pdf |u http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=032915941&sequence=000001&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA |3 Inhaltsverzeichnis
999 __  |a oai:aleph.bib-bvb.de:BVB01-032915941
883 1_  |8 1\p |a cgwrk |d 20201028 |q DE-101 |u https://d-nb.info/provenance/plan#cgwrk
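The MARC fields above can also be read programmatically. Below is a minimal sketch, assuming the pymarc library is installed and that the MARCXML from the fullrecord index field further down has been saved locally under the hypothetical file name BV047515158.xml; the catalogue itself does not prescribe any tooling.

```python
# Minimal sketch: reading this bibliographic record with pymarc.
# Assumption: the MARCXML (see the "fullrecord" field below) was saved as "BV047515158.xml".
from pymarc import parse_xml_to_array

records = parse_xml_to_array("BV047515158.xml")
record = records[0]

# 245 $a: title proper, 100 $a: main author, 650 $a: GND subject headings
title = record.get_fields("245")[0].get_subfields("a")[0]
author = record.get_fields("100")[0].get_subfields("a")[0]
subjects = [f.get_subfields("a")[0] for f in record.get_fields("650")]

print("Title:   ", title)
print("Author:  ", author)
print("Subjects:", "; ".join(subjects))

# 856 $u carries the link to the digitized table of contents (Inhaltsverzeichnis)
for field in record.get_fields("856"):
    for url in field.get_subfields("u"):
        print("TOC URL: ", url)
```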
Record in the search index
_version_ | 1804182856806694912 |
---|---|
adam_text |
Contents

1 Ambitions and Goals of Machine Learning .... 1
  1.1 Training Sets and Classifiers .... 1
  1.2 Expected Benefits of the Induced Classifier .... 4
  1.3 Problems with Available Data .... 6
  1.4 Many Roads to Concept Learning .... 9
  1.5 Other Ambitions of Machine Learning .... 12
  1.6 Summary and Historical Remarks .... 13
  1.7 Solidify Your Knowledge .... 14

2 Probabilities: Bayesian Classifiers .... 17
  2.1 The Single-Attribute Case .... 17
  2.2 Vectors of Discrete Attributes .... 21
  2.3 Rare Events: An Expert's Intuition .... 25
  2.4 Continuous Attributes: Probability Density Functions .... 28
  2.5 Gaussian "Bell" Function: A Standard pdf .... 31
  2.6 Approximating PDFs with Sets of Gaussian Functions .... 33
  2.7 Summary and Historical Remarks .... 36
  2.8 Solidify Your Knowledge .... 38

3 Similarities: Nearest-Neighbor Classifiers .... 41
  3.1 The k-Nearest-Neighbor Rule .... 41
  3.2 Measuring Similarity .... 44
  3.3 Irrelevant Attributes and Scaling Problems .... 47
  3.4 Performance Considerations .... 50
  3.5 Weighted Nearest Neighbors .... 53
  3.6 Removing Dangerous Examples .... 55
  3.7 Removing Redundant Examples .... 57
  3.8 Limitations of Attribute-Vector Similarity .... 60
  3.9 Summary and Historical Remarks .... 61
  3.10 Solidify Your Knowledge .... 62

4 Inter-Class Boundaries: Linear and Polynomial Classifiers .... 65
  4.1 Essence .... 65
  4.2 Additive Rule: Perceptron Learning .... 69
  4.3 Multiplicative Rule: WINNOW .... 73
  4.4 Domains with More Than Two Classes .... 77
  4.5 Polynomial Classifiers .... 79
  4.6 Specific Aspects of Polynomial Classifiers .... 82
  4.7 Support Vector Machines .... 84
  4.8 Summary and Historical Remarks .... 87
  4.9 Solidify Your Knowledge .... 88

5 Decision Trees .... 91
  5.1 Decision Trees as Classifiers .... 91
  5.2 Induction of Decision Trees .... 94
  5.3 How Much Information in an Attribute? .... 97
  5.4 Binary Split of a Numeric Attribute .... 101
  5.5 Pruning .... 103
  5.6 Decision Tree Can Be Converted to Rules .... 108
  5.7 Why Decision Trees? .... 110
  5.8 Summary and Historical Remarks .... 112
  5.9 Solidify Your Knowledge .... 113

6 Artificial Neural Networks .... 117
  6.1 Multilayer Perceptrons .... 117
  6.2 Neural Network's Error .... 121
  6.3 Backpropagation of Error .... 123
  6.4 Practical Aspects of MLP's .... 127
  6.5 Big Networks or Small? .... 131
  6.6 Modern Approaches to MLP's .... 135
  6.7 Radial Basis Function Networks .... 137
  6.8 Summary and Historical Remarks .... 139
  6.9 Solidify Your Knowledge .... 141

7 Computational Learning Theory .... 145
  7.1 PAC Learning .... 145
  7.2 Examples of PAC-Learnability .... 148
  7.3 Practical and Theoretical Consequences .... 151
  7.4 VC-Dimension and Learnability .... 153
  7.5 Summary and Historical Remarks .... 156
  7.6 Exercises and Thought Experiments .... 158

8 Experience from Historical Applications .... 161
  8.1 Medical Diagnosis .... 161
  8.2 Character Recognition .... 163
  8.3 Oil-Spill Recognition .... 167
  8.4 Sleep Classification .... 170
  8.5 Brain-Computer Interface .... 172
  8.6 Text Classification .... 175
  8.7 Summary and Historical Remarks .... 177
  8.8 Solidify Your Knowledge .... 178

9 Voting Assemblies and Boosting .... 181
  9.1 Bagging .... 181
  9.2 Schapire's Boosting .... 184
  9.3 Adaboost: Practical Version of Boosting .... 187
  9.4 Variations on the Boosting Theme .... 191
  9.5 Cost-Saving Benefits of Boosting .... 193
  9.6 Summary and Historical Remarks .... 195
  9.7 Solidify Your Knowledge .... 195

10 Classifiers in the Form of Rule-Sets .... 199
  10.1 Class Described by Rules .... 199
  10.2 Inducing Rule-Sets by Sequential Covering .... 202
  10.3 Predicates and Recursion .... 204
  10.4 More Advanced Search Operators .... 207
  10.5 Summary and Historical Remarks .... 208
  10.6 Solidify Your Knowledge .... 209

11 Practical Issues to Know About .... 211
  11.1 Learner's Bias .... 211
  11.2 Imbalanced Training Sets .... 214
  11.3 Dealing with Imbalanced Classes .... 217
  11.4 Context-Dependent Domains .... 219
  11.5 Unknown Attribute Values .... 222
  11.6 Attribute Selection .... 224
  11.7 Miscellaneous .... 228
  11.8 Summary and Historical Remarks .... 230
  11.9 Solidify Your Knowledge .... 231

12 Performance Evaluation .... 233
  12.1 Basic Performance Criteria .... 233
  12.2 Precision and Recall .... 236
  12.3 Other Ways to Measure Performance .... 240
  12.4 Learning Curves and Computational Costs .... 244
  12.5 Methodologies of Experimental Evaluation .... 246
  12.6 Experimental Blunders to Avoid .... 249
  12.7 Summary and Historical Remarks .... 250
  12.8 Solidify Your Knowledge .... 252

13 Statistical Significance .... 255
  13.1 Sampling a Population .... 255
  13.2 Benefiting from the Normal Distribution .... 259
  13.3 Confidence Intervals .... 262
  13.4 Statistical Evaluation of a Classifier .... 264
  13.5 Another Use of Statistical Evaluation .... 267
  13.6 Comparing Machine-Learning Techniques .... 268
  13.7 Summary and Historical Remarks .... 271
  13.8 Solidify Your Knowledge .... 272

14 Induction in Multi-label Domains .... 275
  14.1 Classical Paradigms and Multi-label Data .... 275
  14.2 Principle of Binary Relevance .... 278
  14.3 Classifier Chains .... 280
  14.4 Another Possibility: Stacking .... 282
  14.5 Note on Hierarchically Ordered Classes .... 284
  14.6 Aggregating the Classes .... 287
  14.7 Criteria for Performance Evaluation .... 289
  14.8 Summary and Historical Remarks .... 292
  14.9 Solidify Your Knowledge .... 293

15 Unsupervised Learning .... 297
  15.1 Cluster Analysis .... 297
  15.2 Simple Clustering Algorithm: k-Means .... 301
  15.3 Advanced Versions of k-Means .... 305
  15.4 Hierarchical Aggregation .... 307
  15.5 Self-Organizing Feature Maps: Introduction .... 310
  15.6 Some Details of SOFM .... 312
  15.7 Why Feature Maps? .... 314
  15.8 Auto-Encoding .... 317
  15.9 Why Auto-Encoding? .... 320
  15.10 Summary and Historical Remarks .... 322
  15.11 Solidify Your Knowledge .... 323

16 Deep Learning .... 327
  16.1 Digital Image: Many Low-Level Attributes .... 327
  16.2 Convolution .... 330
  16.3 Pooling, ReLU, and Soft-Max .... 334
  16.4 Induction of CNNs .... 339
  16.5 Advanced Issues .... 342
  16.6 CNN Is Just Another ML Paradigm .... 345
  16.7 Word of Caution .... 346
  16.8 Summary and Historical Remarks .... 348
  16.9 Solidify Your Knowledge .... 350

17 Reinforcement Learning: N-Armed Bandits and Episodes .... 353
  17.1 Addressing the N-Armed Bandit Problem .... 353
  17.2 Additional Information .... 356
  17.3 Learning to Navigate a Maze .... 361
  17.4 Variations on the Episodic Theme .... 365
  17.5 Car Races and Beyond .... 368
  17.6 Practical Ideas .... 371
  17.7 Summary and Historical Remarks .... 374
  17.8 Solidify Your Knowledge .... 375

18 Reinforcement Learning: From TD(0) to Deep-Q-Learning .... 377
  18.1 Immediate Rewards: Temporal Difference .... 377
  18.2 SARSA and Q-Learning .... 379
  18.3 Temporal Difference in Action .... 381
  18.4 Eligibility Traces: TD(λ) .... 384
  18.5 Neural Network Replaces the Lookup Table .... 387
  18.6 Reinforcement Learning in Game Playing .... 390
  18.7 Deep-Q-Learning .... 393
  18.8 Summary and Historical Remarks .... 395
  18.9 Solidify Your Knowledge .... 396

19 Temporal Learning .... 399
  19.1 Temporal Signals and Shift Registers .... 399
  19.2 Recurrent Neural Networks .... 403
  19.3 Long Short-Term Memory .... 405
  19.4 Summary and Historical Remarks .... 406
  19.5 Solidify Your Knowledge .... 407

20 Hidden Markov Models .... 409
  20.1 Markov Processes .... 409
  20.2 Revision: Probabilistic Calculations .... 411
  20.3 HMM: Indirectly Observed States .... 414
  20.4 Useful Probabilities: α, β, and γ .... 417
  20.5 First Problem and Second Problem of HMM .... 420
  20.6 Third Problem of HMM .... 423
  20.7 Summary and Historical Remarks .... 425
  20.8 Solidify Your Knowledge .... 425

21 Genetic Algorithm .... 429
  21.1 Baseline Genetic Algorithm .... 429
  21.2 Implementing the Individual Functions .... 431
  21.3 Why It Works .... 434
  21.4 Premature Degeneration .... 437
  21.5 Other Genetic Operators .... 439
  21.6 Advanced Versions .... 441
  21.7 Choices Made by k-NN Classifiers .... 444
  21.8 Summary and Historical Remarks .... 447
  21.9 Solidify Your Knowledge .... 448

Bibliography .... 451
Index .... 457
|
any_adam_object | 1 |
any_adam_object_boolean | 1 |
author | Kubat, Miroslav 1958- |
author_GND | (DE-588)1140076752 |
author_facet | Kubat, Miroslav 1958- |
author_role | aut |
author_sort | Kubat, Miroslav 1958- |
author_variant | m k mk |
building | Verbundindex |
bvnumber | BV047515158 |
classification_rvk | ST 304 ST 300 |
classification_tum | DAT 708f |
ctrlnum | (OCoLC)1286864403 (DE-599)BVBBV047515158 |
discipline | Informatik |
discipline_str_mv | Informatik |
edition | Third edition |
format | Book |
fullrecord | <?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>02229nam a2200505 c 4500</leader><controlfield tag="001">BV047515158</controlfield><controlfield tag="003">DE-604</controlfield><controlfield tag="005">20220224 </controlfield><controlfield tag="007">t</controlfield><controlfield tag="008">211018s2021 a||| |||| 00||| eng d</controlfield><datafield tag="020" ind1=" " ind2=" "><subfield code="a">9783030819347</subfield><subfield code="9">978-3-030-81934-7</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(OCoLC)1286864403</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-599)BVBBV047515158</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-604</subfield><subfield code="b">ger</subfield><subfield code="e">rda</subfield></datafield><datafield tag="041" ind1="0" ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="049" ind1=" " ind2=" "><subfield code="a">DE-355</subfield><subfield code="a">DE-11</subfield><subfield code="a">DE-945</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">ST 304</subfield><subfield code="0">(DE-625)143653:</subfield><subfield code="2">rvk</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">ST 300</subfield><subfield code="0">(DE-625)143650:</subfield><subfield code="2">rvk</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">DAT 708f</subfield><subfield code="2">stub</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Kubat, Miroslav</subfield><subfield code="d">1958-</subfield><subfield code="e">Verfasser</subfield><subfield code="0">(DE-588)1140076752</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">An introduction to machine learning</subfield><subfield code="c">Miroslav Kubat</subfield></datafield><datafield tag="250" ind1=" " ind2=" "><subfield code="a">Third edition</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="a">Cham</subfield><subfield code="b">Springer</subfield><subfield code="c">[2021]</subfield></datafield><datafield tag="264" ind1=" " ind2="4"><subfield code="c">© 2021</subfield></datafield><datafield tag="300" ind1=" " ind2=" "><subfield code="a">xviii, 458 Seiten</subfield><subfield code="b">Illustrationen, Diagramme</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="b">n</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="b">nc</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="650" ind1="0" ind2="7"><subfield code="a">Informatik</subfield><subfield code="0">(DE-588)4026894-9</subfield><subfield code="2">gnd</subfield><subfield code="9">rswk-swf</subfield></datafield><datafield tag="650" ind1="0" ind2="7"><subfield code="a">Maschinelles Lernen</subfield><subfield code="0">(DE-588)4193754-5</subfield><subfield code="2">gnd</subfield><subfield code="9">rswk-swf</subfield></datafield><datafield tag="650" ind1="0" ind2="7"><subfield code="a">Künstliche Intelligenz</subfield><subfield code="0">(DE-588)4033447-8</subfield><subfield code="2">gnd</subfield><subfield code="9">rswk-swf</subfield></datafield><datafield tag="650" ind1="0" 
ind2="7"><subfield code="a">Informationssystem</subfield><subfield code="0">(DE-588)4072806-7</subfield><subfield code="2">gnd</subfield><subfield code="9">rswk-swf</subfield></datafield><datafield tag="650" ind1="0" ind2="7"><subfield code="a">Computersimulation</subfield><subfield code="0">(DE-588)4148259-1</subfield><subfield code="2">gnd</subfield><subfield code="9">rswk-swf</subfield></datafield><datafield tag="689" ind1="0" ind2="0"><subfield code="a">Informatik</subfield><subfield code="0">(DE-588)4026894-9</subfield><subfield code="D">s</subfield></datafield><datafield tag="689" ind1="0" ind2="1"><subfield code="a">Informationssystem</subfield><subfield code="0">(DE-588)4072806-7</subfield><subfield code="D">s</subfield></datafield><datafield tag="689" ind1="0" ind2="2"><subfield code="a">Künstliche Intelligenz</subfield><subfield code="0">(DE-588)4033447-8</subfield><subfield code="D">s</subfield></datafield><datafield tag="689" ind1="0" ind2="3"><subfield code="a">Computersimulation</subfield><subfield code="0">(DE-588)4148259-1</subfield><subfield code="D">s</subfield></datafield><datafield tag="689" ind1="0" ind2="4"><subfield code="a">Maschinelles Lernen</subfield><subfield code="0">(DE-588)4193754-5</subfield><subfield code="D">s</subfield></datafield><datafield tag="689" ind1="0" ind2=" "><subfield code="8">1\p</subfield><subfield code="5">DE-604</subfield></datafield><datafield tag="776" ind1="0" ind2="8"><subfield code="i">Erscheint auch als</subfield><subfield code="n">Online-Ausgabe</subfield><subfield code="z">978-3-030-81935-4</subfield></datafield><datafield tag="780" ind1="0" ind2="0"><subfield code="i">Vorangegangen ist</subfield><subfield code="z">978-3-319-20009-5</subfield><subfield code="b">First edition</subfield><subfield code="d">2015</subfield><subfield code="w">(DE-604)BV042755689</subfield></datafield><datafield tag="780" ind1="0" ind2="0"><subfield code="i">Vorangegangen ist</subfield><subfield code="z">978-3-319-63912-3</subfield><subfield code="b">Second edition</subfield><subfield code="d">2017</subfield><subfield code="w">(DE-604)BV044451102</subfield></datafield><datafield tag="856" ind1="4" ind2="2"><subfield code="m">Digitalisierung UB Regensburg - ADAM Catalogue Enrichment</subfield><subfield code="q">application/pdf</subfield><subfield code="u">http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=032915941&sequence=000001&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA</subfield><subfield code="3">Inhaltsverzeichnis</subfield></datafield><datafield tag="999" ind1=" " ind2=" "><subfield code="a">oai:aleph.bib-bvb.de:BVB01-032915941</subfield></datafield><datafield tag="883" ind1="1" ind2=" "><subfield code="8">1\p</subfield><subfield code="a">cgwrk</subfield><subfield code="d">20201028</subfield><subfield code="q">DE-101</subfield><subfield code="u">https://d-nb.info/provenance/plan#cgwrk</subfield></datafield></record></collection> |
id | DE-604.BV047515158 |
illustrated | Illustrated |
index_date | 2024-07-03T18:23:01Z |
indexdate | 2024-07-10T09:14:12Z |
institution | BVB |
isbn | 9783030819347 |
language | English |
oai_aleph_id | oai:aleph.bib-bvb.de:BVB01-032915941 |
oclc_num | 1286864403 |
open_access_boolean | |
owner | DE-355 DE-BY-UBR DE-11 DE-945 |
owner_facet | DE-355 DE-BY-UBR DE-11 DE-945 |
physical | xviii, 458 Seiten Illustrationen, Diagramme |
publishDate | 2021 |
publishDateSearch | 2021 |
publishDateSort | 2021 |
publisher | Springer |
record_format | marc |
spelling | Kubat, Miroslav 1958- Verfasser (DE-588)1140076752 aut An introduction to machine learning Miroslav Kubat Third edition Cham Springer [2021] © 2021 xviii, 458 Seiten Illustrationen, Diagramme txt rdacontent n rdamedia nc rdacarrier Informatik (DE-588)4026894-9 gnd rswk-swf Maschinelles Lernen (DE-588)4193754-5 gnd rswk-swf Künstliche Intelligenz (DE-588)4033447-8 gnd rswk-swf Informationssystem (DE-588)4072806-7 gnd rswk-swf Computersimulation (DE-588)4148259-1 gnd rswk-swf Informatik (DE-588)4026894-9 s Informationssystem (DE-588)4072806-7 s Künstliche Intelligenz (DE-588)4033447-8 s Computersimulation (DE-588)4148259-1 s Maschinelles Lernen (DE-588)4193754-5 s 1\p DE-604 Erscheint auch als Online-Ausgabe 978-3-030-81935-4 Vorangegangen ist 978-3-319-20009-5 First edition 2015 (DE-604)BV042755689 Vorangegangen ist 978-3-319-63912-3 Second edition 2017 (DE-604)BV044451102 Digitalisierung UB Regensburg - ADAM Catalogue Enrichment application/pdf http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=032915941&sequence=000001&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA Inhaltsverzeichnis 1\p cgwrk 20201028 DE-101 https://d-nb.info/provenance/plan#cgwrk |
spellingShingle | Kubat, Miroslav 1958- An introduction to machine learning Informatik (DE-588)4026894-9 gnd Maschinelles Lernen (DE-588)4193754-5 gnd Künstliche Intelligenz (DE-588)4033447-8 gnd Informationssystem (DE-588)4072806-7 gnd Computersimulation (DE-588)4148259-1 gnd |
subject_GND | (DE-588)4026894-9 (DE-588)4193754-5 (DE-588)4033447-8 (DE-588)4072806-7 (DE-588)4148259-1 |
title | An introduction to machine learning |
title_auth | An introduction to machine learning |
title_exact_search | An introduction to machine learning |
title_exact_search_txtP | An introduction to machine learning |
title_full | An introduction to machine learning Miroslav Kubat |
title_fullStr | An introduction to machine learning Miroslav Kubat |
title_full_unstemmed | An introduction to machine learning Miroslav Kubat |
title_short | An introduction to machine learning |
title_sort | an introduction to machine learning |
topic | Informatik (DE-588)4026894-9 gnd Maschinelles Lernen (DE-588)4193754-5 gnd Künstliche Intelligenz (DE-588)4033447-8 gnd Informationssystem (DE-588)4072806-7 gnd Computersimulation (DE-588)4148259-1 gnd |
topic_facet | Informatik Maschinelles Lernen Künstliche Intelligenz Informationssystem Computersimulation |
url | http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=032915941&sequence=000001&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA |
work_keys_str_mv | AT kubatmiroslav anintroductiontomachinelearning |