Hands-on machine learning with Scikit-Learn, Keras, and TensorFlow: concepts, tools, and techniques to build intelligent systems
Through a recent series of breakthroughs, deep learning has boosted the entire field of machine learning. Now, even programmers who know close to nothing about this technology can use simple, efficient tools to implement programs capable of learning from data. This bestselling book uses concrete examples, minimal theory, and production-ready Python frameworks (Scikit-Learn, Keras, and TensorFlow) to help you gain an intuitive understanding of the concepts and tools for building intelligent systems.
Saved in:

Main Author: | Géron, Aurélien |
---|---|
Format: | Book |
Language: | English |
Published: | Beijing ; Boston ; Farnham ; Sebastopol ; Tokyo : O'Reilly, 2022 |
Edition: | Third edition |
Subjects: | Künstliche Intelligenz ; Programmbibliothek ; Python (Programmiersprache) ; Maschinelles Lernen ; Neuronales Netz ; Keras (Framework, Informatik) ; TensorFlow |
Online Access: | Table of contents |
Summary: | Through a recent series of breakthroughs, deep learning has boosted the entire field of machine learning. Now, even programmers who know close to nothing about this technology can use simple, efficient tools to implement programs capable of learning from data. This bestselling book uses concrete examples, minimal theory, and production-ready Python frameworks (Scikit-Learn, Keras, and TensorFlow) to help you gain an intuitive understanding of the concepts and tools for building intelligent systems. With this updated third edition, author Aurélien Géron explores a range of techniques, starting with simple linear regression and progressing to deep neural networks. Numerous code examples and exercises throughout the book help you apply what you've learned. Programming experience is all you need to get started. |
Notes: | The first edition was published in 2017 under the title "Hands-on machine learning with Scikit-Learn and TensorFlow". Copyright date: © 2023, verso of the title page. Later unchanged reprints are also covered by this record. |
Physical Description: | xxv, 834 pages : illustrations, diagrams |
ISBN: | 9781098125974 ; 1098125975 |
Internal format
MARC
LEADER | 00000nam a2200000 c 4500 | ||
---|---|---|---|
001 | BV048368110 | ||
003 | DE-604 | ||
005 | 20240429 | ||
007 | t | ||
008 | 220720s2022 a||| |||| 00||| eng d | ||
020 | |a 9781098125974 |c Broschur : ca. EUR 79.50 (DE), US $ 79.99, CAN $ 99.99 |9 978-1-0981-2597-4 | ||
020 | |a 1098125975 |9 1-0981-2597-5 | ||
035 | |a (OCoLC)1350782021 | ||
035 | |a (DE-599)BVBBV048368110 | ||
040 | |a DE-604 |b ger |e rda | ||
041 | 0 | |a eng | |
049 | |a DE-1102 |a DE-862 |a DE-Aug4 |a DE-860 |a DE-573 |a DE-19 |a DE-1043 |a DE-20 |a DE-29T |a DE-945 |a DE-703 |a DE-188 |a DE-355 |a DE-861 |a DE-11 |a DE-859 |a DE-91G | ||
082 | 0 | |a 005.133 |2 23/ger | |
082 | 0 | |a 006.31 |2 23/ger | |
084 | |a QH 740 |0 (DE-625)141614: |2 rvk | ||
084 | |a ST 300 |0 (DE-625)143650: |2 rvk | ||
084 | |a ST 302 |0 (DE-625)143652: |2 rvk | ||
084 | |a ST 304 |0 (DE-625)143653: |2 rvk | ||
084 | |a ST 250 |0 (DE-625)143626: |2 rvk | ||
084 | |a DAT 316 |2 stub | ||
084 | |a 004 |2 sdnb | ||
084 | |a DAT 708 |2 stub | ||
100 | 1 | |a Géron, Aurélien |e Verfasser |0 (DE-588)1131560930 |4 aut | |
240 | 1 | 0 | |a Hands-on machine learning with Scikit-Learn and TensorFlow |
245 | 1 | 0 | |a Hands-on machine learning with Scikit-Learn, Keras, and TensorFlow |b concepts, tools, and techniques to build intelligent systems |c Aurélien Géron |
246 | 1 | 3 | |a Hands-on machine learning with Scikit-Learn, Keras & TensorFlow |
250 | |a Third edition | ||
264 | 1 | |a Beijing ; Boston ; Farnham ; Sebastopol ; Tokyo |b O'Reilly |c 2022 | |
264 | 4 | |c © 2023 | |
300 | |a xxv, 834 Seiten |b Illustrationen, Diagramme | ||
336 | |b txt |2 rdacontent | ||
337 | |b n |2 rdamedia | ||
338 | |b nc |2 rdacarrier | ||
500 | |a Die 1. Auflage erschien 2017 unter dem Titel "Hands-on machine learning with Scikit-Learn and TensorFlow" | ||
500 | |a Copyright-Datum: © 2023, Rückseite der Titelseite | ||
500 | |a Hier auch später erschienene, unveränderte Nachdrucke | ||
505 | 8 | |a Use Scikit-learn to track an example ML project end to end Explore several models, including support vector machines, decision trees, random forests, and ensemble methods Exploit unsupervised learning techniques such as dimensionality reduction, clustering, and anomaly detection Dive into neural net architectures, including convolutional nets, recurrent nets, generative adversarial networks, autoencoders, diffusion models, and transformers Use TensorFlow and Keras to build and train neural nets for computer vision, natural language processing, generative models, and deep reinforcement learning | |
520 | 3 | |a Through a recent series of breakthroughs, deep learning has boosted the entire field of machine learning. Now, even programmers who know close to nothing about this technology can use simple, efficient tools to implement programs capable of learning from data. This bestselling book uses concrete examples, minimal theory, and production-ready Python frameworks (Scikit-Learn, Keras, and TensorFlow) to help you gain an intuitive understanding of the concepts and tools for building intelligent systems. With this updated third edition, author Aurélien Géron explores a range of techniques, starting with simple linear regression and progressing to deep neural networks. Numerous code examples and exercises throughout the book help you apply what you've learned. Programming experience is all you need to get started. | |
650 | 0 | 7 | |a Künstliche Intelligenz |0 (DE-588)4033447-8 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Programmbibliothek |0 (DE-588)4121521-7 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Python |g Programmiersprache |0 (DE-588)4434275-5 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Maschinelles Lernen |0 (DE-588)4193754-5 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Neuronales Netz |0 (DE-588)4226127-2 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Keras |g Framework, Informatik |0 (DE-588)1160521077 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a TensorFlow |0 (DE-588)1153577011 |2 gnd |9 rswk-swf |
653 | |a ChatGPT | ||
653 | |a DALL-E | ||
689 | 0 | 0 | |a Künstliche Intelligenz |0 (DE-588)4033447-8 |D s |
689 | 0 | 1 | |a Maschinelles Lernen |0 (DE-588)4193754-5 |D s |
689 | 0 | 2 | |a Programmbibliothek |0 (DE-588)4121521-7 |D s |
689 | 0 | 3 | |a Python |g Programmiersprache |0 (DE-588)4434275-5 |D s |
689 | 0 | 4 | |a Keras |g Framework, Informatik |0 (DE-588)1160521077 |D s |
689 | 0 | |5 DE-604 | |
689 | 1 | 0 | |a Maschinelles Lernen |0 (DE-588)4193754-5 |D s |
689 | 1 | 1 | |a Neuronales Netz |0 (DE-588)4226127-2 |D s |
689 | 1 | 2 | |a Python |g Programmiersprache |0 (DE-588)4434275-5 |D s |
689 | 1 | 3 | |a Keras |g Framework, Informatik |0 (DE-588)1160521077 |D s |
689 | 1 | 4 | |a TensorFlow |0 (DE-588)1153577011 |D s |
689 | 1 | |5 DE-604 | |
776 | 0 | 8 | |i Erscheint auch als |n Online-Ausgabe |z 978-1-098-12246-1 |
776 | 0 | 8 | |i Erscheint auch als |n Online-Ausgabe |z 978-1-098-12596-7 |
856 | 4 | 2 | |m Digitalisierung UB Regensburg - ADAM Catalogue Enrichment |q application/pdf |u http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=033747193&sequence=000001&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA |3 Inhaltsverzeichnis |
999 | |a oai:aleph.bib-bvb.de:BVB01-033747193 |
Record in the search index
DE-BY-862_location | 2000 |
---|---|
DE-BY-FWS_call_number | 2000/ST 302 G377(3) |
DE-BY-FWS_katkey | 1011978 |
DE-BY-FWS_media_number | 083000521737 083000521736 |
adam_text | Table of Contents

Preface (xv)

Part I. The Fundamentals of Machine Learning

1. The Machine Learning Landscape (3): What Is Machine Learning?; Why Use Machine Learning?; Examples of Applications; Types of Machine Learning Systems; Training Supervision; Batch Versus Online Learning; Instance-Based Versus Model-Based Learning; Main Challenges of Machine Learning; Insufficient Quantity of Training Data; Nonrepresentative Training Data; Poor-Quality Data; Irrelevant Features; Overfitting the Training Data; Underfitting the Training Data; Stepping Back; Testing and Validating; Hyperparameter Tuning and Model Selection; Data Mismatch; Exercises

2. End-to-End Machine Learning Project (39): Working with Real Data; Look at the Big Picture; Frame the Problem; Select a Performance Measure; Check the Assumptions; Get the Data; Running the Code Examples Using Google Colab; Saving Your Code Changes and Your Data; The Power and Danger of Interactivity; Book Code Versus Notebook Code; Download the Data; Take a Quick Look at the Data Structure; Create a Test Set; Explore and Visualize the Data to Gain Insights; Visualizing Geographical Data; Look for Correlations; Experiment with Attribute Combinations; Prepare the Data for Machine Learning Algorithms; Clean the Data; Handling Text and Categorical Attributes; Feature Scaling and Transformation; Custom Transformers; Transformation Pipelines; Select and Train a Model; Train and Evaluate on the Training Set; Better Evaluation Using Cross-Validation; Fine-Tune Your Model; Grid Search; Randomized Search; Ensemble Methods; Analyzing the Best Models and Their Errors; Evaluate Your System on the Test Set; Launch, Monitor, and Maintain Your System; Try It Out!; Exercises

3. Classification (103): MNIST; Training a Binary Classifier; Performance Measures; Measuring Accuracy Using Cross-Validation; Confusion Matrices; Precision and Recall; The Precision/Recall Trade-off; The ROC Curve; Multiclass Classification; Error Analysis; Multilabel Classification; Multioutput Classification; Exercises

4. Training Models (131): Linear Regression; The Normal Equation; Computational Complexity; Gradient Descent; Batch Gradient Descent; Stochastic Gradient Descent; Mini-Batch Gradient Descent; Polynomial Regression; Learning Curves; Regularized Linear Models; Ridge Regression; Lasso Regression; Elastic Net Regression; Early Stopping; Logistic Regression; Estimating Probabilities; Training and Cost Function; Decision Boundaries; Softmax Regression; Exercises

5. Support Vector Machines (175): Linear SVM Classification; Soft Margin Classification; Nonlinear SVM Classification; Polynomial Kernel; Similarity Features; Gaussian RBF Kernel; SVM Classes and Computational Complexity; SVM Regression; Under the Hood of Linear SVM Classifiers; The Dual Problem; Kernelized SVMs; Exercises

6. Decision Trees (195): Training and Visualizing a Decision Tree; Making Predictions; Estimating Class Probabilities; The CART Training Algorithm; Computational Complexity; Gini Impurity or Entropy?; Regularization Hyperparameters; Regression; Sensitivity to Axis Orientation; Decision Trees Have a High Variance; Exercises

7. Ensemble Learning and Random Forests (211): Voting Classifiers; Bagging and Pasting; Bagging and Pasting in Scikit-Learn; Out-of-Bag Evaluation; Random Patches and Random Subspaces; Random Forests; Extra-Trees; Feature Importance; Boosting; AdaBoost; Gradient Boosting; Histogram-Based Gradient Boosting; Stacking; Exercises

8. Dimensionality Reduction (237): The Curse of Dimensionality; Main Approaches for Dimensionality Reduction; Projection; Manifold Learning; PCA; Preserving the Variance; Principal Components; Projecting Down to d Dimensions; Using Scikit-Learn; Explained Variance Ratio; Choosing the Right Number of Dimensions; PCA for Compression; Randomized PCA; Incremental PCA; Random Projection; LLE; Other Dimensionality Reduction Techniques; Exercises

9. Unsupervised Learning Techniques (259): Clustering Algorithms: k-means and DBSCAN; k-means; Limits of k-means; Using Clustering for Image Segmentation; Using Clustering for Semi-Supervised Learning; DBSCAN; Other Clustering Algorithms; Gaussian Mixtures; Using Gaussian Mixtures for Anomaly Detection; Selecting the Number of Clusters; Bayesian Gaussian Mixture Models; Other Algorithms for Anomaly and Novelty Detection; Exercises

Part II. Neural Networks and Deep Learning

10. Introduction to Artificial Neural Networks with Keras (299): From Biological to Artificial Neurons; Biological Neurons; Logical Computations with Neurons; The Perceptron; The Multilayer Perceptron and Backpropagation; Regression MLPs; Classification MLPs; Implementing MLPs with Keras; Building an Image Classifier Using the Sequential API; Building a Regression MLP Using the Sequential API; Building Complex Models Using the Functional API; Using the Subclassing API to Build Dynamic Models; Saving and Restoring a Model; Using Callbacks; Using TensorBoard for Visualization; Fine-Tuning Neural Network Hyperparameters; Number of Hidden Layers; Number of Neurons per Hidden Layer; Learning Rate, Batch Size, and Other Hyperparameters; Exercises

11. Training Deep Neural Networks (357): The Vanishing/Exploding Gradients Problems; Glorot and He Initialization; Better Activation Functions; Batch Normalization; Gradient Clipping; Reusing Pretrained Layers; Transfer Learning with Keras; Unsupervised Pretraining; Pretraining on an Auxiliary Task; Faster Optimizers; Momentum; Nesterov Accelerated Gradient; AdaGrad; RMSProp; Adam; AdaMax; Nadam; AdamW; Learning Rate Scheduling; Avoiding Overfitting Through Regularization; ℓ1 and ℓ2 Regularization; Dropout; Monte Carlo (MC) Dropout; Max-Norm Regularization; Summary and Practical Guidelines; Exercises

12. Custom Models and Training with TensorFlow (403): A Quick Tour of TensorFlow; Using TensorFlow like NumPy; Tensors and Operations; Tensors and NumPy; Type Conversions; Variables; Other Data Structures; Customizing Models and Training Algorithms; Custom Loss Functions; Saving and Loading Models That Contain Custom Components; Custom Activation Functions, Initializers, Regularizers, and Constraints; Custom Metrics; Custom Layers; Custom Models; Losses and Metrics Based on Model Internals; Computing Gradients Using Autodiff; Custom Training Loops; TensorFlow Functions and Graphs; AutoGraph and Tracing; TF Function Rules; Exercises

13. Loading and Preprocessing Data with TensorFlow (441): The tf.data API; Chaining Transformations; Shuffling the Data; Interleaving Lines from Multiple Files; Preprocessing the Data; Putting Everything Together; Prefetching; Using the Dataset with Keras; The TFRecord Format; Compressed TFRecord Files; A Brief Introduction to Protocol Buffers; TensorFlow Protobufs; Loading and Parsing Examples; Handling Lists of Lists Using the SequenceExample Protobuf; Keras Preprocessing Layers; The Normalization Layer; The Discretization Layer; The CategoryEncoding Layer; The StringLookup Layer; The Hashing Layer; Encoding Categorical Features Using Embeddings; Text Preprocessing; Using Pretrained Language Model Components; Image Preprocessing Layers; The TensorFlow Datasets Project; Exercises

14. Deep Computer Vision Using Convolutional Neural Networks (479): The Architecture of the Visual Cortex; Convolutional Layers; Filters; Stacking Multiple Feature Maps; Implementing Convolutional Layers with Keras; Memory Requirements; Pooling Layers; Implementing Pooling Layers with Keras; CNN Architectures; LeNet-5; AlexNet; GoogLeNet; VGGNet; ResNet; Xception; SENet; Other Noteworthy Architectures; Choosing the Right CNN Architecture; Implementing a ResNet-34 CNN Using Keras; Using Pretrained Models from Keras; Pretrained Models for Transfer Learning; Classification and Localization; Object Detection; Fully Convolutional Networks; You Only Look Once; Object Tracking; Semantic Segmentation; Exercises

15. Processing Sequences Using RNNs and CNNs (537): Recurrent Neurons and Layers; Memory Cells; Input and Output Sequences; Training RNNs; Forecasting a Time Series; The ARMA Model Family; Preparing the Data for Machine Learning Models; Forecasting Using a Linear Model; Forecasting Using a Simple RNN; Forecasting Using a Deep RNN; Forecasting Multivariate Time Series; Forecasting Several Time Steps Ahead; Forecasting Using a Sequence-to-Sequence Model; Handling Long Sequences; Fighting the Unstable Gradients Problem; Tackling the Short-Term Memory Problem; Exercises

16. Natural Language Processing with RNNs and Attention (577): Generating Shakespearean Text Using a Character RNN; Creating the Training Dataset; Building and Training the Char-RNN Model; Generating Fake Shakespearean Text; Stateful RNN; Sentiment Analysis; Masking; Reusing Pretrained Embeddings and Language Models; An Encoder-Decoder Network for Neural Machine Translation; Bidirectional RNNs; Beam Search; Attention Mechanisms; Attention Is All You Need: The Original Transformer Architecture; An Avalanche of Transformer Models; Vision Transformers; Hugging Face's Transformers Library; Exercises

17. Autoencoders, GANs, and Diffusion Models (635): Efficient Data Representations; Performing PCA with an Undercomplete Linear Autoencoder; Stacked Autoencoders; Implementing a Stacked Autoencoder Using Keras; Visualizing the Reconstructions; Visualizing the Fashion MNIST Dataset; Unsupervised Pretraining Using Stacked Autoencoders; Tying Weights; Training One Autoencoder at a Time; Convolutional Autoencoders; Denoising Autoencoders; Sparse Autoencoders; Variational Autoencoders; Generating Fashion MNIST Images; Generative Adversarial Networks; The Difficulties of Training GANs; Deep Convolutional GANs; Progressive Growing of GANs; StyleGANs; Diffusion Models; Exercises

18. Reinforcement Learning (683): Learning to Optimize Rewards; Policy Search; Introduction to OpenAI Gym; Neural Network Policies; Evaluating Actions: The Credit Assignment Problem; Policy Gradients; Markov Decision Processes; Temporal Difference Learning; Q-Learning; Exploration Policies; Approximate Q-Learning and Deep Q-Learning; Implementing Deep Q-Learning; Deep Q-Learning Variants; Fixed Q-value Targets; Double DQN; Prioritized Experience Replay; Dueling DQN; Overview of Some Popular RL Algorithms; Exercises

19. Training and Deploying TensorFlow Models at Scale (721): Serving a TensorFlow Model; Using TensorFlow Serving; Creating a Prediction Service on Vertex AI; Running Batch Prediction Jobs on Vertex AI; Deploying a Model to a Mobile or Embedded Device; Running a Model in a Web Page; Using GPUs to Speed Up Computations; Getting Your Own GPU; Managing the GPU RAM; Placing Operations and Variables on Devices; Parallel Execution Across Multiple Devices; Training Models Across Multiple Devices; Model Parallelism; Data Parallelism; Training at Scale Using the Distribution Strategies API; Training a Model on a TensorFlow Cluster; Running Large Training Jobs on Vertex AI; Hyperparameter Tuning on Vertex AI; Exercises; Thank You!

A. Machine Learning Project Checklist (779)
B. Autodiff (785)
C. Special Data Structures (793)
D. TensorFlow Graphs (801)

Index (811)
Table of Contents
THWS Schweinfurt Central Library, Reading Room

Call number: | 2000 ST 302 G377(3) |
---|---|
Copy 1 | loanable, available |
Special location: Faculty

Call number: | 2000 ST 302 G377(3) |
---|---|
Copy 1 | not loanable; checked out, due back: 31.12.2099 |