Statistics for machine learning :: build supervised, unsupervised, and reinforcement learning models using both Python and R /
Saved in:

| Main Author: | Dangeti, Pratap |
|---|---|
| Format: | Electronic eBook |
| Language: | English |
| Published: | Birmingham, UK : Packt Publishing, 2017. |
| Subjects: | Big data -- Statistical methods; Machine learning; Python (Computer program language); R (Computer program language) |
| Online Access: | DE-862 DE-863 |
| Summary: | Build Machine Learning models with a sound statistical understanding. About This Book Learn about the statistics behind powerful predictive models with p-value, ANOVA, and F-statistics. Implement statistical computations programmatically for supervised and unsupervised learning through K-means clustering. Master the statistical aspect of Machine Learning with the help of this example-rich guide to R and Python. Who This Book Is For This book is intended for developers with little to no background in statistics, who want to implement Machine Learning in their systems. Some programming knowledge in R or Python will be useful. What You Will Learn Understand the Statistical and Machine Learning fundamentals necessary to build models Understand the major differences and parallels between the statistical way and the Machine Learning way to solve problems Learn how to prepare data and feed models by using the appropriate Machine Learning algorithms from the more-than-adequate R and Python packages Analyze the results and tune the model appropriately to your own predictive goals Understand the concepts of required statistics for Machine Learning Introduce yourself to the fundamentals required for building supervised & unsupervised deep learning models Learn reinforcement learning and its application in the field of artificial intelligence In Detail Complex statistics in Machine Learning worry a lot of developers. Knowing statistics helps you build strong Machine Learning models that are optimized for a given problem statement. This book will teach you all it takes to perform complex statistical computations required for Machine Learning. You will gain information on statistics behind supervised learning, unsupervised learning, reinforcement learning, and more. Understand the real-world examples that discuss the statistical side of Machine Learning and familiarize yourself with it. You will also design programs for performing tasks such as model parameter fitting, regression, classification, density estimation, and more. By the end of the book, you will have mastered the required statistics for Machine Learning and will be able to apply your new skills to any sort of industry problem. Style and approach This practical, step-by-step guide will give you an understanding of the Statistical and Machine Learning fundamentals you'll need to build models. Downloading the example code for this book. You can download the example code files for al ... |
| Description: | 1 online resource (1 volume) : illustrations |
| ISBN: | 9781788291224 1788291220 |
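Both ISBNs in the record carry valid check digits. For readers who want to verify that, here is a small self-contained sketch in plain Python using the standard check-digit arithmetic; the function names are ours for illustration, not from any cataloguing toolkit:

```python
def isbn13_check_digit(first12: str) -> int:
    # ISBN-13: weight digits 1,3,1,3,...; check = (10 - sum mod 10) mod 10
    total = sum((3 if i % 2 else 1) * int(d) for i, d in enumerate(first12))
    return (10 - total % 10) % 10

def isbn10_check_digit(first9: str) -> str:
    # ISBN-10: weight digits 10,9,...,2; check = (11 - sum mod 11) mod 11, 'X' for 10
    total = sum((10 - i) * int(d) for i, d in enumerate(first9))
    check = (11 - total % 11) % 11
    return "X" if check == 10 else str(check)

assert isbn13_check_digit("978178829122") == 4  # matches 9781788291224
assert isbn10_check_digit("178829122") == "0"   # matches 1788291220
```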
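The summary above leans on the p-value / ANOVA / F-statistic trio. The book's own code is not reproduced in this record; as a minimal, independent illustration of how those three quantities appear together, here is a one-way ANOVA on synthetic data, assuming NumPy and SciPy are installed:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Three synthetic groups; group c is deliberately shifted
a = rng.normal(5.0, 1.0, size=30)
b = rng.normal(5.1, 1.0, size=30)
c = rng.normal(6.5, 1.0, size=30)

f_stat, p_value = stats.f_oneway(a, b, c)  # one-way ANOVA
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")
# A small p-value indicates at least one group mean differs.
```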
Internal format
MARC
LEADER | 00000cam a2200000 i 4500 | ||
---|---|---|---|
001 | ZDB-4-EBA-on1000390984 | ||
003 | OCoLC | ||
005 | 20250103110447.0 | ||
006 | m o d | ||
007 | cr unu|||||||| | ||
008 | 170811s2017 enka o 000 0 eng d | ||
040 | |a UMI |b eng |e rda |e pn |c UMI |d IDEBK |d TOH |d STF |d COO |d N$T |d UOK |d CEF |d OCLCF |d KSU |d UAB |d MM9 |d QGK |d OCLCQ |d OCLCO |d OCLCQ |d OCLCO |d DXU | ||
019 | |a 1171043140 | ||
020 | |a 9781788291224 |q (electronic bk.) | ||
020 | |a 1788291220 |q (electronic bk.) | ||
020 | |z 9781788295758 | ||
035 | |a (OCoLC)1000390984 |z (OCoLC)1171043140 | ||
037 | |a CL0500000883 |b Safari Books Online | ||
050 | 4 | |a QA76.73.P98 | |
072 | 7 | |a COM |x 051360 |2 bisacsh | |
072 | 7 | |a COM |x 018000 |2 bisacsh | |
082 | 7 | |a 005.7 |2 23 | |
049 | |a MAIN | ||
100 | 1 | |a Dangeti, Pratap, |e author. | |
245 | 1 | 0 | |a Statistics for machine learning : |b build supervised, unsupervised, and reinforcement learning models using both Python and R / |c Pratap Dangeti. |
264 | 1 | |a Birmingham, UK : |b Packt Publishing, |c 2017. | |
300 | |a 1 online resource (1 volume) : |b illustrations | ||
336 | |a text |b txt |2 rdacontent | ||
337 | |a computer |b c |2 rdamedia | ||
338 | |a online resource |b cr |2 rdacarrier | ||
588 | 0 | |a Online resource; title from PDF title page (EBSCO, viewed February 8, 2018) | |
520 | |a Build Machine Learning models with a sound statistical understanding. About This Book Learn about the statistics behind powerful predictive models with p-value, ANOVA, and F-statistics. Implement statistical computations programmatically for supervised and unsupervised learning through K-means clustering. Master the statistical aspect of Machine Learning with the help of this example-rich guide to R and Python. Who This Book Is For This book is intended for developers with little to no background in statistics, who want to implement Machine Learning in their systems. Some programming knowledge in R or Python will be useful. What You Will Learn Understand the Statistical and Machine Learning fundamentals necessary to build models Understand the major differences and parallels between the statistical way and the Machine Learning way to solve problems Learn how to prepare data and feed models by using the appropriate Machine Learning algorithms from the more-than-adequate R and Python packages Analyze the results and tune the model appropriately to your own predictive goals Understand the concepts of required statistics for Machine Learning Introduce yourself to the fundamentals required for building supervised & unsupervised deep learning models Learn reinforcement learning and its application in the field of artificial intelligence In Detail Complex statistics in Machine Learning worry a lot of developers. Knowing statistics helps you build strong Machine Learning models that are optimized for a given problem statement. This book will teach you all it takes to perform complex statistical computations required for Machine Learning. You will gain information on statistics behind supervised learning, unsupervised learning, reinforcement learning, and more. Understand the real-world examples that discuss the statistical side of Machine Learning and familiarize yourself with it. You will also design programs for performing tasks such as model parameter fitting, regression, classification, density estimation, and more. By the end of the book, you will have mastered the required statistics for Machine Learning and will be able to apply your new skills to any sort of industry problem. Style and approach This practical, step-by-step guide will give you an understanding of the Statistical and Machine Learning fundamentals you'll need to build models. Downloading the example code for this book. You can download the example code files for al ... | ||
505 | 0 | |a Cover -- Copyright -- Credits -- About the Author -- About the Reviewer -- www.PacktPub.com -- Customer Feedback -- Table of Contents -- Preface -- Chapter 1: Journey from Statistics to Machine Learning -- Statistical terminology for model building and validation -- Machine learning -- Major differences between statistical modeling and machine learning -- Steps in machine learning model development and deployment -- Statistical fundamentals and terminology for model building and validation -- Bias versus variance trade-off -- Train and test data -- Machine learning terminology for model building and validation -- Linear regression versus gradient descent -- Machine learning losses -- When to stop tuning machine learning models -- Train, validation, and test data -- Cross-validation -- Grid search -- Machine learning model overview -- Summary -- Chapter 2: Parallelism of Statistics and Machine Learning -- Comparison between regression and machine learning models -- Compensating factors in machine learning models -- Assumptions of linear regression -- Steps applied in linear regression modeling -- Example of simple linear regression from first principles -- Example of simple linear regression using the wine quality data -- Example of multilinear regression -- step-by-step methodology of model building -- Backward and forward selection -- Machine learning models -- ridge and lasso regression -- Example of ridge regression machine learning -- Example of lasso regression machine learning model -- Regularization parameters in linear regression and ridge/lasso regression -- Summary -- Chapter 3: Logistic Regression Versus Random Forest -- Maximum likelihood estimation -- Logistic regression -- introduction and advantages -- Terminology involved in logistic regression -- Applying steps in logistic regression modeling. | |
505 | 8 | |a Example of logistic regression using German credit data -- Random forest -- Example of random forest using German credit data -- Grid search on random forest -- Variable importance plot -- Comparison of logistic regression with random forest -- Summary -- Chapter 4: Tree-Based Machine Learning Models -- Introducing decision tree classifiers -- Terminology used in decision trees -- Decision tree working methodology from first principles -- Comparison between logistic regression and decision trees -- Comparison of error components across various styles of models -- Remedial actions to push the model towards the ideal region -- HR attrition data example -- Decision tree classifier -- Tuning class weights in decision tree classifier -- Bagging classifier -- Random forest classifier -- Random forest classifier -- grid search -- AdaBoost classifier -- Gradient boosting classifier -- Comparison between AdaBoosting versus gradient boosting -- Extreme gradient boosting -- XGBoost classifier -- Ensemble of ensembles -- model stacking -- Ensemble of ensembles with different types of classifiers -- Ensemble of ensembles with bootstrap samples using a single type of classifier -- Summary -- Chapter 5: K-Nearest Neighbors and Naive Bayes -- K-nearest neighbors -- KNN voter example -- Curse of dimensionality -- Curse of dimensionality with 1D, 2D, and 3D example -- KNN classifier with breast cancer Wisconsin data example -- Tuning of k-value in KNN classifier -- Naive Bayes -- Probability fundamentals -- Joint probability -- Understanding Bayes theorem with conditional probability -- Naive Bayes classification -- Laplace estimator -- Naive Bayes SMS spam classification example -- Summary -- Chapter 6: Support Vector Machines and Neural Networks -- Support vector machines working principles -- Maximum margin classifier -- Support vector classifier. | |
505 | 8 | |a Support vector machines -- Kernel functions -- SVM multilabel classifier with letter recognition data example -- Maximum margin classifier -- linear kernel -- Polynomial kernel -- RBF kernel -- Artificial neural networks -- ANN -- Activation functions -- Forward propagation and backpropagation -- Optimization of neural networks -- Stochastic gradient descent -- SGD -- Momentum -- Nesterov accelerated gradient -- NAG -- Adagrad -- Adadelta -- RMSprop -- Adaptive moment estimation -- Adam -- Limited-memory broyden-fletcher-goldfarb-shanno -- L-BFGS optimization algorithm -- Dropout in neural networks -- ANN classifier applied on handwritten digits using scikit-learn -- Introduction to deep learning -- Solving methodology -- Deep learning software -- Deep neural network classifier applied on handwritten digits using Keras -- Summary -- Chapter 7: Recommendation Engines -- Content-based filtering -- Cosine similarity -- Collaborative filtering -- Advantages of collaborative filtering over content-based filtering -- Matrix factorization using the alternating least squares algorithm for collaborative filtering -- Evaluation of recommendation engine model -- Hyperparameter selection in recommendation engines using grid search -- Recommendation engine application on movie lens data -- User-user similarity matrix -- Movie-movie similarity matrix -- Collaborative filtering using ALS -- Grid search on collaborative filtering -- Summary -- Chapter 8: Unsupervised Learning -- K-means clustering -- K-means working methodology from first principles -- Optimal number of clusters and cluster evaluation -- The elbow method -- K-means clustering with the iris data example -- Principal component analysis -- PCA -- PCA working methodology from first principles -- PCA applied on handwritten digits using scikit-learn -- Singular value decomposition -- SVD. | |
505 | 8 | |a SVD applied on handwritten digits using scikit-learn -- Deep auto encoders -- Model building technique using encoder-decoder architecture -- Deep auto encoders applied on handwritten digits using Keras -- Summary -- Chapter 9: Reinforcement Learning -- Introduction to reinforcement learning -- Comparing supervised, unsupervised, and reinforcement learning in detail -- Characteristics of reinforcement learning -- Reinforcement learning basics -- Category 1 -- value based -- Category 2 -- policy based -- Category 3 -- actor-critic -- Category 4 -- model-free -- Category 5 -- model-based -- Fundamental categories in sequential decision making -- Markov decision processes and Bellman equations -- Dynamic programming -- Algorithms to compute optimal policy using dynamic programming -- Grid world example using value and policy iteration algorithms with basic Python -- Monte Carlo methods -- Comparison between dynamic programming and Monte Carlo methods -- Key advantages of MC over DP methods -- Monte Carlo prediction -- The suitability of Monte Carlo prediction on grid-world problems -- Modeling Blackjack example of Monte Carlo methods using Python -- Temporal difference learning -- Comparison between Monte Carlo methods and temporal difference learning -- TD prediction -- Driving office example for TD learning -- SARSA on-policy TD control -- Q-learning -- off-policy TD control -- Cliff walking example of on-policy and off-policy of TD control -- Applications of reinforcement learning with integration of machine learning and deep learning -- Automotive vehicle control -- self-driving cars -- Google DeepMind's AlphaGo -- Robo soccer -- Further reading -- Summary -- Index. | |
650 | 0 | |a Big data |x Statistical methods. | |
650 | 0 | |a Machine learning. |0 http://id.loc.gov/authorities/subjects/sh85079324 | |
650 | 0 | |a Python (Computer program language) |0 http://id.loc.gov/authorities/subjects/sh96008834 | |
650 | 0 | |a R (Computer program language) |0 http://id.loc.gov/authorities/subjects/sh2002004407 | |
650 | 6 | |a Données volumineuses |x Méthodes statistiques. | |
650 | 6 | |a Apprentissage automatique. | |
650 | 6 | |a Python (Langage de programmation) | |
650 | 6 | |a R (Langage de programmation) | |
650 | 7 | |a COMPUTERS |x Programming Languages |x Python. |2 bisacsh | |
650 | 7 | |a COMPUTERS |x Data Processing. |2 bisacsh | |
650 | 7 | |a Machine learning |2 fast | |
650 | 7 | |a Python (Computer program language) |2 fast | |
650 | 7 | |a R (Computer program language) |2 fast | |
966 | 4 | 0 | |l DE-862 |p ZDB-4-EBA |q FWS_PDA_EBA |u https://search.ebscohost.com/login.aspx?direct=true&scope=site&db=nlebk&AN=1560931 |3 Volltext |
966 | 4 | 0 | |l DE-863 |p ZDB-4-EBA |q FWS_PDA_EBA |u https://search.ebscohost.com/login.aspx?direct=true&scope=site&db=nlebk&AN=1560931 |3 Volltext |
938 | |a EBSCOhost |b EBSC |n 1560931 | ||
938 | |a ProQuest MyiLibrary Digital eBook Collection |b IDEB |n cis38537669 | ||
994 | |a 92 |b GEBAY | ||
912 | |a ZDB-4-EBA | ||
049 | |a DE-862 | ||
049 | |a DE-863 |
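The MARC block above is the machine-readable core of this page. As a sketch of how such a record could be consumed programmatically — assuming it were exported as binary MARC to a hypothetical file `on1000390984.mrc` and that the `pymarc` library is installed — field and subfield access looks like this:

```python
from pymarc import MARCReader

# Hypothetical export of the record shown above
with open("on1000390984.mrc", "rb") as fh:
    for record in MARCReader(fh):
        print(record["245"]["a"])            # title proper
        print(record["264"]["b"])            # publisher
        for field in record.get_fields("020"):
            if field["a"]:                   # skip |z-only (print-edition) entries
                print(field["a"])            # electronic ISBNs
        for subject in record.get_fields("650"):
            print(subject["a"])              # subject headings
```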
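The contents note (field 505) lists "K-means clustering with the iris data example" in Chapter 8. Independently of the book's code, a minimal version of that pairing, assuming scikit-learn is available, could look like this:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

X = load_iris().data  # 150 iris samples, 4 measurements each
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

print(km.cluster_centers_)  # one 4-dimensional centroid per cluster
print(km.labels_[:10])      # cluster assignment for the first 10 samples
```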
Record in the search index
DE-BY-FWS_katkey | ZDB-4-EBA-on1000390984 |
---|---|
_version_ | 1829095115355848704 |
adam_text | |
any_adam_object | |
author | Dangeti, Pratap |
author_facet | Dangeti, Pratap |
author_role | aut |
author_sort | Dangeti, Pratap |
author_variant | p d pd |
building | Verbundindex |
bvnumber | localFWS |
callnumber-first | Q - Science |
callnumber-label | QA76 |
callnumber-raw | QA76.73.P98 |
callnumber-search | QA76.73.P98 |
callnumber-sort | QA 276.73 P98 |
callnumber-subject | QA - Mathematics |
collection | ZDB-4-EBA |
ctrlnum | (OCoLC)1000390984 |
dewey-full | 005.7 |
dewey-hundreds | 000 - Computer science, information, general works |
dewey-ones | 005 - Computer programming, programs, data, security |
dewey-raw | 005.7 |
dewey-search | 005.7 |
dewey-sort | 15.7 |
dewey-tens | 000 - Computer science, information, general works |
discipline | Informatik |
format | Electronic eBook |
id | ZDB-4-EBA-on1000390984 |
illustrated | Illustrated |
indexdate | 2025-04-11T08:43:53Z |
institution | BVB |
isbn | 9781788291224 1788291220 |
language | English |
oclc_num | 1000390984 |
open_access_boolean | |
owner | MAIN DE-862 DE-BY-FWS DE-863 DE-BY-FWS |
owner_facet | MAIN DE-862 DE-BY-FWS DE-863 DE-BY-FWS |
physical | 1 online resource (1 volume) : illustrations |
psigel | ZDB-4-EBA FWS_PDA_EBA ZDB-4-EBA |
publishDate | 2017 |
publishDateSearch | 2017 |
publishDateSort | 2017 |
publisher | Packt Publishing, |
record_format | marc |
subject_GND | http://id.loc.gov/authorities/subjects/sh85079324 http://id.loc.gov/authorities/subjects/sh96008834 http://id.loc.gov/authorities/subjects/sh2002004407 |
title | Statistics for machine learning : build supervised, unsupervised, and reinforcement learning models using both Python and R / |
title_auth | Statistics for machine learning : build supervised, unsupervised, and reinforcement learning models using both Python and R / |
title_exact_search | Statistics for machine learning : build supervised, unsupervised, and reinforcement learning models using both Python and R / |
title_full | Statistics for machine learning : build supervised, unsupervised, and reinforcement learning models using both Python and R / Pratap Dangeti. |
title_fullStr | Statistics for machine learning : build supervised, unsupervised, and reinforcement learning models using both Python and R / Pratap Dangeti. |
title_full_unstemmed | Statistics for machine learning : build supervised, unsupervised, and reinforcement learning models using both Python and R / Pratap Dangeti. |
title_short | Statistics for machine learning : |
title_sort | statistics for machine learning build supervised unsupervised and reinforcement learning models using both python and r |
title_sub | build supervised, unsupervised, and reinforcement learning models using both Python and R / |
topic | Big data Statistical methods. Machine learning. http://id.loc.gov/authorities/subjects/sh85079324 Python (Computer program language) http://id.loc.gov/authorities/subjects/sh96008834 R (Computer program language) http://id.loc.gov/authorities/subjects/sh2002004407 Données volumineuses Méthodes statistiques. Apprentissage automatique. Python (Langage de programmation) R (Langage de programmation) COMPUTERS Programming Languages Python. bisacsh COMPUTERS Data Processing. bisacsh Machine learning fast Python (Computer program language) fast R (Computer program language) fast |
topic_facet | Big data Statistical methods. Machine learning. Python (Computer program language) R (Computer program language) Données volumineuses Méthodes statistiques. Apprentissage automatique. Python (Langage de programmation) R (Langage de programmation) COMPUTERS Programming Languages Python. COMPUTERS Data Processing. Machine learning |
work_keys_str_mv | AT dangetipratap statisticsformachinelearningbuildsupervisedunsupervisedandreinforcementlearningmodelsusingbothpythonandr |