Apache Spark machine learning blueprints :: develop a range of cutting-edge machine learning projects with Apache Spark using this actionable guide /
Saved in:
Main Author: | Liu, Alex |
---|---|
Format: | Electronic eBook |
Language: | English |
Published: | Birmingham, UK : Packt Publishing, 2016. |
Series: | Community experience distilled. |
Subjects: | Spark (Electronic resource : Apache Software Foundation); Machine learning; Big data; Information retrieval |
Online Access: | DE-862 DE-863 |
Summary: | Develop a range of cutting-edge machine learning projects with Apache Spark using this actionable guide About This Book Customize Apache Spark and R to fit your analytical needs in customer research, fraud detection, risk analytics, and recommendation engine development Develop a set of practical Machine Learning applications that can be implemented in real-life projects A comprehensive, project-based guide to improve and refine your predictive models for practical implementation Who This Book Is For If you are a data scientist, a data analyst, or an R and SPSS user with a good understanding of machine learning concepts, algorithms, and techniques, then this is the book for you. Some basic understanding of Spark and its core elements and applications is required. What You Will Learn Set up Apache Spark for machine learning and discover its impressive processing power Combine Spark and R to unlock detailed business insights essential for decision making Build machine learning systems with Spark that can detect fraud and analyze financial risks Build predictive models focusing on customer scoring and service ranking Build recommendation systems using SPSS on Apache Spark Tackle parallel computing and find out how it can support your machine learning projects Turn open data and communication data into actionable insights by making use of various forms of machine learning In Detail There's a reason why Apache Spark has become one of the most popular tools in Machine Learning: its ability to handle huge datasets at an impressive speed means you can be much more responsive to the data at your disposal. This book shows you Spark at its very best, demonstrating how to connect it with R and unlock maximum value not only from the tool but also from your data.
Packed with a range of project "blueprints" that demonstrate some of the most interesting challenges that Spark can help you tackle, you'll find out how to use Spark notebooks and access, clean, and join different datasets before putting your knowledge into practice with some real-world projects, in which you will see how Spark Machine Learning can help you with everything from fraud detection to analyzing customer attrition. You'll also find out how to build a recommendation engine using Spark's parallel computing powers. Style and approach This book offers a step-by-step approach to setting up Apache Spark and using other analytical tools with it to process Big Data and build machine learning pr... |
Description: | Includes index. |
Description: | 1 online resource : illustrations. |
ISBN: | 9781785887789 1785887785 178588039X 9781785880391 |
Internal Format
MARC
LEADER | 00000cam a2200000 i 4500 | ||
---|---|---|---|
001 | ZDB-4-EBA-ocn952135851 | ||
003 | OCoLC | ||
005 | 20250103110447.0 | ||
006 | m o d | ||
007 | cr unu|||||||| | ||
008 | 160623s2016 enka o 001 0 eng d | ||
040 | |a UMI |b eng |e rda |e pn |c UMI |d N$T |d OCLCF |d DEBBG |d DEBSZ |d CEF |d NLE |d UKMGB |d ZCU |d AGLDB |d IGB |d UKAHL |d CZL |d OCLCO |d OCLCQ |d OCLCO |d OCLCL |d OCLCQ |d DXU | ||
015 | |a GBB6G3510 |2 bnb | ||
016 | 7 | |a 018010637 |2 Uk | |
020 | |a 9781785887789 |q (electronic bk.) | ||
020 | |a 1785887785 |q (electronic bk.) | ||
020 | |z 9781785880391 | ||
020 | |a 178588039X | ||
020 | |a 9781785880391 | ||
035 | |a (OCoLC)952135851 | ||
037 | |a CL0500000750 |b Safari Books Online | ||
050 | 4 | |a Q325.5 | |
072 | 7 | |a COM |x 000000 |2 bisacsh | |
082 | 7 | |a 006.31 |2 23 | |
049 | |a MAIN | ||
100 | 1 | |a Liu, Alex, |e author. | |
245 | 1 | 0 | |a Apache Spark machine learning blueprints : |b develop a range of cutting-edge machine learning projects with Apache Spark using this actionable guide / |c Alex Liu. |
264 | 1 | |a Birmingham, UK : |b Packt Publishing, |c 2016. | |
300 | |a 1 online resource : |b illustrations. | ||
336 | |a text |b txt |2 rdacontent | ||
337 | |a computer |b c |2 rdamedia | ||
338 | |a online resource |b cr |2 rdacarrier | ||
490 | 1 | |a Community experience distilled | |
588 | |a Description based on online resource; title from cover (Safari, viewed June 22, 2016). | ||
500 | |a Includes index. | ||
520 | |a Develop a range of cutting-edge machine learning projects with Apache Spark using this actionable guide About This Book Customize Apache Spark and R to fit your analytical needs in customer research, fraud detection, risk analytics, and recommendation engine development Develop a set of practical Machine Learning applications that can be implemented in real-life projects A comprehensive, project-based guide to improve and refine your predictive models for practical implementation Who This Book Is For If you are a data scientist, a data analyst, or an R and SPSS user with a good understanding of machine learning concepts, algorithms, and techniques, then this is the book for you. Some basic understanding of Spark and its core elements and applications is required. What You Will Learn Set up Apache Spark for machine learning and discover its impressive processing power Combine Spark and R to unlock detailed business insights essential for decision making Build machine learning systems with Spark that can detect fraud and analyze financial risks Build predictive models focusing on customer scoring and service ranking Build recommendation systems using SPSS on Apache Spark Tackle parallel computing and find out how it can support your machine learning projects Turn open data and communication data into actionable insights by making use of various forms of machine learning In Detail There's a reason why Apache Spark has become one of the most popular tools in Machine Learning: its ability to handle huge datasets at an impressive speed means you can be much more responsive to the data at your disposal. This book shows you Spark at its very best, demonstrating how to connect it with R and unlock maximum value not only from the tool but also from your data.
Packed with a range of project "blueprints" that demonstrate some of the most interesting challenges that Spark can help you tackle, you'll find out how to use Spark notebooks and access, clean, and join different datasets before putting your knowledge into practice with some real-world projects, in which you will see how Spark Machine Learning can help you with everything from fraud detection to analyzing customer attrition. You'll also find out how to build a recommendation engine using Spark's parallel computing powers. Style and approach This book offers a step-by-step approach to setting up Apache Spark and using other analytical tools with it to process Big Data and build machine learning pr... | ||
505 | 0 | |a Cover -- Copyright -- Credits -- About the Author -- About the Reviewer -- www.PacktPub.com -- Table of Contents -- Preface -- Chapter 1: Spark for Machine Learning -- Spark overview and Spark advantages -- Spark overview -- Spark advantages -- Spark computing for machine learning -- Machine learning algorithms -- MLlib -- Other ML libraries -- Spark RDD and dataframes -- Spark RDD -- Spark dataframes -- Dataframes API for R -- ML frameworks, RM4Es and Spark computing -- ML frameworks -- RM4Es -- The Spark computing framework -- ML workflows and Spark pipelines -- ML as a step-by-step workflow -- ML workflow examples -- Spark notebooks -- Notebook approach for ML -- Step 1: Getting the software ready -- Step 2: Installing the Knitr package -- Step 3: Creating a simple report -- Spark notebooks -- Summary -- Chapter 2: Data Preparation for Spark ML -- Accessing and loading datasets -- Accessing publicly available datasets -- Loading datasets into Spark -- Exploring and visualizing datasets -- Data cleaning -- Dealing with data incompleteness -- Data cleaning in Spark -- Data cleaning made easy -- Identity matching -- Identity issues -- Identity matching on Spark -- Entity resolution -- Short string comparison -- Long string comparison -- Record deduplication -- Identity matching made better -- Crowdsourced deduplication -- Configuring the crowd -- Using the crowd -- Dataset reorganizing -- Dataset reorganizing tasks -- Dataset reorganizing with Spark SQL -- Dataset reorganizing with R on Spark -- Dataset joining -- Dataset joining and its tool -- the Spark SQL -- Dataset joining in Spark -- Dataset joining with the R data table package -- Feature extraction -- Feature development challenges -- Feature development with Spark MLlib -- Feature development with R -- Repeatability and automation -- Dataset preprocessing workflows. | |
505 | 8 | |a Spark pipelines for dataset preprocessing -- Dataset preprocessing automation -- Summary -- Chapter 3: A Holistic View on Spark -- Spark for a holistic view -- The use case -- Fast and easy computing -- Methods for a holistic view -- Regression modeling -- The SEM approach -- Decision trees -- Feature preparation -- PCA -- Grouping by category to use subject knowledge -- Feature selection -- Model estimation -- MLlib implementation -- The R notebooks' implementation -- Model evaluation -- Quick evaluations -- RMSE -- ROC curves -- Results explanation -- Impact assessments -- Deployment -- Dashboard -- Rules -- Summary -- Chapter 4: Fraud Detection on Spark -- Spark for fraud detection -- The use case -- Distributed computing -- Methods for fraud detection -- Random forest -- Decision trees -- Feature preparation -- Feature extraction from LogFile -- Data merging -- Model estimation -- MLlib implementation -- R notebooks implementation -- Model evaluation -- A quick evaluation -- Confusion matrix and false positive ratios -- Results explanation -- Big influencers and their impacts -- Deploying fraud detection -- Rules -- Scoring -- Summary -- Chapter 5: Risk Scoring on Spark -- Spark for risk scoring -- The use case -- Apache Spark notebooks -- Methods of risk scoring -- Logistic regression -- Preparing coding in R -- Random forest and decision trees -- Preparing coding -- Data and feature preparation -- OpenRefine -- Model estimation -- The DataScientistWorkbench for R notebooks -- R notebooks implementation -- Model evaluation -- Confusion matrix -- ROC -- Kolmogorov-Smirnov -- Results explanation -- Big influencers and their impacts -- Deployment -- Scoring -- Summary -- Chapter 6: Churn Prediction on Spark -- Spark for churn prediction -- The use case -- Spark computing -- Methods for churn prediction -- Regression models. | |
505 | 8 | |a Decision trees and Random forest -- Feature preparation -- Feature extraction -- Feature selection -- Model estimation -- Spark implementation with MLlib -- Model evaluation -- Results explanation -- Calculating the impact of interventions -- Deployment -- Scoring -- Intervention recommendations -- Summary -- Chapter 7: Recommendations on Spark -- Apache Spark for a recommendation engine -- The use case -- SPSS on Spark -- Methods for recommendation -- Collaborative filtering -- Preparing coding -- Data treatment with SPSS -- Missing data nodes on SPSS modeler -- Model estimation -- SPSS on Spark -- the SPSS Analytics server -- Model evaluation -- Recommendation deployment -- Summary -- Chapter 8: Learning Analytics on Spark -- Spark for attrition prediction -- The use case -- Spark computing -- Methods of attrition prediction -- Regression models -- About regression -- Preparing for coding -- Decision trees -- Preparing for coding -- Feature preparation -- Feature development -- Feature selection -- Principal components analysis -- ML feature selection -- Model estimation -- Spark implementation with the Zeppelin notebook -- Model evaluation -- A quick evaluation -- The confusion matrix and error ratios -- Results explanation -- Calculating the impact of interventions -- Calculating the impact of main causes -- Deployment -- Rules -- Scoring -- Summary -- Chapter 9: City Analytics on Spark -- Spark for service forecasting -- The use case -- Spark computing -- Methods of service forecasting -- Regression models -- About regression -- Preparing for coding -- Time series modeling -- About time series -- Preparing for coding -- Data and feature preparation -- Data merging -- Feature selection -- Model estimation -- Spark implementation with the Zeppelin notebook -- Spark implementation with the R notebook -- Model evaluation. | |
505 | 8 | |a RMSE calculation with MLlib -- RMSE calculation with R -- Explanations of the results -- Biggest influencers -- Visualizing trends -- The rules of sending out alerts -- Scores to rank city zones -- Summary -- Chapter 10: Learning Telco Data on Spark -- Spark for using Telco Data -- The use case -- Spark computing -- Methods for learning from Telco Data -- Descriptive statistics and visualization -- Linear and logistic regression models -- Decision tree and random forest -- Data and feature development -- Data reorganizing -- Feature development and selection -- Model estimation -- SPSS on Spark -- SPSS Analytics Server -- Model evaluation -- RMSE calculations with MLlib -- RMSE calculations with R -- Confusion matrix and error ratios with MLlib and R -- Results explanation -- Descriptive statistics and visualizations -- Biggest influencers -- Special insights -- Visualizing trends -- Model deployment -- Rules to send out alerts -- Scores subscribers for churn and for Call Center calls -- Scores subscribers for purchase propensity -- Summary -- Chapter 11: Modeling Open Data on Spark -- Spark for learning from open data -- The use case -- Spark computing -- Methods for scoring and ranking -- Cluster analysis -- Principal component analysis -- Regression models -- Score resembling -- Data and feature preparation -- Data cleaning -- Data merging -- Feature development -- Feature selection -- Model estimation -- SPSS on Spark -- SPSS Analytics Server -- Model evaluation -- RMSE calculations with MLlib -- RMSE calculations with R -- Results explanation -- Comparing ranks -- Biggest influencers -- Deployment -- Rules for sending out alerts -- Scores for ranking school districts -- Summary -- Index. | |
630 | 0 | 0 | |a Spark (Electronic resource : Apache Software Foundation) |0 http://id.loc.gov/authorities/names/no2015027445 |
630 | 0 | 7 | |a Spark (Electronic resource : Apache Software Foundation) |2 fast |
650 | 0 | |a Machine learning. |0 http://id.loc.gov/authorities/subjects/sh85079324 | |
650 | 0 | |a Big data. |0 http://id.loc.gov/authorities/subjects/sh2012003227 | |
650 | 0 | |a Information retrieval. |0 http://id.loc.gov/authorities/subjects/sh85066148 | |
650 | 6 | |a Apprentissage automatique. | |
650 | 6 | |a Données volumineuses. | |
650 | 6 | |a Recherche de l'information. | |
650 | 7 | |a information retrieval. |2 aat | |
650 | 7 | |a COMPUTERS / General |2 bisacsh | |
650 | 7 | |a Big data |2 fast | |
650 | 7 | |a Information retrieval |2 fast | |
650 | 7 | |a Machine learning |2 fast | |
758 | |i has work: |a Apache Spark machine learning blueprints (Text) |1 https://id.oclc.org/worldcat/entity/E39PCYkPDgFyp4fFMHvQyc7dkP |4 https://id.oclc.org/worldcat/ontology/hasWork | ||
830 | 0 | |a Community experience distilled. |0 http://id.loc.gov/authorities/names/no2011030603 | |
966 | 4 | 0 | |l DE-862 |p ZDB-4-EBA |q FWS_PDA_EBA |u https://search.ebscohost.com/login.aspx?direct=true&scope=site&db=nlebk&AN=1243153 |3 Volltext |
966 | 4 | 0 | |l DE-863 |p ZDB-4-EBA |q FWS_PDA_EBA |u https://search.ebscohost.com/login.aspx?direct=true&scope=site&db=nlebk&AN=1243153 |3 Volltext |
938 | |a Askews and Holts Library Services |b ASKH |n AH30741106 | ||
938 | |a EBSCOhost |b EBSC |n 1243153 | ||
994 | |a 92 |b GEBAY | ||
912 | |a ZDB-4-EBA | ||
049 | |a DE-862 | ||
049 | |a DE-863 |
Record in Search Index
DE-BY-FWS_katkey | ZDB-4-EBA-ocn952135851 |
---|---|
_version_ | 1829095072237355008 |
adam_text | |
any_adam_object | |
author | Liu, Alex |
author_facet | Liu, Alex |
author_role | aut |
author_sort | Liu, Alex |
author_variant | a l al |
building | Verbundindex |
bvnumber | localFWS |
callnumber-first | Q - Science |
callnumber-label | Q325 |
callnumber-raw | Q325.5 |
callnumber-search | Q325.5 |
callnumber-sort | Q 3325.5 |
callnumber-subject | Q - General Science |
collection | ZDB-4-EBA |
contents | Cover -- Copyright -- Credits -- About the Author -- About the Reviewer -- www.PacktPub.com -- Table of Contents -- Preface -- Chapter 1: Spark for Machine Learning -- Spark overview and Spark advantages -- Spark overview -- Spark advantages -- Spark computing for machine learning -- Machine learning algorithms -- MLlib -- Other ML libraries -- Spark RDD and dataframes -- Spark RDD -- Spark dataframes -- Dataframes API for R -- ML frameworks, RM4Es and Spark computing -- ML frameworks -- RM4Es -- The Spark computing framework -- ML workflows and Spark pipelines -- ML as a step-by-step workflow -- ML workflow examples -- Spark notebooks -- Notebook approach for ML -- Step 1: Getting the software ready -- Step 2: Installing the Knitr package -- Step 3: Creating a simple report -- Spark notebooks -- Summary -- Chapter 2: Data Preparation for Spark ML -- Accessing and loading datasets -- Accessing publicly available datasets -- Loading datasets into Spark -- Exploring and visualizing datasets -- Data cleaning -- Dealing with data incompleteness -- Data cleaning in Spark -- Data cleaning made easy -- Identity matching -- Identity issues -- Identity matching on Spark -- Entity resolution -- Short string comparison -- Long string comparison -- Record deduplication -- Identity matching made better -- Crowdsourced deduplication -- Configuring the crowd -- Using the crowd -- Dataset reorganizing -- Dataset reorganizing tasks -- Dataset reorganizing with Spark SQL -- Dataset reorganizing with R on Spark -- Dataset joining -- Dataset joining and its tool -- the Spark SQL -- Dataset joining in Spark -- Dataset joining with the R data table package -- Feature extraction -- Feature development challenges -- Feature development with Spark MLlib -- Feature development with R -- Repeatability and automation -- Dataset preprocessing workflows. 
Spark pipelines for dataset preprocessing -- Dataset preprocessing automation -- Summary -- Chapter 3: A Holistic View on Spark -- Spark for a holistic view -- The use case -- Fast and easy computing -- Methods for a holistic view -- Regression modeling -- The SEM approach -- Decision trees -- Feature preparation -- PCA -- Grouping by category to use subject knowledge -- Feature selection -- Model estimation -- MLlib implementation -- The R notebooks' implementation -- Model evaluation -- Quick evaluations -- RMSE -- ROC curves -- Results explanation -- Impact assessments -- Deployment -- Dashboard -- Rules -- Summary -- Chapter 4: Fraud Detection on Spark -- Spark for fraud detection -- The use case -- Distributed computing -- Methods for fraud detection -- Random forest -- Decision trees -- Feature preparation -- Feature extraction from LogFile -- Data merging -- Model estimation -- MLlib implementation -- R notebooks implementation -- Model evaluation -- A quick evaluation -- Confusion matrix and false positive ratios -- Results explanation -- Big influencers and their impacts -- Deploying fraud detection -- Rules -- Scoring -- Summary -- Chapter 5: Risk Scoring on Spark -- Spark for risk scoring -- The use case -- Apache Spark notebooks -- Methods of risk scoring -- Logistic regression -- Preparing coding in R -- Random forest and decision trees -- Preparing coding -- Data and feature preparation -- OpenRefine -- Model estimation -- The DataScientistWorkbench for R notebooks -- R notebooks implementation -- Model evaluation -- Confusion matrix -- ROC -- Kolmogorov-Smirnov -- Results explanation -- Big influencers and their impacts -- Deployment -- Scoring -- Summary -- Chapter 6: Churn Prediction on Spark -- Spark for churn prediction -- The use case -- Spark computing -- Methods for churn prediction -- Regression models. 
Decision trees and Random forest -- Feature preparation -- Feature extraction -- Feature selection -- Model estimation -- Spark implementation with MLlib -- Model evaluation -- Results explanation -- Calculating the impact of interventions -- Deployment -- Scoring -- Intervention recommendations -- Summary -- Chapter 7: Recommendations on Spark -- Apache Spark for a recommendation engine -- The use case -- SPSS on Spark -- Methods for recommendation -- Collaborative filtering -- Preparing coding -- Data treatment with SPSS -- Missing data nodes on SPSS modeler -- Model estimation -- SPSS on Spark -- the SPSS Analytics server -- Model evaluation -- Recommendation deployment -- Summary -- Chapter 8: Learning Analytics on Spark -- Spark for attrition prediction -- The use case -- Spark computing -- Methods of attrition prediction -- Regression models -- About regression -- Preparing for coding -- Decision trees -- Preparing for coding -- Feature preparation -- Feature development -- Feature selection -- Principal components analysis -- ML feature selection -- Model estimation -- Spark implementation with the Zeppelin notebook -- Model evaluation -- A quick evaluation -- The confusion matrix and error ratios -- Results explanation -- Calculating the impact of interventions -- Calculating the impact of main causes -- Deployment -- Rules -- Scoring -- Summary -- Chapter 9: City Analytics on Spark -- Spark for service forecasting -- The use case -- Spark computing -- Methods of service forecasting -- Regression models -- About regression -- Preparing for coding -- Time series modeling -- About time series -- Preparing for coding -- Data and feature preparation -- Data merging -- Feature selection -- Model estimation -- Spark implementation with the Zeppelin notebook -- Spark implementation with the R notebook -- Model evaluation. 
RMSE calculation with MLlib -- RMSE calculation with R -- Explanations of the results -- Biggest influencers -- Visualizing trends -- The rules of sending out alerts -- Scores to rank city zones -- Summary -- Chapter 10: Learning Telco Data on Spark -- Spark for using Telco Data -- The use case -- Spark computing -- Methods for learning from Telco Data -- Descriptive statistics and visualization -- Linear and logistic regression models -- Decision tree and random forest -- Data and feature development -- Data reorganizing -- Feature development and selection -- Model estimation -- SPSS on Spark -- SPSS Analytics Server -- Model evaluation -- RMSE calculations with MLlib -- RMSE calculations with R -- Confusion matrix and error ratios with MLlib and R -- Results explanation -- Descriptive statistics and visualizations -- Biggest influencers -- Special insights -- Visualizing trends -- Model deployment -- Rules to send out alerts -- Scores subscribers for churn and for Call Center calls -- Scores subscribers for purchase propensity -- Summary -- Chapter 11: Modeling Open Data on Spark -- Spark for learning from open data -- The use case -- Spark computing -- Methods for scoring and ranking -- Cluster analysis -- Principal component analysis -- Regression models -- Score resembling -- Data and feature preparation -- Data cleaning -- Data merging -- Feature development -- Feature selection -- Model estimation -- SPSS on Spark -- SPSS Analytics Server -- Model evaluation -- RMSE calculations with MLlib -- RMSE calculations with R -- Results explanation -- Comparing ranks -- Biggest influencers -- Deployment -- Rules for sending out alerts -- Scores for ranking school districts -- Summary -- Index. |
ctrlnum | (OCoLC)952135851 |
dewey-full | 006.31 |
dewey-hundreds | 000 - Computer science, information, general works |
dewey-ones | 006 - Special computer methods |
dewey-raw | 006.31 |
dewey-search | 006.31 |
dewey-sort | 16.31 |
dewey-tens | 000 - Computer science, information, general works |
discipline | Informatik |
format | Electronic eBook |
fullrecord | <?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>12589cam a2200673 i 4500</leader><controlfield tag="001">ZDB-4-EBA-ocn952135851</controlfield><controlfield tag="003">OCoLC</controlfield><controlfield tag="005">20250103110447.0</controlfield><controlfield tag="006">m o d </controlfield><controlfield tag="007">cr unu||||||||</controlfield><controlfield tag="008">160623s2016 enka o 001 0 eng d</controlfield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">UMI</subfield><subfield code="b">eng</subfield><subfield code="e">rda</subfield><subfield code="e">pn</subfield><subfield code="c">UMI</subfield><subfield code="d">N$T</subfield><subfield code="d">OCLCF</subfield><subfield code="d">DEBBG</subfield><subfield code="d">DEBSZ</subfield><subfield code="d">CEF</subfield><subfield code="d">NLE</subfield><subfield code="d">UKMGB</subfield><subfield code="d">ZCU</subfield><subfield code="d">AGLDB</subfield><subfield code="d">IGB</subfield><subfield code="d">UKAHL</subfield><subfield code="d">CZL</subfield><subfield code="d">OCLCO</subfield><subfield code="d">OCLCQ</subfield><subfield code="d">OCLCO</subfield><subfield code="d">OCLCL</subfield><subfield code="d">OCLCQ</subfield><subfield code="d">DXU</subfield></datafield><datafield tag="015" ind1=" " ind2=" "><subfield code="a">GBB6G3510</subfield><subfield code="2">bnb</subfield></datafield><datafield tag="016" ind1="7" ind2=" "><subfield code="a">018010637</subfield><subfield code="2">Uk</subfield></datafield><datafield tag="020" ind1=" " ind2=" "><subfield code="a">9781785887789</subfield><subfield code="q">(electronic bk.)</subfield></datafield><datafield tag="020" ind1=" " ind2=" "><subfield code="a">1785887785</subfield><subfield code="q">(electronic bk.)</subfield></datafield><datafield tag="020" ind1=" " ind2=" "><subfield code="z">9781785880391</subfield></datafield><datafield tag="020" ind1=" " ind2=" "><subfield 
code="a">178588039X</subfield></datafield><datafield tag="020" ind1=" " ind2=" "><subfield code="a">9781785880391</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(OCoLC)952135851</subfield></datafield><datafield tag="037" ind1=" " ind2=" "><subfield code="a">CL0500000750</subfield><subfield code="b">Safari Books Online</subfield></datafield><datafield tag="050" ind1=" " ind2="4"><subfield code="a">Q325.5</subfield></datafield><datafield tag="072" ind1=" " ind2="7"><subfield code="a">COM</subfield><subfield code="x">000000</subfield><subfield code="2">bisacsh</subfield></datafield><datafield tag="082" ind1="7" ind2=" "><subfield code="a">006.31</subfield><subfield code="2">23</subfield></datafield><datafield tag="049" ind1=" " ind2=" "><subfield code="a">MAIN</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Liu, Alex,</subfield><subfield code="e">author.</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Apache Spark machine learning blueprints :</subfield><subfield code="b">develop a range of cutting-edge machine learning projects with Apache Spark using this actionable guide /</subfield><subfield code="c">Alex Liu.</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="a">Birmingham, UK :</subfield><subfield code="b">Packt Publishing,</subfield><subfield code="c">2016.</subfield></datafield><datafield tag="300" ind1=" " ind2=" "><subfield code="a">1 online resource :</subfield><subfield code="b">illustrations.</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">computer</subfield><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">online resource</subfield><subfield 
code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="490" ind1="1" ind2=" "><subfield code="a">Community experience distilled</subfield></datafield><datafield tag="588" ind1=" " ind2=" "><subfield code="a">Description based on online resource; title from cover (Safari, viewed June 22, 2016).</subfield></datafield><datafield tag="500" ind1=" " ind2=" "><subfield code="a">Includes index.</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Develop a range of cutting-edge machine learning projects with Apache Spark using this actionable guide About This Book Customize Apache Spark and R to fit your analytical needs in customer research, fraud detection, risk analytics, and recommendation engine development Develop a set of practical Machine Learning applications that can be implemented in real-life projects A comprehensive, project-based guide to improve and refine your predictive models for practical implementation Who This Book Is For If you are a data scientist, a data analyst, or an R and SPSS user with a good understanding of machine learning concepts, algorithms, and techniques, then this is the book for you. Some basic understanding of Spark and its core elements and application is required. 
What You Will Learn Set up Apache Spark for machine learning and discover its impressive processing power Combine Spark and R to unlock detailed business insights essential for decision making Build machine learning systems with Spark that can detect fraud and analyze financial risks Build predictive models focusing on customer scoring and service ranking Build a recommendation systems using SPSS on Apache Spark Tackle parallel computing and find out how it can support your machine learning projects Turn open data and communication data into actionable insights by making use of various forms of machine learning In Detail There's a reason why Apache Spark has become one of the most popular tools in Machine Learning ? its ability to handle huge datasets at an impressive speed means you can be much more responsive to the data at your disposal. This book shows you Spark at its very best, demonstrating how to connect it with R and unlock maximum value not only from the tool but also from your data. Packed with a range of project "blueprints" that demonstrate some of the most interesting challenges that Spark can help you tackle, you'll find out how to use Spark notebooks and access, clean, and join different datasets before putting your knowledge into practice with some real-world projects, in which you will see how Spark Machine Learning can help you with everything from fraud detection to analyzing customer attrition. You'll also find out how to build a recommendation engine using Spark's parallel computing powers. 
Style and approach This book offers a step-by-step approach to setting up Apache Spark and using other analytical tools with it to process Big Data and build machine learning pr... |
online_access | DE-862, DE-863: https://search.ebscohost.com/login.aspx?direct=true&scope=site&db=nlebk&AN=1243153 (ZDB-4-EBA FWS_PDA_EBA, full text) |
vendor_records | Askews and Holts Library Services (ASKH) AH30741106; EBSCOhost (EBSC) 1243153 |
id | ZDB-4-EBA-ocn952135851 |
illustrated | Illustrated |
indexdate | 2025-04-11T08:43:12Z |
institution | BVB |
isbn | 9781785887789 1785887785 178588039X 9781785880391 |
language | English |
oclc_num | 952135851 |
open_access_boolean | |
owner | MAIN DE-862 DE-BY-FWS DE-863 DE-BY-FWS |
owner_facet | MAIN DE-862 DE-BY-FWS DE-863 DE-BY-FWS |
physical | 1 online resource : illustrations. |
psigel | ZDB-4-EBA FWS_PDA_EBA ZDB-4-EBA |
publishDate | 2016 |
publishDateSearch | 2016 |
publishDateSort | 2016 |
publisher | Packt Publishing, |
record_format | marc |
series | Community experience distilled. |
series2 | Community experience distilled |
spelling | Liu, Alex, author. Apache Spark machine learning blueprints : develop a range of cutting-edge machine learning projects with Apache Spark using this actionable guide / Alex Liu. Birmingham, UK : Packt Publishing, 2016. 1 online resource : illustrations. text txt rdacontent computer c rdamedia online resource cr rdacarrier Community experience distilled Description based on online resource; title from cover (Safari, viewed June 22, 2016). Includes index. Develop a range of cutting-edge machine learning projects with Apache Spark using this actionable guide About This Book Customize Apache Spark and R to fit your analytical needs in customer research, fraud detection, risk analytics, and recommendation engine development Develop a set of practical Machine Learning applications that can be implemented in real-life projects A comprehensive, project-based guide to improve and refine your predictive models for practical implementation Who This Book Is For If you are a data scientist, a data analyst, or an R and SPSS user with a good understanding of machine learning concepts, algorithms, and techniques, then this is the book for you. Some basic understanding of Spark and its core elements and application is required. What You Will Learn Set up Apache Spark for machine learning and discover its impressive processing power Combine Spark and R to unlock detailed business insights essential for decision making Build machine learning systems with Spark that can detect fraud and analyze financial risks Build predictive models focusing on customer scoring and service ranking Build a recommendation system using SPSS on Apache Spark Tackle parallel computing and find out how it can support your machine learning projects Turn open data and communication data into actionable insights by making use of various forms of machine learning In Detail There's a reason why Apache Spark has become one of the most popular tools in Machine Learning: 
its ability to handle huge datasets at an impressive speed means you can be much more responsive to the data at your disposal. This book shows you Spark at its very best, demonstrating how to connect it with R and unlock maximum value not only from the tool but also from your data. Packed with a range of project "blueprints" that demonstrate some of the most interesting challenges that Spark can help you tackle, you'll find out how to use Spark notebooks and access, clean, and join different datasets before putting your knowledge into practice with some real-world projects, in which you will see how Spark Machine Learning can help you with everything from fraud detection to analyzing customer attrition. You'll also find out how to build a recommendation engine using Spark's parallel computing powers. Style and approach This book offers a step-by-step approach to setting up Apache Spark and using other analytical tools with it to process Big Data and build machine learning pr... Cover -- Copyright -- Credits -- About the Author -- About the Reviewer -- www.PacktPub.com -- Table of Contents -- Preface -- Chapter 1: Spark for Machine Learning -- Spark overview and Spark advantages -- Spark overview -- Spark advantages -- Spark computing for machine learning -- Machine learning algorithms -- MLlib -- Other ML libraries -- Spark RDD and dataframes -- Spark RDD -- Spark dataframes -- Dataframes API for R -- ML frameworks, RM4Es and Spark computing -- ML frameworks -- RM4Es -- The Spark computing framework -- ML workflows and Spark pipelines -- ML as a step-by-step workflow -- ML workflow examples -- Spark notebooks -- Notebook approach for ML -- Step 1: Getting the software ready -- Step 2: Installing the Knitr package -- Step 3: Creating a simple report -- Spark notebooks -- Summary -- Chapter 2: Data Preparation for Spark ML -- Accessing and loading datasets -- Accessing publicly available datasets -- Loading datasets into Spark -- Exploring and visualizing datasets -- 
Data cleaning -- Dealing with data incompleteness -- Data cleaning in Spark -- Data cleaning made easy -- Identity matching -- Identity issues -- Identity matching on Spark -- Entity resolution -- Short string comparison -- Long string comparison -- Record deduplication -- Identity matching made better -- Crowdsourced deduplication -- Configuring the crowd -- Using the crowd -- Dataset reorganizing -- Dataset reorganizing tasks -- Dataset reorganizing with Spark SQL -- Dataset reorganizing with R on Spark -- Dataset joining -- Dataset joining and its tool -- the Spark SQL -- Dataset joining in Spark -- Dataset joining with the R data table package -- Feature extraction -- Feature development challenges -- Feature development with Spark MLlib -- Feature development with R -- Repeatability and automation -- Dataset preprocessing workflows. Spark pipelines for dataset preprocessing -- Dataset preprocessing automation -- Summary -- Chapter 3: A Holistic View on Spark -- Spark for a holistic view -- The use case -- Fast and easy computing -- Methods for a holistic view -- Regression modeling -- The SEM approach -- Decision trees -- Feature preparation -- PCA -- Grouping by category to use subject knowledge -- Feature selection -- Model estimation -- MLlib implementation -- The R notebooks' implementation -- Model evaluation -- Quick evaluations -- RMSE -- ROC curves -- Results explanation -- Impact assessments -- Deployment -- Dashboard -- Rules -- Summary -- Chapter 4: Fraud Detection on Spark -- Spark for fraud detection -- The use case -- Distributed computing -- Methods for fraud detection -- Random forest -- Decision trees -- Feature preparation -- Feature extraction from LogFile -- Data merging -- Model estimation -- MLlib implementation -- R notebooks implementation -- Model evaluation -- A quick evaluation -- Confusion matrix and false positive ratios -- Results explanation -- Big influencers and their impacts -- Deploying fraud detection -- Rules -- Scoring -- 
Summary -- Chapter 5: Risk Scoring on Spark -- Spark for risk scoring -- The use case -- Apache Spark notebooks -- Methods of risk scoring -- Logistic regression -- Preparing coding in R -- Random forest and decision trees -- Preparing coding -- Data and feature preparation -- OpenRefine -- Model estimation -- The DataScientistWorkbench for R notebooks -- R notebooks implementation -- Model evaluation -- Confusion matrix -- ROC -- Kolmogorov-Smirnov -- Results explanation -- Big influencers and their impacts -- Deployment -- Scoring -- Summary -- Chapter 6: Churn Prediction on Spark -- Spark for churn prediction -- The use case -- Spark computing -- Methods for churn prediction -- Regression models. Decision trees and Random forest -- Feature preparation -- Feature extraction -- Feature selection -- Model estimation -- Spark implementation with MLlib -- Model evaluation -- Results explanation -- Calculating the impact of interventions -- Deployment -- Scoring -- Intervention recommendations -- Summary -- Chapter 7: Recommendations on Spark -- Apache Spark for a recommendation engine -- The use case -- SPSS on Spark -- Methods for recommendation -- Collaborative filtering -- Preparing coding -- Data treatment with SPSS -- Missing data nodes on SPSS modeler -- Model estimation -- SPSS on Spark -- the SPSS Analytics server -- Model evaluation -- Recommendation deployment -- Summary -- Chapter 8: Learning Analytics on Spark -- Spark for attrition prediction -- The use case -- Spark computing -- Methods of attrition prediction -- Regression models -- About regression -- Preparing for coding -- Decision trees -- Preparing for coding -- Feature preparation -- Feature development -- Feature selection -- Principal components analysis -- ML feature selection -- Model estimation -- Spark implementation with the Zeppelin notebook -- Model evaluation -- A quick evaluation -- The confusion matrix and error ratios -- Results explanation -- Calculating the impact of interventions 
-- Calculating the impact of main causes -- Deployment -- Rules -- Scoring -- Summary -- Chapter 9: City Analytics on Spark -- Spark for service forecasting -- The use case -- Spark computing -- Methods of service forecasting -- Regression models -- About regression -- Preparing for coding -- Time series modeling -- About time series -- Preparing for coding -- Data and feature preparation -- Data merging -- Feature selection -- Model estimation -- Spark implementation with the Zeppelin notebook -- Spark implementation with the R notebook -- Model evaluation. RMSE calculation with MLlib -- RMSE calculation with R -- Explanations of the results -- Biggest influencers -- Visualizing trends -- The rules of sending out alerts -- Scores to rank city zones -- Summary -- Chapter 10: Learning Telco Data on Spark -- Spark for using Telco Data -- The use case -- Spark computing -- Methods for learning from Telco Data -- Descriptive statistics and visualization -- Linear and logistic regression models -- Decision tree and random forest -- Data and feature development -- Data reorganizing -- Feature development and selection -- Model estimation -- SPSS on Spark -- SPSS Analytics Server -- Model evaluation -- RMSE calculations with MLlib -- RMSE calculations with R -- Confusion matrix and error ratios with MLlib and R -- Results explanation -- Descriptive statistics and visualizations -- Biggest influencers -- Special insights -- Visualizing trends -- Model deployment -- Rules to send out alerts -- Scores subscribers for churn and for Call Center calls -- Scores subscribers for purchase propensity -- Summary -- Chapter 11: Modeling Open Data on Spark -- Spark for learning from open data -- The use case -- Spark computing -- Methods for scoring and ranking -- Cluster analysis -- Principal component analysis -- Regression models -- Score resembling -- Data and feature preparation -- Data cleaning -- Data merging -- Feature development -- Feature selection -- Model estimation -- 
SPSS on Spark -- SPSS Analytics Server -- Model evaluation -- RMSE calculations with MLlib -- RMSE calculations with R -- Results explanation -- Comparing ranks -- Biggest influencers -- Deployment -- Rules for sending out alerts -- Scores for ranking school districts -- Summary -- Index. Spark (Electronic resource : Apache Software Foundation) http://id.loc.gov/authorities/names/no2015027445 Spark (Electronic resource : Apache Software Foundation) fast Machine learning. http://id.loc.gov/authorities/subjects/sh85079324 Big data. http://id.loc.gov/authorities/subjects/sh2012003227 Information retrieval. http://id.loc.gov/authorities/subjects/sh85066148 Apprentissage automatique. Données volumineuses. Recherche de l'information. information retrieval. aat COMPUTERS / General bisacsh Big data fast Information retrieval fast Machine learning fast has work: Apache Spark machine learning blueprints (Text) https://id.oclc.org/worldcat/entity/E39PCYkPDgFyp4fFMHvQyc7dkP https://id.oclc.org/worldcat/ontology/hasWork Community experience distilled. http://id.loc.gov/authorities/names/no2011030603 |
spellingShingle | Liu, Alex Apache Spark machine learning blueprints : develop a range of cutting-edge machine learning projects with Apache Spark using this actionable guide / Community experience distilled. Cover -- Copyright -- Credits -- About the Author -- About the Reviewer -- www.PacktPub.com -- Table of Contents -- Preface -- Chapter 1: Spark for Machine Learning -- Spark overview and Spark advantages -- Spark overview -- Spark advantages -- Spark computing for machine learning -- Machine learning algorithms -- MLlib -- Other ML libraries -- Spark RDD and dataframes -- Spark RDD -- Spark dataframes -- Dataframes API for R -- ML frameworks, RM4Es and Spark computing -- ML frameworks -- RM4Es -- The Spark computing framework -- ML workflows and Spark pipelines -- ML as a step-by-step workflow -- ML workflow examples -- Spark notebooks -- Notebook approach for ML -- Step 1: Getting the software ready -- Step 2: Installing the Knitr package -- Step 3: Creating a simple report -- Spark notebooks -- Summary -- Chapter 2: Data Preparation for Spark ML -- Accessing and loading datasets -- Accessing publicly available datasets -- Loading datasets into Spark -- Exploring and visualizing datasets -- Data cleaning -- Dealing with data incompleteness -- Data cleaning in Spark -- Data cleaning made easy -- Identity matching -- Identity issues -- Identity matching on Spark -- Entity resolution -- Short string comparison -- Long string comparison -- Record deduplication -- Identity matching made better -- Crowdsourced deduplication -- Configuring the crowd -- Using the crowd -- Dataset reorganizing -- Dataset reorganizing tasks -- Dataset reorganizing with Spark SQL -- Dataset reorganizing with R on Spark -- Dataset joining -- Dataset joining and its tool -- the Spark SQL -- Dataset joining in Spark -- Dataset joining with the R data table package -- Feature extraction -- Feature development challenges -- Feature development with Spark MLlib -- Feature development with R 
-- Repeatability and automation -- Dataset preprocessing workflows. Spark pipelines for dataset preprocessing -- Dataset preprocessing automation -- Summary -- Chapter 3: A Holistic View on Spark -- Spark for a holistic view -- The use case -- Fast and easy computing -- Methods for a holistic view -- Regression modeling -- The SEM approach -- Decision trees -- Feature preparation -- PCA -- Grouping by category to use subject knowledge -- Feature selection -- Model estimation -- MLlib implementation -- The R notebooks' implementation -- Model evaluation -- Quick evaluations -- RMSE -- ROC curves -- Results explanation -- Impact assessments -- Deployment -- Dashboard -- Rules -- Summary -- Chapter 4: Fraud Detection on Spark -- Spark for fraud detection -- The use case -- Distributed computing -- Methods for fraud detection -- Random forest -- Decision trees -- Feature preparation -- Feature extraction from LogFile -- Data merging -- Model estimation -- MLlib implementation -- R notebooks implementation -- Model evaluation -- A quick evaluation -- Confusion matrix and false positive ratios -- Results explanation -- Big influencers and their impacts -- Deploying fraud detection -- Rules -- Scoring -- Summary -- Chapter 5: Risk Scoring on Spark -- Spark for risk scoring -- The use case -- Apache Spark notebooks -- Methods of risk scoring -- Logistic regression -- Preparing coding in R -- Random forest and decision trees -- Preparing coding -- Data and feature preparation -- OpenRefine -- Model estimation -- The DataScientistWorkbench for R notebooks -- R notebooks implementation -- Model evaluation -- Confusion matrix -- ROC -- Kolmogorov-Smirnov -- Results explanation -- Big influencers and their impacts -- Deployment -- Scoring -- Summary -- Chapter 6: Churn Prediction on Spark -- Spark for churn prediction -- The use case -- Spark computing -- Methods for churn prediction -- Regression models. 
Decision trees and Random forest -- Feature preparation -- Feature extraction -- Feature selection -- Model estimation -- Spark implementation with MLlib -- Model evaluation -- Results explanation -- Calculating the impact of interventions -- Deployment -- Scoring -- Intervention recommendations -- Summary -- Chapter 7: Recommendations on Spark -- Apache Spark for a recommendation engine -- The use case -- SPSS on Spark -- Methods for recommendation -- Collaborative filtering -- Preparing coding -- Data treatment with SPSS -- Missing data nodes on SPSS modeler -- Model estimation -- SPSS on Spark -- the SPSS Analytics server -- Model evaluation -- Recommendation deployment -- Summary -- Chapter 8: Learning Analytics on Spark -- Spark for attrition prediction -- The use case -- Spark computing -- Methods of attrition prediction -- Regression models -- About regression -- Preparing for coding -- Decision trees -- Preparing for coding -- Feature preparation -- Feature development -- Feature selection -- Principal components analysis -- ML feature selection -- Model estimation -- Spark implementation with the Zeppelin notebook -- Model evaluation -- A quick evaluation -- The confusion matrix and error ratios -- Results explanation -- Calculating the impact of interventions -- Calculating the impact of main causes -- Deployment -- Rules -- Scoring -- Summary -- Chapter 9: City Analytics on Spark -- Spark for service forecasting -- The use case -- Spark computing -- Methods of service forecasting -- Regression models -- About regression -- Preparing for coding -- Time series modeling -- About time series -- Preparing for coding -- Data and feature preparation -- Data merging -- Feature selection -- Model estimation -- Spark implementation with the Zeppelin notebook -- Spark implementation with the R notebook -- Model evaluation. 
RMSE calculation with MLlib -- RMSE calculation with R -- Explanations of the results -- Biggest influencers -- Visualizing trends -- The rules of sending out alerts -- Scores to rank city zones -- Summary -- Chapter 10: Learning Telco Data on Spark -- Spark for using Telco Data -- The use case -- Spark computing -- Methods for learning from Telco Data -- Descriptive statistics and visualization -- Linear and logistic regression models -- Decision tree and random forest -- Data and feature development -- Data reorganizing -- Feature development and selection -- Model estimation -- SPSS on Spark -- SPSS Analytics Server -- Model evaluation -- RMSE calculations with MLlib -- RMSE calculations with R -- Confusion matrix and error ratios with MLlib and R -- Results explanation -- Descriptive statistics and visualizations -- Biggest influencers -- Special insights -- Visualizing trends -- Model deployment -- Rules to send out alerts -- Scores subscribers for churn and for Call Center calls -- Scores subscribers for purchase propensity -- Summary -- Chapter 11: Modeling Open Data on Spark -- Spark for learning from open data -- The use case -- Spark computing -- Methods for scoring and ranking -- Cluster analysis -- Principal component analysis -- Regression models -- Score resembling -- Data and feature preparation -- Data cleaning -- Data merging -- Feature development -- Feature selection -- Model estimation -- SPSS on Spark -- SPSS Analytics Server -- Model evaluation -- RMSE calculations with MLlib -- RMSE calculations with R -- Results explanation -- Comparing ranks -- Biggest influencers -- Deployment -- Rules for sending out alerts -- Scores for ranking school districts -- Summary -- Index. Spark (Electronic resource : Apache Software Foundation) http://id.loc.gov/authorities/names/no2015027445 Spark (Electronic resource : Apache Software Foundation) fast Machine learning. http://id.loc.gov/authorities/subjects/sh85079324 Big data. 
http://id.loc.gov/authorities/subjects/sh2012003227 Information retrieval. http://id.loc.gov/authorities/subjects/sh85066148 Apprentissage automatique. Données volumineuses. Recherche de l'information. information retrieval. aat COMPUTERS / General bisacsh Big data fast Information retrieval fast Machine learning fast |
subject_GND | http://id.loc.gov/authorities/names/no2015027445 http://id.loc.gov/authorities/subjects/sh85079324 http://id.loc.gov/authorities/subjects/sh2012003227 http://id.loc.gov/authorities/subjects/sh85066148 |
title | Apache Spark machine learning blueprints : develop a range of cutting-edge machine learning projects with Apache Spark using this actionable guide / |
title_auth | Apache Spark machine learning blueprints : develop a range of cutting-edge machine learning projects with Apache Spark using this actionable guide / |
title_exact_search | Apache Spark machine learning blueprints : develop a range of cutting-edge machine learning projects with Apache Spark using this actionable guide / |
title_full | Apache Spark machine learning blueprints : develop a range of cutting-edge machine learning projects with Apache Spark using this actionable guide / Alex Liu. |
title_fullStr | Apache Spark machine learning blueprints : develop a range of cutting-edge machine learning projects with Apache Spark using this actionable guide / Alex Liu. |
title_full_unstemmed | Apache Spark machine learning blueprints : develop a range of cutting-edge machine learning projects with Apache Spark using this actionable guide / Alex Liu. |
title_short | Apache Spark machine learning blueprints : |
title_sort | apache spark machine learning blueprints develop a range of cutting edge machine learning projects with apache spark using this actionable guide |
title_sub | develop a range of cutting-edge machine learning projects with Apache Spark using this actionable guide / |
topic | Spark (Electronic resource : Apache Software Foundation) http://id.loc.gov/authorities/names/no2015027445 Spark (Electronic resource : Apache Software Foundation) fast Machine learning. http://id.loc.gov/authorities/subjects/sh85079324 Big data. http://id.loc.gov/authorities/subjects/sh2012003227 Information retrieval. http://id.loc.gov/authorities/subjects/sh85066148 Apprentissage automatique. Données volumineuses. Recherche de l'information. information retrieval. aat COMPUTERS / General bisacsh Big data fast Information retrieval fast Machine learning fast |
topic_facet | Spark (Electronic resource : Apache Software Foundation) Machine learning. Big data. Information retrieval. Apprentissage automatique. Données volumineuses. Recherche de l'information. information retrieval. COMPUTERS / General Big data Information retrieval Machine learning |
work_keys_str_mv | AT liualex apachesparkmachinelearningblueprintsdeveloparangeofcuttingedgemachinelearningprojectswithapachesparkusingthisactionableguide |