Data Science for Marketing Analytics: A Practical Guide to Forming a Killer Marketing Strategy Through Data Analysis with Python
This book on marketing analytics with Python will quickly get you up and running using practical data science and machine learning to improve your approach to marketing. You'll learn how to analyze sales, understand customer data, predict outcomes, and present conclusions with clear visualizations.
Saved in:
Main Author: | Baig, Mirza Rahim |
---|---|
Format: | Electronic eBook |
Language: | English |
Published: | Birmingham : Packt Publishing, Limited, 2021 |
Edition: | 2nd ed |
Subjects: | Consumer behavior-Data processing; Marketing-Data processing; Electronic books |
Online Access: | HWR01 |
Summary: | This book on marketing analytics with Python will quickly get you up and running using practical data science and machine learning to improve your approach to marketing. You'll learn how to analyze sales, understand customer data, predict outcomes, and present conclusions with clear visualizations |
Description: | 1 online resource (637 pages) |
ISBN: | 9781800563889 |
Internal format
MARC
LEADER | 00000nmm a2200000 c 4500 | ||
---|---|---|---|
001 | BV048410403 | ||
003 | DE-604 | ||
005 | 20221025 | ||
007 | cr|uuu---uuuuu | ||
008 | 220816s2021 |||| o||u| ||||||eng d | ||
020 | |a 9781800563889 |q (electronic bk.) |9 9781800563889 | ||
035 | |a (ZDB-30-PQE)EBC6723225 | ||
035 | |a (ZDB-30-PAD)EBC6723225 | ||
035 | |a (ZDB-89-EBL)EBL6723225 | ||
035 | |a (OCoLC)1268111350 | ||
035 | |a (DE-599)BVBBV048410403 | ||
040 | |a DE-604 |b ger |e rda | ||
041 | 0 | |a eng | |
049 | |a DE-2070s | ||
100 | 1 | |a Baig, Mirza Rahim |e Verfasser |4 aut | |
245 | 1 | 0 | |a Data Science for Marketing Analytics |b A Practical Guide to Forming a Killer Marketing Strategy Through Data Analysis with Python |
250 | |a 2nd ed | ||
264 | 1 | |a Birmingham |b Packt Publishing, Limited |c 2021 | |
264 | 4 | |c ©2021 | |
300 | |a 1 Online-Ressource (637 Seiten) | ||
336 | |b txt |2 rdacontent | ||
337 | |b c |2 rdamedia | ||
338 | |b cr |2 rdacarrier | ||
505 | 8 | |a Cover -- FM -- Copyright -- Table of Contents -- Preface -- Chapter 1: Data Preparation and Cleaning -- Introduction -- Data Models and Structured Data -- pandas -- Importing and Exporting Data with pandas DataFrames -- Viewing and Inspecting Data in DataFrames -- Exercise 1.01: Loading Data Stored in a JSON File -- Exercise 1.02: Loading Data from Multiple Sources -- Structure of a pandas DataFrame and Series -- Data Manipulation -- Selecting and Filtering in pandas -- Creating DataFrames in Python -- Adding and Removing Attributes and Observations -- Combining Data -- Handling Missing Data -- Exercise 1.03: Combining DataFrames and Handling Missing Values -- Applying Functions and Operations on DataFrames -- Grouping Data -- Exercise 1.04: Applying Data Transformations -- Activity 1.01: Addressing Data Spilling -- Summary -- Chapter 2: Data Exploration and Visualization -- Introduction -- Identifying and Focusing on the Right Attributes -- The groupby( ) Function -- The unique( ) function -- The value_counts( ) function -- Exercise 2.01: Exploring the Attributes in Sales Data -- Fine Tuning Generated Insights -- Selecting and Renaming Attributes -- Reshaping the Data -- Exercise 2.02: Calculating Conversion Ratios for Website Ads. -- Pivot Tables -- Visualizing Data -- Exercise 2.03: Visualizing Data With pandas -- Visualization through Seaborn -- Visualization with Matplotlib -- Activity 2.01: Analyzing Advertisements -- Summary -- Chapter 3: Unsupervised Learning and Customer Segmentation -- Introduction -- Segmentation -- Exercise 3.01: Mall Customer Segmentation - Understanding the Data -- Approaches to Segmentation -- Traditional Segmentation Methods -- Exercise 3.02: Traditional Segmentation of Mall Customers -- Unsupervised Learning (Clustering) for Customer Segmentation -- Choosing Relevant Attributes (Segmentation Criteria) | |
505 | 8 | |a Standardizing Data -- Exercise 3.03: Standardizing Customer Data -- Calculating Distance -- Exercise 3.04: Calculating the Distance between Customers -- K-Means Clustering -- Exercise 3.05: K-Means Clustering on Mall Customers -- Understanding and Describing the Clusters -- Activity 3.01: Bank Customer Segmentation for Loan Campaign -- Clustering with High-Dimensional Data -- Exercise 3.06: Dealing with High-Dimensional Data -- Activity 3.02: Bank Customer Segmentation with Multiple Features -- Summary -- Chapter 4: Evaluating and Choosing the Best Segmentation Approach -- Introduction -- Choosing the Number of Clusters -- Exercise 4.01: Data Staging and Visualization -- Simple Visual Inspection to Choose the Optimal Number of Clusters -- Exercise 4.02: Choosing the Number of Clusters Based on Visual Inspection -- The Elbow Method with Sum of Squared Errors -- Exercise 4.03: Determining the Number of Clusters Using the Elbow Method -- Activity 4.01: Optimizing a Luxury Clothing Brand's Marketing Campaign Using Clustering -- More Clustering Techniques -- Mean-Shift Clustering -- Exercise 4.04: Mean-Shift Clustering on Mall Customers -- Benefits and Drawbacks of the Mean-Shift Technique -- k-modes and k-prototypes Clustering -- Exercise 4.05: Clustering Data Using the k-prototypes Method -- Evaluating Clustering -- Silhouette Score -- Exercise 4.06: Using Silhouette Score to Pick Optimal Number of Clusters -- Train and Test Split -- Exercise 4.07: Using a Train-Test Split to Evaluate Clustering Performance -- Activity 4.02: Evaluating Clustering on Customer Data -- The Role of Business in Cluster Evaluation -- Summary -- Chapter 5: Predicting Customer Revenue Using Linear Regression -- Introduction -- Regression Problems -- Exercise 5.01: Predicting Sales from Advertising Spend Using Linear Regression -- Feature Engineering for Regression | |
505 | 8 | |a Feature Creation -- Data Cleaning -- Exercise 5.02: Creating Features for Customer Revenue Prediction -- Assessing Features Using Visualizations and Correlations -- Exercise 5.03: Examining Relationships between Predictors and the Outcome -- Activity 5.01: Examining the Relationship between Store Location and Revenue -- Performing and Interpreting Linear Regression -- Exercise 5.04: Building a Linear Model Predicting Customer Spend -- Activity 5.02: Predicting Store Revenue Using Linear Regression -- Summary -- Chapter 6: More Tools and Techniques for Evaluating Regression Models -- Introduction -- Evaluating the Accuracy of a Regression Model -- Residuals and Errors -- Mean Absolute Error -- Root Mean Squared Error -- Exercise 6.01: Evaluating Regression Models of Location Revenue Using the MAE and RMSE -- Activity 6.01: Finding Important Variables for Predicting Responses to a Marketing Offer -- Using Recursive Feature Selection for Feature Elimination -- Exercise 6.02: Using RFE for Feature Selection -- Activity 6.02: Using RFE to Choose Features for Predicting Customer Spend -- Tree-Based Regression Models -- Random Forests -- Exercise 6.03: Using Tree-Based Regression Models to Capture Non-Linear Trends -- Activity 6.03: Building the Best Regression Model for Customer Spend Based on Demographic Data -- Summary -- Chapter 7: Supervised Learning: Predicting Customer Churn -- Introduction -- Classification Problems -- Understanding Logistic Regression -- Revisiting Linear Regression -- Logistic Regression -- Cost Function for Logistic Regression -- Assumptions of Logistic Regression -- Exercise 7.01: Comparing Predictions by Linear and Logistic Regression on the Shill Bidding Dataset -- Creating a Data Science Pipeline -- Churn Prediction Case Study -- Obtaining the Data -- Exercise 7.02: Obtaining the Data -- Scrubbing the Data | |
505 | 8 | |a Exercise 7.03: Imputing Missing Values -- Exercise 7.04: Renaming Columns and Changing the Data Type -- Exploring the Data -- Exercise 7.05: Obtaining the Statistical Overview and Correlation Plot -- Visualizing the Data -- Exercise 7.06: Performing Exploratory Data Analysis (EDA) -- Activity 7.01: Performing the OSE technique from OSEMN -- Modeling the Data -- Feature Selection -- Exercise 7.07: Performing Feature Selection -- Model Building -- Exercise 7.08: Building a Logistic Regression Model -- Interpreting the Data -- Activity 7.02: Performing the MN technique from OSEMN -- Summary -- Chapter 8: Fine-Tuning Classification Algorithms -- Introduction -- Support Vector Machines -- Intuition behind Maximum Margin -- Linearly Inseparable Cases -- Linearly Inseparable Cases Using the Kernel -- Exercise 8.01: Training an SVM Algorithm Over a Dataset -- Decision Trees -- Exercise 8.02: Implementing a Decision Tree Algorithm over a Dataset -- Important Terminology for Decision Trees -- Decision Tree Algorithm Formulation -- Random Forest -- Exercise 8.03: Implementing a Random Forest Model over a Dataset -- Classical Algorithms - Accuracy Compared -- Activity 8.01: Implementing Different Classification Algorithms -- Preprocessing Data for Machine Learning Models -- Standardization -- Exercise 8.04: Standardizing Data -- Scaling -- Exercise 8.05: Scaling Data After Feature Selection -- Normalization -- Exercise 8.06: Performing Normalization on Data -- Model Evaluation -- Exercise 8.07: Stratified K-fold -- Fine-Tuning of the Model -- Exercise 8.08: Fine-Tuning a Model -- Activity 8.02: Tuning and Optimizing the Model -- Performance Metrics -- Precision -- Recall -- F1 Score -- Exercise 8.09: Evaluating the Performance Metrics for a Model -- ROC Curve -- Exercise 8.10: Plotting the ROC Curve -- Activity 8.03: Comparison of the Models -- Summary | |
505 | 8 | |a Chapter 9: Multiclass Classification Algorithms -- Introduction -- Understanding Multiclass Classification -- Classifiers in Multiclass Classification -- Exercise 9.01: Implementing a Multiclass Classification Algorithm on a Dataset -- Performance Metrics -- Exercise 9.02: Evaluating Performance Using Multiclass Performance Metrics -- Activity 9.01: Performing Multiclass Classification and Evaluating Performance -- Class-Imbalanced Data -- Exercise 9.03: Performing Classification on Imbalanced Data -- Dealing with Class-Imbalanced Data -- Exercise 9.04: Fixing the Imbalance of a Dataset Using SMOTE -- Activity 9.02: Dealing with Imbalanced Data Using scikit-learn -- Summary -- Appendix -- Index | |
520 | 3 | |a This book on marketing analytics with Python will quickly get you up and running using practical data science and machine learning to improve your approach to marketing. You'll learn how to analyze sales, understand customer data, predict outcomes, and present conclusions with clear visualizations | |
650 | 4 | |a Consumer behavior-Data processing | |
650 | 4 | |a Marketing-Data processing | |
653 | 6 | |a Electronic books | |
700 | 1 | |a Govindan, Gururajan |e Sonstige |4 oth | |
700 | 1 | |a Shrimali, Vishwesh Ravi |e Sonstige |4 oth | |
776 | 0 | 8 | |i Erscheint auch als |n Druck-Ausgabe |a Baig, Mirza Rahim |t Data Science for Marketing Analytics |d Birmingham : Packt Publishing, Limited,c2021 |z 9781800560475 |
912 | |a ZDB-30-PQE | ||
999 | |a oai:aleph.bib-bvb.de:BVB01-033788865 | ||
966 | e | |u https://ebookcentral.proquest.com/lib/hwr/detail.action?docID=6723225 |l HWR01 |p ZDB-30-PQE |q HWR_PDA_PQE_Kauf |x Aggregator |3 Volltext |