Learning Spark SQL :: architect streaming analytics and machine learning solutions /
Saved in:
Main Author: | Sarkar, Aurobindo |
---|---|
Format: | Electronic eBook |
Language: | English |
Published: | Birmingham, UK : Packt Publishing, 2017. |
Subjects: | Spark (Electronic resource : Apache Software Foundation); Data mining; Big data; Application software -- Development |
Online Access: | Full text |
Summary: | Design, implement, and deliver successful streaming applications, machine learning pipelines and graph applications using Spark SQL API About This Book Learn about the design and implementation of streaming applications, machine learning pipelines, deep learning, and large-scale graph processing applications using Spark SQL APIs and Scala. Learn data exploration, data munging, and how to process structured and semi-structured data using real-world datasets and gain hands-on exposure to the issues and challenges of working with noisy and "dirty" real-world data. Understand design considerations for scalability and performance in web-scale Spark application architectures. Who This Book Is For If you are a developer, engineer, or an architect and want to learn how to use Apache Spark in a web-scale project, then this is the book for you. It is assumed that you have prior knowledge of SQL querying. A basic programming knowledge with Scala, Java, R, or Python is all you need to get started with this book. What You Will Learn Familiarize yourself with Spark SQL programming, including working with DataFrame/Dataset API and SQL Perform a series of hands-on exercises with different types of data sources, including CSV, JSON, Avro, MySQL, and MongoDB Perform data quality checks, data visualization, and basic statistical analysis tasks Perform data munging tasks on publicly available datasets Learn how to use Spark SQL and Apache Kafka to build streaming applications Learn key performance-tuning tips and tricks in Spark SQL applications Learn key architectural components and patterns in large-scale Spark SQL applications In Detail In the past year, Apache Spark has been increasingly adopted for the development of distributed applications. Spark SQL APIs provide an optimized interface that helps developers build such applications quickly and easily. However, designing web-scale production applications using Spark SQL APIs can be a complex task. Hence, understanding the design and implementation best practices before you start your project will help you avoid these problems. This book gives an insight into the engineering practices used to design and build real-world, Spark-based applications. The book's hands-on examples will give you the required confidence to work on any future projects you encounter in Spark SQL. It starts by familiarizing you with data exploration and data munging tasks using Spark SQL and Scala. Extensive code examples will help yo ... |
Description: | Includes index. |
Description: | 1 online resource (1 volume) : illustrations |
ISBN: | 9781785887352 1785887351 |
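The summary above emphasises working with the DataFrame/Dataset API and SQL side by side. As a rough, hedged illustration of that idea (not code from the book), the following Scala sketch starts a local SparkSession, registers a small in-memory DataFrame as a temporary view, and runs the same filter through both the DataFrame API and a SQL statement; the application name, column names, and sample rows are invented.

```scala
import org.apache.spark.sql.SparkSession

object SparkSqlQuickStart {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("SparkSqlQuickStart")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // A tiny in-memory DataFrame; the column names and rows are invented for illustration.
    val people = Seq(("Alice", 34), ("Bob", 45), ("Carol", 29)).toDF("name", "age")
    people.createOrReplaceTempView("people")

    // The same filter expressed through the DataFrame API and through SQL.
    people.filter($"age" > 30).show()
    spark.sql("SELECT name, age FROM people WHERE age > 30").show()

    spark.stop()
  }
}
```

Both forms are planned by the same Catalyst optimizer, which is why the record's summary treats the DataFrame/Dataset API and SQL as interchangeable entry points.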
Internal format
MARC
LEADER | 00000cam a2200000 i 4500 | ||
---|---|---|---|
001 | ZDB-4-EBA-on1005351391 | ||
003 | OCoLC | ||
005 | 20241004212047.0 | ||
006 | m o d | ||
007 | cr unu|||||||| | ||
008 | 171005s2017 enka o 001 0 eng d | ||
040 | |a UMI |b eng |e rda |e pn |c UMI |d STF |d N$T |d IDEBK |d OCLCF |d CEF |d KSU |d UAB |d K6U |d QGK |d OCLCQ |d OCLCO |d OCLCQ |d OCLCO |d OCLCL |d DXU | ||
020 | |a 9781785887352 |q (electronic bk.) | ||
020 | |a 1785887351 |q (electronic bk.) | ||
020 | |z 9781785888359 | ||
035 | |a (OCoLC)1005351391 | ||
037 | |a CL0500000899 |b Safari Books Online | ||
050 | 4 | |a QA76.9.D343 | |
072 | 7 | |a COM |x 000000 |2 bisacsh | |
082 | 7 | |a 006.312 |2 23 | |
049 | |a MAIN | ||
100 | 1 | |a Sarkar, Aurobindo, |e author. |0 http://id.loc.gov/authorities/names/no2016019533 | |
245 | 1 | 0 | |a Learning Spark SQL : |b architect streaming analytics and machine learning solutions / |c Aurobindo Sarkar. |
264 | 1 | |a Birmingham, UK : |b Packt Publishing, |c 2017. | |
300 | |a 1 online resource (1 volume) : |b illustrations | ||
336 | |a text |b txt |2 rdacontent | ||
337 | |a computer |b c |2 rdamedia | ||
338 | |a online resource |b cr |2 rdacarrier | ||
588 | 0 | |a Online resource; title from title page (viewed October 3, 2017). | |
500 | |a Includes index. | ||
520 | |a Design, implement, and deliver successful streaming applications, machine learning pipelines and graph applications using Spark SQL API About This Book Learn about the design and implementation of streaming applications, machine learning pipelines, deep learning, and large-scale graph processing applications using Spark SQL APIs and Scala. Learn data exploration, data munging, and how to process structured and semi-structured data using real-world datasets and gain hands-on exposure to the issues and challenges of working with noisy and "dirty" real-world data. Understand design considerations for scalability and performance in web-scale Spark application architectures. Who This Book Is For If you are a developer, engineer, or an architect and want to learn how to use Apache Spark in a web-scale project, then this is the book for you. It is assumed that you have prior knowledge of SQL querying. A basic programming knowledge with Scala, Java, R, or Python is all you need to get started with this book. What You Will Learn Familiarize yourself with Spark SQL programming, including working with DataFrame/Dataset API and SQL Perform a series of hands-on exercises with different types of data sources, including CSV, JSON, Avro, MySQL, and MongoDB Perform data quality checks, data visualization, and basic statistical analysis tasks Perform data munging tasks on publicly available datasets Learn how to use Spark SQL and Apache Kafka to build streaming applications Learn key performance-tuning tips and tricks in Spark SQL applications Learn key architectural components and patterns in large-scale Spark SQL applications In Detail In the past year, Apache Spark has been increasingly adopted for the development of distributed applications. Spark SQL APIs provide an optimized interface that helps developers build such applications quickly and easily. However, designing web-scale production applications using Spark SQL APIs can be a complex task. Hence, understanding the design and implementation best practices before you start your project will help you avoid these problems. This book gives an insight into the engineering practices used to design and build real-world, Spark-based applications. The book's hands-on examples will give you the required confidence to work on any future projects you encounter in Spark SQL. It starts by familiarizing you with data exploration and data munging tasks using Spark SQL and Scala. Extensive code examples will help yo ... | ||
505 | 0 | |a Cover -- Title Page -- Copyright -- Credits -- About the Author -- About the Reviewer -- www.PacktPub.com -- Customer Feedback -- Table of Contents -- Preface -- Chapter 1: Getting Started with Spark SQL -- What is Spark SQL? -- Introducing SparkSession -- Understanding Spark SQL concepts -- Understanding Resilient Distributed Datasets (RDDs) -- Understanding DataFrames and Datasets -- Understanding the Catalyst optimizer -- Understanding Catalyst optimizations -- Understanding Catalyst transformations -- Introducing Project Tungsten -- Using Spark SQL in streaming applications -- Understanding Structured Streaming internals -- Summary -- Chapter 2: Using Spark SQL for Processing Structured and Semistructured Data -- Understanding data sources in Spark applications -- Selecting Spark data sources -- Using Spark with relational databases -- Using Spark with MongoDB (NoSQL database) -- Using Spark with JSON data -- Using Spark with Avro files -- Using Spark with Parquet files -- Defining and using custom data sources in Spark -- Summary -- Chapter 3: Using Spark SQL for Data Exploration -- Introducing Exploratory Data Analysis (EDA) -- Using Spark SQL for basic data analysis -- Identifying missing data -- Computing basic statistics -- Identifying data outliers -- Visualizing data with Apache Zeppelin -- Sampling data with Spark SQL APIs -- Sampling with the DataFrame/Dataset API -- Sampling with the RDD API -- Using Spark SQL for creating pivot tables -- Summary -- Chapter 4: Using Spark SQL for Data Munging -- Introducing data munging -- Exploring data munging techniques -- Pre-processing of the household electric consumption Dataset -- Computing basic statistics and aggregations -- Augmenting the Dataset -- Executing other miscellaneous processing steps -- Pre-processing of the weather Dataset. | |
505 | 8 | |a Analyzing missing data -- Combining data using a JOIN operation -- Munging textual data -- Processing multiple input data files -- Removing stop words -- Munging time series data -- Pre-processing of the time-series Dataset -- Processing date fields -- Persisting and loading data -- Defining a date-time index -- Using the TimeSeriesRDD object -- Handling missing time-series data -- Computing basic statistics -- Dealing with variable length records -- Converting variable-length records to fixed-length records -- Extracting data from "messy" columns -- Preparing data for machine learning -- Pre-processing data for machine learning -- Creating and running a machine learning pipeline -- Summary -- Chapter 5: Using Spark SQL in Streaming Applications -- Introducing streaming data applications -- Building Spark streaming applications -- Implementing sliding window-based functionality -- Joining a streaming Dataset with a static Dataset -- Using the Dataset API in Structured Streaming -- Using output sinks -- Using the Foreach Sink for arbitrary computations on output -- Using the Memory Sink to save output to a table -- Using the File Sink to save output to a partitioned table -- Monitoring streaming queries -- Using Kafka with Spark Structured Streaming -- Introducing Kafka concepts -- Introducing ZooKeeper concepts -- Introducing Kafka-Spark integration -- Introducing Kafka-Spark Structured Streaming -- Writing a receiver for a custom data source -- Summary -- Chapter 6: Using Spark SQL in Machine Learning Applications -- Introducing machine learning applications -- Understanding Spark ML pipelines and their components -- Understanding the steps in a pipeline application development process -- Introducing feature engineering -- Creating new features from raw data. | |
505 | 8 | |a Estimating the importance of a feature -- Understanding dimensionality reduction -- Deriving good features -- Implementing a Spark ML classification model -- Exploring the diabetes Dataset -- Pre-processing the data -- Building the Spark ML pipeline -- Using StringIndexer for indexing categorical features and labels -- Using VectorAssembler for assembling features into one column -- Using a Spark ML classifier -- Creating a Spark ML pipeline -- Creating the training and test Datasets -- Making predictions using the PipelineModel -- Selecting the best model -- Changing the ML algorithm in the pipeline -- Introducing Spark ML tools and utilities -- Using Principal Component Analysis to select features -- Using encoders -- Using Bucketizer -- Using VectorSlicer -- Using Chi-squared selector -- Using a Normalizer -- Retrieving our original labels -- Implementing a Spark ML clustering model -- Summary -- Chapter 7: Using Spark SQL in Graph Applications -- Introducing large-scale graph applications -- Exploring graphs using GraphFrames -- Constructing a GraphFrame -- Basic graph queries and operations -- Motif analysis using GraphFrames -- Processing subgraphs -- Applying graph algorithms -- Saving and loading GraphFrames -- Analyzing JSON input modeled as a graph -- Processing graphs containing multiple types of relationships -- Understanding GraphFrame internals -- Viewing GraphFrame physical execution plan -- Understanding partitioning in GraphFrames -- Summary -- Chapter 8: Using Spark SQL with SparkR -- Introducing SparkR -- Understanding the SparkR architecture -- Understanding SparkR DataFrames -- Using SparkR for EDA and data munging tasks -- Reading and writing Spark DataFrames -- Exploring structure and contents of Spark DataFrames -- Running basic operations on Spark DataFrames -- Executing SQL statements on Spark DataFrames. | |
505 | 8 | |a Merging SparkR DataFrames -- Using User Defined Functions (UDFs) -- Using SparkR for computing summary statistics -- Using SparkR for data visualization -- Visualizing data on a map -- Visualizing graph nodes and edges -- Using SparkR for machine learning -- Summary -- Chapter 9: Developing Applications with Spark SQL -- Introducing Spark SQL applications -- Understanding text analysis applications -- Using Spark SQL for textual analysis -- Preprocessing textual data -- Computing readability -- Using word lists -- Creating data preprocessing pipelines -- Understanding themes in document corpuses -- Using Naive Bayes classifiers -- Developing a machine learning application -- Summary -- Chapter 10: Using Spark SQL in Deep Learning Applications -- Introducing neural networks -- Understanding deep learning -- Understanding representation learning -- Understanding stochastic gradient descent -- Introducing deep learning in Spark -- Introducing CaffeOnSpark -- Introducing DL4J -- Introducing TensorFrames -- Working with BigDL -- Tuning hyperparameters of deep learning models -- Introducing deep learning pipelines -- Understanding Supervised learning -- Understanding convolutional neural networks -- Using neural networks for text classification -- Using deep neural networks for language processing -- Understanding Recurrent Neural Networks -- Introducing autoencoders -- Summary -- Chapter 11: Tuning Spark SQL Components for Performance -- Introducing performance tuning in Spark SQL -- Understanding DataFrame/Dataset APIs -- Optimizing data serialization -- Understanding Catalyst optimizations -- Understanding the Dataset/DataFrame API -- Understanding Catalyst transformations -- Visualizing Spark application execution -- Exploring Spark application execution metrics -- Using external tools for performance tuning -- Cost-based optimizer in Apache Spark 2.2. | |
505 | 8 | |a Understanding the CBO statistics collection -- Statistics collection functions -- Filter operator -- Join operator -- Build side selection -- Understanding multi-way JOIN ordering optimization -- Understanding performance improvements using whole-stage code generation -- Summary -- Chapter 12: Spark SQL in Large-Scale Application Architectures -- Understanding Spark-based application architectures -- Using Apache Spark for batch processing -- Using Apache Spark for stream processing -- Understanding the Lambda architecture -- Understanding the Kappa Architecture -- Design considerations for building scalable stream processing applications -- Building robust ETL pipelines using Spark SQL -- Choosing appropriate data formats -- Transforming data in ETL pipelines -- Addressing errors in ETL pipelines -- Implementing a scalable monitoring solution -- Deploying Spark machine learning pipelines -- Understanding the challenges in typical ML deployment environments -- Understanding types of model scoring architectures -- Using cluster managers -- Summary -- Index. | |
630 | 0 | 0 | |a Spark (Electronic resource : Apache Software Foundation) |0 http://id.loc.gov/authorities/names/no2015027445 |
630 | 0 | 7 | |a Spark (Electronic resource : Apache Software Foundation) |2 fast |
650 | 0 | |a Data mining. |0 http://id.loc.gov/authorities/subjects/sh97002073 | |
650 | 0 | |a Big data. |0 http://id.loc.gov/authorities/subjects/sh2012003227 | |
650 | 0 | |a Application software |x Development. |0 http://id.loc.gov/authorities/subjects/sh95009362 | |
650 | 2 | |a Data Mining |0 https://id.nlm.nih.gov/mesh/D057225 | |
650 | 6 | |a Exploration de données (Informatique) | |
650 | 6 | |a Données volumineuses. | |
650 | 6 | |a Logiciels d'application |x Développement. | |
650 | 7 | |a COMPUTERS |x General. |2 bisacsh | |
650 | 7 | |a Application software |x Development |2 fast | |
650 | 7 | |a Big data |2 fast | |
650 | 7 | |a Data mining |2 fast | |
758 | |i has work: |a Learning Spark SQL (Text) |1 https://id.oclc.org/worldcat/entity/E39PCFxPWPxDQk4hjkjBRr6V83 |4 https://id.oclc.org/worldcat/ontology/hasWork | ||
856 | 4 | 0 | |l FWS01 |p ZDB-4-EBA |q FWS_PDA_EBA |u https://search.ebscohost.com/login.aspx?direct=true&scope=site&db=nlebk&AN=1592147 |3 Volltext |
938 | |a EBSCOhost |b EBSC |n 1592147 | ||
938 | |a ProQuest MyiLibrary Digital eBook Collection |b IDEB |n cis38868004 | ||
994 | |a 92 |b GEBAY | ||
912 | |a ZDB-4-EBA | ||
049 | |a DE-863 |
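The first contents note (MARC 505) above covers data sources, exploratory data analysis, and munging of the household electric consumption dataset. The sketch below is an illustrative approximation of that workflow rather than the book's own code: the file path data/household_power_consumption.txt, the semicolon separator, and the column names Global_active_power and Date are assumptions about the dataset layout.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{avg, col, count, when}

object DataSourcesEdaSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("DataSourcesEdaSketch").master("local[*]").getOrCreate()

    // Hypothetical path and column names standing in for the household electric consumption data.
    val readings = spark.read
      .option("header", "true")
      .option("sep", ";")
      .option("inferSchema", "true")
      .csv("data/household_power_consumption.txt")

    readings.printSchema()
    readings.describe("Global_active_power").show()

    // A simple data-quality check: how many rows are missing the main measurement?
    readings.select(count(when(col("Global_active_power").isNull, 1)).alias("missing_power")).show()

    // The same reader/writer API covers Parquet; JSON, Avro, and JDBC sources follow the same pattern.
    readings.write.mode("overwrite").parquet("out/readings.parquet")
    spark.read.parquet("out/readings.parquet")
      .groupBy("Date")
      .agg(avg("Global_active_power").alias("avg_power"))
      .show(5)

    spark.stop()
  }
}
```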
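The second contents note pairs Structured Streaming with Kafka and sliding windows (Chapter 5). The following sketch shows one plausible shape for such a job; it is illustrative, not the book's example, and assumes a local Kafka broker at localhost:9092, a topic named events (both placeholders), and the spark-sql-kafka-0-10 connector on the classpath.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.window

object KafkaStreamingSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("KafkaStreamingSketch").master("local[*]").getOrCreate()
    import spark.implicits._

    // Read a stream from Kafka; broker address and topic name are placeholders.
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092")
      .option("subscribe", "events")
      .load()

    // Kafka rows carry binary key/value columns plus a timestamp; keep the value as text.
    val events = raw.selectExpr("CAST(value AS STRING) AS word", "timestamp")

    // Sliding-window count with a watermark, in the spirit of the chapter's windowing example.
    val counts = events
      .withWatermark("timestamp", "10 minutes")
      .groupBy(window($"timestamp", "10 minutes", "5 minutes"), $"word")
      .count()

    // Console sink for inspection; the book also discusses File, Memory, and Foreach sinks.
    val query = counts.writeStream
      .outputMode("update")
      .format("console")
      .start()

    query.awaitTermination()
  }
}
```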
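The third contents note walks through a Spark ML classification pipeline built from StringIndexer, VectorAssembler, and a classifier (Chapter 6). The sketch below mirrors that structure on a tiny invented stand-in for the diabetes dataset; the column names bmiDelta, glucose, and outcome are made up for illustration, and LogisticRegression is used where the book may pick a different classifier.

```scala
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.feature.{StringIndexer, VectorAssembler}
import org.apache.spark.sql.SparkSession

object MLPipelineSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("MLPipelineSketch").master("local[*]").getOrCreate()
    import spark.implicits._

    // Tiny invented stand-in for the diabetes Dataset: two numeric features and a string label.
    val data = Seq(
      (5.1, 120.0, "pos"), (2.0, 85.0, "neg"), (7.3, 150.0, "pos"), (1.1, 70.0, "neg")
    ).toDF("bmiDelta", "glucose", "outcome")

    // Index the string label, assemble features into one vector column, then fit a classifier.
    val labelIndexer = new StringIndexer().setInputCol("outcome").setOutputCol("label")
    val assembler = new VectorAssembler()
      .setInputCols(Array("bmiDelta", "glucose"))
      .setOutputCol("features")
    val lr = new LogisticRegression().setMaxIter(20)

    val pipeline = new Pipeline().setStages(Array(labelIndexer, assembler, lr))

    val Array(train, test) = data.randomSplit(Array(0.75, 0.25), seed = 42L)
    val model = pipeline.fit(train)
    model.transform(test).select("features", "label", "prediction").show()

    spark.stop()
  }
}
```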
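The fourth contents note describes text analysis with preprocessing pipelines and Naive Bayes classifiers (Chapter 9); the SparkR chapter itself uses R, which is not sketched here. As a rough Scala illustration under those assumptions, the pipeline below tokenizes toy documents, removes stop words, hashes terms into a fixed-size feature vector, and fits spark.ml's NaiveBayes; the documents and labels are invented.

```scala
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.NaiveBayes
import org.apache.spark.ml.feature.{HashingTF, StopWordsRemover, Tokenizer}
import org.apache.spark.sql.SparkSession

object TextAnalysisSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("TextAnalysisSketch").master("local[*]").getOrCreate()
    import spark.implicits._

    // Toy labelled documents; label 1.0 = about Spark, 0.0 = not.
    val docs = Seq(
      (1.0, "spark sql makes structured data processing simple"),
      (0.0, "the weather today is rainy and cold"),
      (1.0, "dataframes and datasets share the catalyst optimizer"),
      (0.0, "my cat sleeps most of the afternoon")
    ).toDF("label", "text")

    // Tokenize, drop stop words, hash terms into term-frequency features, then fit Naive Bayes.
    val tokenizer = new Tokenizer().setInputCol("text").setOutputCol("words")
    val remover = new StopWordsRemover().setInputCol("words").setOutputCol("filtered")
    val tf = new HashingTF().setInputCol("filtered").setOutputCol("features").setNumFeatures(1024)
    val nb = new NaiveBayes()

    val model = new Pipeline().setStages(Array(tokenizer, remover, tf, nb)).fit(docs)
    model.transform(docs).select("text", "label", "prediction").show(false)

    spark.stop()
  }
}
```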
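The final contents note touches on the cost-based optimizer introduced in Apache Spark 2.2, its statistics collection, and multi-way join reordering (Chapters 11 and 12). The sketch below shows how such statistics might be collected and the CBO enabled; it assumes spark-hive on the classpath, a persistent catalog, and hypothetical sales and customers tables. The configuration keys spark.sql.cbo.enabled and spark.sql.cbo.joinReorder.enabled are the standard Spark settings, but the tables and query are illustrative only.

```scala
import org.apache.spark.sql.SparkSession

object CboTuningSketch {
  def main(args: Array[String]): Unit = {
    // The cost-based optimizer is driven by table and column statistics kept in the catalog.
    val spark = SparkSession.builder()
      .appName("CboTuningSketch")
      .master("local[*]")
      .config("spark.sql.cbo.enabled", "true")              // turn on the cost-based optimizer
      .config("spark.sql.cbo.joinReorder.enabled", "true")  // allow multi-way join reordering
      .enableHiveSupport()                                   // assumes spark-hive is available
      .getOrCreate()

    // Hypothetical tables; ANALYZE TABLE collects the statistics the CBO uses for join ordering.
    spark.sql("ANALYZE TABLE sales COMPUTE STATISTICS")
    spark.sql("ANALYZE TABLE sales COMPUTE STATISTICS FOR COLUMNS customer_id, amount")

    // EXPLAIN shows the chosen physical plan, including whole-stage code generation stages.
    spark.sql(
      """SELECT c.region, SUM(s.amount)
        |FROM sales s JOIN customers c ON s.customer_id = c.id
        |GROUP BY c.region""".stripMargin).explain(true)

    spark.stop()
  }
}
```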
Record in the search index
DE-BY-FWS_katkey | ZDB-4-EBA-on1005351391 |
---|---|
_version_ | 1816882401965506561 |
adam_text | |
any_adam_object | |
author | Sarkar, Aurobindo |
author_GND | http://id.loc.gov/authorities/names/no2016019533 |
author_facet | Sarkar, Aurobindo |
author_role | aut |
author_sort | Sarkar, Aurobindo |
author_variant | a s as |
building | Verbundindex |
bvnumber | localFWS |
callnumber-first | Q - Science |
callnumber-label | QA76 |
callnumber-raw | QA76.9.D343 |
callnumber-search | QA76.9.D343 |
callnumber-sort | QA 276.9 D343 |
callnumber-subject | QA - Mathematics |
collection | ZDB-4-EBA |
ctrlnum | (OCoLC)1005351391 |
dewey-full | 006.312 |
dewey-hundreds | 000 - Computer science, information, general works |
dewey-ones | 006 - Special computer methods |
dewey-raw | 006.312 |
dewey-search | 006.312 |
dewey-sort | 16.312 |
dewey-tens | 000 - Computer science, information, general works |
discipline | Informatik |
format | Electronic eBook |
id | ZDB-4-EBA-on1005351391 |
illustrated | Illustrated |
indexdate | 2024-11-27T13:28:02Z |
institution | BVB |
isbn | 9781785887352 1785887351 |
language | English |
oclc_num | 1005351391 |
open_access_boolean | |
owner | MAIN DE-863 DE-BY-FWS |
owner_facet | MAIN DE-863 DE-BY-FWS |
physical | 1 online resource (1 volume) : illustrations |
psigel | ZDB-4-EBA |
publishDate | 2017 |
publishDateSearch | 2017 |
publishDateSort | 2017 |
publisher | Packt Publishing, |
record_format | marc |
spelling | Sarkar, Aurobindo, author. http://id.loc.gov/authorities/names/no2016019533 Learning Spark SQL : architect streaming analytics and machine learning solutions / Aurobindo Sarkar. Birmingham, UK : Packt Publishing, 2017. 1 online resource (1 volume) : illustrations text txt rdacontent computer c rdamedia online resource cr rdacarrier Online resource; title from title page (viewed October 3, 2017). Includes index. Design, implement, and deliver successful streaming applications, machine learning pipelines and graph applications using Spark SQL API About This Book Learn about the design and implementation of streaming applications, machine learning pipelines, deep learning, and large-scale graph processing applications using Spark SQL APIs and Scala. Learn data exploration, data munging, and how to process structured and semi-structured data using real-world datasets and gain hands-on exposure to the issues and challenges of working with noisy and "dirty" real-world data. Understand design considerations for scalability and performance in web-scale Spark application architectures. Who This Book Is For If you are a developer, engineer, or an architect and want to learn how to use Apache Spark in a web-scale project, then this is the book for you. It is assumed that you have prior knowledge of SQL querying. A basic programming knowledge with Scala, Java, R, or Python is all you need to get started with this book. What You Will Learn Familiarize yourself with Spark SQL programming, including working with DataFrame/Dataset API and SQL Perform a series of hands-on exercises with different types of data sources, including CSV, JSON, Avro, MySQL, and MongoDB Perform data quality checks, data visualization, and basic statistical analysis tasks Perform data munging tasks on publically available datasets Learn how to use Spark SQL and Apache Kafka to build streaming applications Learn key performance-tuning tips and tricks in Spark SQL applications Learn key architectural components and patterns in large-scale Spark SQL applications In Detail In the past year, Apache Spark has been increasingly adopted for the development of distributed applications. Spark SQL APIs provide an optimized interface that helps developers build such applications quickly and easily. However, designing web-scale production applications using Spark SQL APIs can be a complex task. Hence, understanding the design and implementation best practices before you start your project will help you avoid these problems. This book gives an insight into the engineering practices used to design and build real-world, Spark-based applications. The book's hands-on examples will give you the required confidence to work on any future projects you encounter in Spark SQL. It starts by familiarizing you with data exploration and data munging tasks using Spark SQL and Scala. Extensive code examples will help yo ... Cover -- Title Page -- Copyright -- Credits -- About the Author -- About the Reviewer -- www.PacktPub.com -- Customer Feedback -- Table of Contents -- Preface -- Chapter 1: Getting Started with Spark SQL -- What is Spark SQL? 
-- Introducing SparkSession -- Understanding Spark SQL concepts -- Understanding Resilient Distributed Datasets (RDDs) -- Understanding DataFrames and Datasets -- Understanding the Catalyst optimizer -- Understanding Catalyst optimizations -- Understanding Catalyst transformations -- Introducing Project Tungsten -- Using Spark SQL in streaming applications -- Understanding Structured Streaming internals -- Summary -- Chapter 2: Using Spark SQL for Processing Structured and Semistructured Data -- Understanding data sources in Spark applications -- Selecting Spark data sources -- Using Spark with relational databases -- Using Spark with MongoDB (NoSQL database) -- Using Spark with JSON data -- Using Spark with Avro files -- Using Spark with Parquet files -- Defining and using custom data sources in Spark -- Summary -- Chapter 3: Using Spark SQL for Data Exploration -- Introducing Exploratory Data Analysis (EDA) -- Using Spark SQL for basic data analysis -- Identifying missing data -- Computing basic statistics -- Identifying data outliers -- Visualizing data with Apache Zeppelin -- Sampling data with Spark SQL APIs -- Sampling with the DataFrame/Dataset API -- Sampling with the RDD API -- Using Spark SQL for creating pivot tables -- Summary -- Chapter 4: Using Spark SQL for Data Munging -- Introducing data munging -- Exploring data munging techniques -- Pre-processing of theamp -- #160 -- household electric consumption Dataset -- Computing basic statistics and aggregations -- Augmenting the Dataset -- Executing other miscellaneous processing steps -- Pre-processing ofamp -- #160 -- the weather Dataset. Analyzing missing data -- Combining data using a JOIN operation -- Munging textual data -- Processing multiple input data files -- Removing stop words -- Munging time series data -- Pre-processing of theamp -- #160 -- time-series Dataset -- Processing date fields -- Persisting and loading data -- Defining a date-time index -- Using theamp -- #160 -- amp -- #160 -- TimeSeriesRDDamp -- #160 -- object -- Handling missing time-series data -- Computing basic statistics -- Dealing with variable length records -- Converting variable-length records to fixed-length records -- Extracting data from "messy" columns -- Preparing data for machine learning -- Pre-processing data for machine learning -- Creating and running a machine learning pipeline -- Summary -- Chapter 5: Using Spark SQL in Streaming Applications -- Introducing streaming data applications -- Building Spark streaming applications -- Implementing sliding window-based functionality -- Joining a streaming Dataset with a static Dataset -- Using the Dataset API in Structured Streaming -- Using output sinks -- Using the Foreach Sink for arbitrary computations on output -- Using the Memory Sink to save output to a table -- Using the File Sink to save output to a partitioned table -- Monitoring streaming queries -- Using Kafka with Spark Structured Streaming -- Introducing Kafka concepts -- Introducing ZooKeeper concepts -- Introducing Kafka-Spark integration -- Introducing Kafka-Spark Structured Streaming -- Writing a receiver for a custom data source -- Summary -- Chapter 6: Using Spark SQL in Machine Learning Applications -- Introducing machine learning applications -- Understanding Spark ML pipelines and their components -- Understanding the steps in a pipeline application development process -- Introducing feature engineering -- Creating new features from raw data. 
Estimating the importance of a feature -- Understanding dimensionality reduction -- Deriving good features -- Implementing a Spark ML classification model -- Exploring the diabetes Dataset -- Pre-processing the data -- Building the Spark ML pipeline -- Using StringIndexer for indexing categorical features and labels -- Using VectorAssembler for assembling features into one column -- Using a Spark ML classifier -- Creating a Spark ML pipeline -- Creating the training and test Datasets -- Making predictions using the PipelineModel -- Selecting the best model -- Changing the ML algorithm in the pipeline -- Introducing Spark ML tools and utilities -- Using Principal Component Analysis to select features -- Using encoders -- Using Bucketizer -- Using VectorSlicer -- Using Chi-squared selector -- Using a Normalizer -- Retrieving our original labels -- Implementing a Spark ML clustering model -- Summary -- Chapter 7: Using Spark SQL in Graph Applications -- Introducing large-scale graph applications -- Exploring graphs using GraphFrames -- Constructing a GraphFrame -- Basic graph queries and operations -- Motif analysis using GraphFrames -- Processing subgraphs -- Applying graph algorithms -- Saving and loading GraphFrames -- Analyzing JSON input modeled as a graphamp -- #160 -- Processing graphs containing multiple types of relationships -- Understanding GraphFrame internals -- Viewing GraphFrame physical execution plan -- Understanding partitioning in GraphFrames -- Summary -- Chapter 8: Using Spark SQL with SparkR -- Introducing SparkR -- Understanding the SparkR architecture -- Understanding SparkR DataFrames -- Using SparkR for EDA and data munging tasks -- Reading and writing Spark DataFrames -- Exploring structure and contents of Spark DataFrames -- Running basic operations on Spark DataFrames -- Executing SQL statements on Spark DataFrames. 
Merging SparkR DataFrames -- Using User Defined Functions (UDFs) -- Using SparkR for computing summary statistics -- Using SparkR for data visualization -- Visualizing data on a map -- Visualizing graph nodes and edges -- Using SparkR for machine learning -- Summary -- Chapter 9: Developing Applications with Spark SQL -- Introducing Spark SQL applications -- Understanding text analysis applications -- Using Spark SQL for textual analysis -- Preprocessing textual data -- Computing readability -- Using word lists -- Creating data preprocessing pipelines -- Understanding themes in document corpuses -- Using Naive Bayes classifiers -- Developing a machine learning application -- Summary -- Chapter 10: Using Spark SQL in Deep Learning Applications -- Introducing neural networks -- Understanding deep learning -- Understanding representation learning -- Understanding stochastic gradient descent -- Introducing deep learning in Spark -- Introducing CaffeOnSpark -- Introducing DL4J -- Introducing TensorFrames -- Working with BigDL -- Tuning hyperparameters of deep learning models -- Introducing deep learning pipelines -- Understanding Supervised learning -- Understanding convolutional neural networks -- Using neural networks for text classification -- Using deep neural networks for language processing -- Understanding Recurrent Neural Networks -- Introducing autoencoders -- Summary -- Chapter 11: Tuning Spark SQL Components for Performance -- Introducing performance tuning in Spark SQL -- Understanding DataFrame/Dataset APIs -- Optimizing data serialization -- Understanding Catalyst optimizations -- Understanding the Dataset/DataFrame API -- Understanding Catalyst transformations -- Visualizing Spark application execution -- Exploring Spark application execution metrics -- Using external tools for performance tuning -- Cost-based optimizer in Apache Spark 2.2. Understanding theamp -- #160 -- CBO statistics collection -- Statistics collection functions -- Filter operator -- Join operator -- Build side selection -- Understanding multi-way JOIN ordering optimization -- Understanding performance improvements using whole-stage code generation -- Summary -- Chapter 12: Spark SQL in Large-Scale Application Architectures -- Understanding Spark-based application architectures -- Using Apache Spark for batch processing -- Using Apache Spark for stream processing -- Understanding the Lambda architecture -- Understanding the Kappa Architecture -- Design considerations for building scalable stream processing applications -- Building robust ETL pipelines using Spark SQL -- Choosing appropriate data formats -- Transforming data in ETL pipelines -- Addressing errors in ETL pipelines -- Implementing a scalable monitoring solution -- Deploying Spark machine learning pipelines -- Understanding the challenges in typical ML deployment environments -- Understanding types of model scoring architectures -- Using cluster managers -- Summary -- Index. Spark (Electronic resource : Apache Software Foundation) http://id.loc.gov/authorities/names/no2015027445 Spark (Electronic resource : Apache Software Foundation) fast Data mining. http://id.loc.gov/authorities/subjects/sh97002073 Big data. http://id.loc.gov/authorities/subjects/sh2012003227 Application software Development. http://id.loc.gov/authorities/subjects/sh95009362 Data Mining https://id.nlm.nih.gov/mesh/D057225 Exploration de données (Informatique) Données volumineuses. Logiciels d'application Développement. COMPUTERS General. 
bisacsh Application software Development fast Big data fast Data mining fast has work: Learning Spark SQL (Text) https://id.oclc.org/worldcat/entity/E39PCFxPWPxDQk4hjkjBRr6V83 https://id.oclc.org/worldcat/ontology/hasWork FWS01 ZDB-4-EBA FWS_PDA_EBA https://search.ebscohost.com/login.aspx?direct=true&scope=site&db=nlebk&AN=1592147 Volltext |
spellingShingle | Sarkar, Aurobindo Learning Spark SQL : architect streaming analytics and machine learning solutions / Cover -- Title Page -- Copyright -- Credits -- About the Author -- About the Reviewer -- www.PacktPub.com -- Customer Feedback -- Table of Contents -- Preface -- Chapter 1: Getting Started with Spark SQL -- What is Spark SQL? -- Introducing SparkSession -- Understanding Spark SQL concepts -- Understanding Resilient Distributed Datasets (RDDs) -- Understanding DataFrames and Datasets -- Understanding the Catalyst optimizer -- Understanding Catalyst optimizations -- Understanding Catalyst transformations -- Introducing Project Tungsten -- Using Spark SQL in streaming applications -- Understanding Structured Streaming internals -- Summary -- Chapter 2: Using Spark SQL for Processing Structured and Semistructured Data -- Understanding data sources in Spark applications -- Selecting Spark data sources -- Using Spark with relational databases -- Using Spark with MongoDB (NoSQL database) -- Using Spark with JSON data -- Using Spark with Avro files -- Using Spark with Parquet files -- Defining and using custom data sources in Spark -- Summary -- Chapter 3: Using Spark SQL for Data Exploration -- Introducing Exploratory Data Analysis (EDA) -- Using Spark SQL for basic data analysis -- Identifying missing data -- Computing basic statistics -- Identifying data outliers -- Visualizing data with Apache Zeppelin -- Sampling data with Spark SQL APIs -- Sampling with the DataFrame/Dataset API -- Sampling with the RDD API -- Using Spark SQL for creating pivot tables -- Summary -- Chapter 4: Using Spark SQL for Data Munging -- Introducing data munging -- Exploring data munging techniques -- Pre-processing of the household electric consumption Dataset -- Computing basic statistics and aggregations -- Augmenting the Dataset -- Executing other miscellaneous processing steps -- Pre-processing of the weather Dataset. 
Analyzing missing data -- Combining data using a JOIN operation -- Munging textual data -- Processing multiple input data files -- Removing stop words -- Munging time series data -- Pre-processing of the time-series Dataset -- Processing date fields -- Persisting and loading data -- Defining a date-time index -- Using the TimeSeriesRDD object -- Handling missing time-series data -- Computing basic statistics -- Dealing with variable length records -- Converting variable-length records to fixed-length records -- Extracting data from "messy" columns -- Preparing data for machine learning -- Pre-processing data for machine learning -- Creating and running a machine learning pipeline -- Summary -- Chapter 5: Using Spark SQL in Streaming Applications -- Introducing streaming data applications -- Building Spark streaming applications -- Implementing sliding window-based functionality -- Joining a streaming Dataset with a static Dataset -- Using the Dataset API in Structured Streaming -- Using output sinks -- Using the Foreach Sink for arbitrary computations on output -- Using the Memory Sink to save output to a table -- Using the File Sink to save output to a partitioned table -- Monitoring streaming queries -- Using Kafka with Spark Structured Streaming -- Introducing Kafka concepts -- Introducing ZooKeeper concepts -- Introducing Kafka-Spark integration -- Introducing Kafka-Spark Structured Streaming -- Writing a receiver for a custom data source -- Summary -- Chapter 6: Using Spark SQL in Machine Learning Applications -- Introducing machine learning applications -- Understanding Spark ML pipelines and their components -- Understanding the steps in a pipeline application development process -- Introducing feature engineering -- Creating new features from raw data. 
Estimating the importance of a feature -- Understanding dimensionality reduction -- Deriving good features -- Implementing a Spark ML classification model -- Exploring the diabetes Dataset -- Pre-processing the data -- Building the Spark ML pipeline -- Using StringIndexer for indexing categorical features and labels -- Using VectorAssembler for assembling features into one column -- Using a Spark ML classifier -- Creating a Spark ML pipeline -- Creating the training and test Datasets -- Making predictions using the PipelineModel -- Selecting the best model -- Changing the ML algorithm in the pipeline -- Introducing Spark ML tools and utilities -- Using Principal Component Analysis to select features -- Using encoders -- Using Bucketizer -- Using VectorSlicer -- Using Chi-squared selector -- Using a Normalizer -- Retrieving our original labels -- Implementing a Spark ML clustering model -- Summary -- Chapter 7: Using Spark SQL in Graph Applications -- Introducing large-scale graph applications -- Exploring graphs using GraphFrames -- Constructing a GraphFrame -- Basic graph queries and operations -- Motif analysis using GraphFrames -- Processing subgraphs -- Applying graph algorithms -- Saving and loading GraphFrames -- Analyzing JSON input modeled as a graph -- Processing graphs containing multiple types of relationships -- Understanding GraphFrame internals -- Viewing GraphFrame physical execution plan -- Understanding partitioning in GraphFrames -- Summary -- Chapter 8: Using Spark SQL with SparkR -- Introducing SparkR -- Understanding the SparkR architecture -- Understanding SparkR DataFrames -- Using SparkR for EDA and data munging tasks -- Reading and writing Spark DataFrames -- Exploring structure and contents of Spark DataFrames -- Running basic operations on Spark DataFrames -- Executing SQL statements on Spark DataFrames. 
Merging SparkR DataFrames -- Using User Defined Functions (UDFs) -- Using SparkR for computing summary statistics -- Using SparkR for data visualization -- Visualizing data on a map -- Visualizing graph nodes and edges -- Using SparkR for machine learning -- Summary -- Chapter 9: Developing Applications with Spark SQL -- Introducing Spark SQL applications -- Understanding text analysis applications -- Using Spark SQL for textual analysis -- Preprocessing textual data -- Computing readability -- Using word lists -- Creating data preprocessing pipelines -- Understanding themes in document corpuses -- Using Naive Bayes classifiers -- Developing a machine learning application -- Summary -- Chapter 10: Using Spark SQL in Deep Learning Applications -- Introducing neural networks -- Understanding deep learning -- Understanding representation learning -- Understanding stochastic gradient descent -- Introducing deep learning in Spark -- Introducing CaffeOnSpark -- Introducing DL4J -- Introducing TensorFrames -- Working with BigDL -- Tuning hyperparameters of deep learning models -- Introducing deep learning pipelines -- Understanding Supervised learning -- Understanding convolutional neural networks -- Using neural networks for text classification -- Using deep neural networks for language processing -- Understanding Recurrent Neural Networks -- Introducing autoencoders -- Summary -- Chapter 11: Tuning Spark SQL Components for Performance -- Introducing performance tuning in Spark SQL -- Understanding DataFrame/Dataset APIs -- Optimizing data serialization -- Understanding Catalyst optimizations -- Understanding the Dataset/DataFrame API -- Understanding Catalyst transformations -- Visualizing Spark application execution -- Exploring Spark application execution metrics -- Using external tools for performance tuning -- Cost-based optimizer in Apache Spark 2.2. Understanding the CBO statistics collection -- Statistics collection functions -- Filter operator -- Join operator -- Build side selection -- Understanding multi-way JOIN ordering optimization -- Understanding performance improvements using whole-stage code generation -- Summary -- Chapter 12: Spark SQL in Large-Scale Application Architectures -- Understanding Spark-based application architectures -- Using Apache Spark for batch processing -- Using Apache Spark for stream processing -- Understanding the Lambda architecture -- Understanding the Kappa Architecture -- Design considerations for building scalable stream processing applications -- Building robust ETL pipelines using Spark SQL -- Choosing appropriate data formats -- Transforming data in ETL pipelines -- Addressing errors in ETL pipelines -- Implementing a scalable monitoring solution -- Deploying Spark machine learning pipelines -- Understanding the challenges in typical ML deployment environments -- Understanding types of model scoring architectures -- Using cluster managers -- Summary -- Index. Spark (Electronic resource : Apache Software Foundation) http://id.loc.gov/authorities/names/no2015027445 Spark (Electronic resource : Apache Software Foundation) fast Data mining. http://id.loc.gov/authorities/subjects/sh97002073 Big data. http://id.loc.gov/authorities/subjects/sh2012003227 Application software Development. http://id.loc.gov/authorities/subjects/sh95009362 Data Mining https://id.nlm.nih.gov/mesh/D057225 Exploration de données (Informatique) Données volumineuses. Logiciels d'application Développement. COMPUTERS General. 
bisacsh Application software Development fast Big data fast Data mining fast |
subject_GND | http://id.loc.gov/authorities/names/no2015027445 http://id.loc.gov/authorities/subjects/sh97002073 http://id.loc.gov/authorities/subjects/sh2012003227 http://id.loc.gov/authorities/subjects/sh95009362 https://id.nlm.nih.gov/mesh/D057225 |
title | Learning Spark SQL : architect streaming analytics and machine learning solutions / |
title_auth | Learning Spark SQL : architect streaming analytics and machine learning solutions / |
title_exact_search | Learning Spark SQL : architect streaming analytics and machine learning solutions / |
title_full | Learning Spark SQL : architect streaming analytics and machine learning solutions / Aurobindo Sarkar. |
title_fullStr | Learning Spark SQL : architect streaming analytics and machine learning solutions / Aurobindo Sarkar. |
title_full_unstemmed | Learning Spark SQL : architect streaming analytics and machine learning solutions / Aurobindo Sarkar. |
title_short | Learning Spark SQL : |
title_sort | learning spark sql architect streaming analytics and machine learning solutions |
title_sub | architect streaming analytics and machine learning solutions / |
topic | Spark (Electronic resource : Apache Software Foundation) http://id.loc.gov/authorities/names/no2015027445 Spark (Electronic resource : Apache Software Foundation) fast Data mining. http://id.loc.gov/authorities/subjects/sh97002073 Big data. http://id.loc.gov/authorities/subjects/sh2012003227 Application software Development. http://id.loc.gov/authorities/subjects/sh95009362 Data Mining https://id.nlm.nih.gov/mesh/D057225 Exploration de données (Informatique) Données volumineuses. Logiciels d'application Développement. COMPUTERS General. bisacsh Application software Development fast Big data fast Data mining fast |
topic_facet | Spark (Electronic resource : Apache Software Foundation) Data mining. Big data. Application software Development. Data Mining Exploration de données (Informatique) Données volumineuses. Logiciels d'application Développement. COMPUTERS General. Application software Development Big data Data mining |
url | https://search.ebscohost.com/login.aspx?direct=true&scope=site&db=nlebk&AN=1592147 |
work_keys_str_mv | AT sarkaraurobindo learningsparksqlarchitectstreaminganalyticsandmachinelearningsolutions |