Big Data Processing with Apache Spark: Efficiently tackle large datasets and big data analysis with Spark and Python
Saved in:
Main Author: Franco Galeano, Manuel Ignacio
Format: Electronic eBook
Language: English
Published: Birmingham : Packt Publishing Limited, 2018
Edition: 1
Subjects: COMPUTERS / Data Processing; COMPUTERS / Data Visualization

Summary:
No need to spend hours ploughing through endless data - let Spark, one of the fastest big data processing engines available, do the hard work for you.

Key Features
- Get up and running with Apache Spark and Python
- Integrate Spark with AWS for real-time analytics
- Apply processed data streams to the machine learning APIs of Apache Spark

Book Description
Processing big data in real time is challenging due to scalability, information consistency, and fault tolerance. This book teaches you how to use Spark to make your overall analytical workflow faster and more efficient. You'll explore all the core concepts and tools within the Spark ecosystem, such as Spark Streaming, the Spark Streaming API, the machine learning extension, and structured streaming. You'll begin by learning data processing fundamentals using the Resilient Distributed Datasets (RDDs), SQL, Datasets, and DataFrames APIs. After grasping these fundamentals, you'll move on to using the Spark Streaming APIs to consume data in real time from TCP sockets, and integrate Amazon Web Services (AWS) for stream consumption. By the end of this book, you'll not only understand how to use machine learning extensions and structured streams, but you'll also be able to apply Spark in your own upcoming big data projects.

What you will learn
- Write your own Python programs that can interact with Spark
- Implement data stream consumption using Apache Spark
- Recognize common operations in Spark to process known data streams
- Integrate Spark Streaming with Amazon Web Services (AWS)
- Create a collaborative filtering model with the MovieLens dataset
- Apply processed data streams to Spark machine learning APIs

Who this book is for
Big Data Processing with Apache Spark is for you if you are a software engineer, architect, or IT professional who wants to explore distributed systems and big data analytics. Although you don't need any knowledge of Spark, prior experience of working with Python is recommended.

Physical Description: 1 online resource (142 pages)
ISBN: 9781789804522
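The summary above says the book begins with data processing fundamentals using the RDD, DataFrame, and SQL APIs. As a rough illustration of what those PySpark APIs look like in practice (this sketch is not taken from the book; the example data and column names are invented):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("SparkFundamentals").getOrCreate()
sc = spark.sparkContext

# RDD API: distribute a small, invented collection and transform it
rdd = sc.parallelize([("spark", 3), ("python", 5), ("aws", 2)])
print(rdd.mapValues(lambda n: n * 2).collect())

# DataFrame API: the same data with named columns
df = spark.createDataFrame(rdd, schema=["topic", "mentions"])
df.filter(df.mentions > 2).show()

# Spark SQL: register the DataFrame as a temporary view and query it
df.createOrReplaceTempView("topics")
spark.sql("SELECT topic, mentions FROM topics ORDER BY mentions DESC").show()

spark.stop()
```

Such a script can be run with spark-submit or pasted into a pyspark shell.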
Internal Format
MARC
LEADER  00000nmm a2200000zc 4500
001     BV047069745
003     DE-604
005     20211214
007     cr|uuu---uuuuu
008     201218s2018 |||| o||u| ||||||eng d
020     |a 9781789804522 |9 978-1-78980-452-2
035     |a (ZDB-5-WPSE)9781789804522142
035     |a (OCoLC)1227478039
035     |a (DE-599)BVBBV047069745
040     |a DE-604 |b ger |e rda
041 0   |a eng
100 1   |a Franco Galeano, Manuel Ignacio |e Verfasser |4 aut
245 1 0 |a Big Data Processing with Apache Spark |b Efficiently tackle large datasets and big data analysis with Spark and Python |c Franco Galeano, Manuel Ignacio
250     |a 1
264   1 |a Birmingham |b Packt Publishing Limited |c 2018
300     |a 1 Online-Ressource (142 Seiten)
336     |b txt |2 rdacontent
337     |b c |2 rdamedia
338     |b cr |2 rdacarrier
520     |a No need to spend hours ploughing through endless data - let Spark, one of the fastest big data processing engines available, do the hard work for you. Key Features: Get up and running with Apache Spark and Python; Integrate Spark with AWS for real-time analytics; Apply processed data streams to the machine learning APIs of Apache Spark. Book Description: Processing big data in real time is challenging due to scalability, information consistency, and fault tolerance. This book teaches you how to use Spark to make your overall analytical workflow faster and more efficient. You'll explore all the core concepts and tools within the Spark ecosystem, such as Spark Streaming, the Spark Streaming API, the machine learning extension, and structured streaming. You'll begin by learning data processing fundamentals using the Resilient Distributed Datasets (RDDs), SQL, Datasets, and DataFrames APIs.
520     |a After grasping these fundamentals, you'll move on to using the Spark Streaming APIs to consume data in real time from TCP sockets, and integrate Amazon Web Services (AWS) for stream consumption. By the end of this book, you'll not only understand how to use machine learning extensions and structured streams, but you'll also be able to apply Spark in your own upcoming big data projects.
520     |a What you will learn: Write your own Python programs that can interact with Spark; Implement data stream consumption using Apache Spark; Recognize common operations in Spark to process known data streams; Integrate Spark Streaming with Amazon Web Services (AWS); Create a collaborative filtering model with the MovieLens dataset; Apply processed data streams to Spark machine learning APIs. Who this book is for: Big Data Processing with Apache Spark is for you if you are a software engineer, architect, or IT professional who wants to explore distributed systems and big data analytics. Although you don't need any knowledge of Spark, prior experience of working with Python is recommended.
650   4 |a COMPUTERS / Data Processing
650   4 |a COMPUTERS / Data Visualization
912     |a ZDB-5-WPSE
999     |a oai:aleph.bib-bvb.de:BVB01-032476771
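The description in the 520 fields above mentions consuming data in real time from TCP sockets with Spark's streaming APIs. A minimal Structured Streaming sketch of that idea (not code from the book; it assumes a plain-text stream on localhost:9999, for example one opened with `nc -lk 9999`) could look like this:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import explode, split

spark = SparkSession.builder.appName("SocketWordCount").getOrCreate()

# Assumed source: a plain-text TCP socket on localhost:9999
lines = (spark.readStream
         .format("socket")
         .option("host", "localhost")
         .option("port", 9999)
         .load())

# Split each incoming line into words and keep a running count
words = lines.select(explode(split(lines.value, " ")).alias("word"))
counts = words.groupBy("word").count()

# Write the running counts to the console until interrupted
query = (counts.writeStream
         .outputMode("complete")
         .format("console")
         .start())
query.awaitTermination()
```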
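The description also lists building a collaborative filtering model on the MovieLens dataset. A short sketch of how that is commonly done with Spark ML's ALS estimator (again an illustration rather than the book's code; the ratings.csv path and the userId/movieId/rating column names are assumptions following the MovieLens convention):

```python
from pyspark.sql import SparkSession
from pyspark.ml.recommendation import ALS
from pyspark.ml.evaluation import RegressionEvaluator

spark = SparkSession.builder.appName("MovieLensALS").getOrCreate()

# Assumed input: a MovieLens-style CSV with userId, movieId, rating columns
ratings = (spark.read
           .option("header", True)
           .option("inferSchema", True)
           .csv("ratings.csv"))

train, test = ratings.randomSplit([0.8, 0.2], seed=42)

# coldStartStrategy="drop" discards predictions for users/items unseen in training
als = ALS(userCol="userId", itemCol="movieId", ratingCol="rating",
          rank=10, maxIter=10, regParam=0.1, coldStartStrategy="drop")
model = als.fit(train)

# Evaluate with RMSE on the held-out split
predictions = model.transform(test)
rmse = RegressionEvaluator(metricName="rmse", labelCol="rating",
                           predictionCol="prediction").evaluate(predictions)
print(f"Test RMSE: {rmse:.3f}")

# Top-5 movie recommendations per user
model.recommendForAllUsers(5).show(truncate=False)

spark.stop()
```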