Apache Spark 2.x for Java developers :: explore data at scale using the Java APIs of Apache Spark 2.x /
Main authors: Gulati, Sourav; Kumar, Sumit
Format: Electronic eBook
Language: English
Published: Birmingham, UK : Packt Publishing, 2017.
Subjects: Spark (Electronic resource : Apache Software Foundation); Application program interfaces (Computer software); Java (Computer program language)
Online access: Full text
Summary:
Unleash the data processing and analytics capability of Apache Spark with the language of choice: Java.

About This Book
- Perform big data processing with Spark, without having to learn Scala!
- Use the Spark Java API to implement efficient enterprise-grade applications for data processing and analytics.
- Go beyond mainstream data processing by adding querying capability, machine learning, and graph processing using Spark.

Who This Book Is For
If you are a Java developer interested in learning to use the popular Apache Spark framework, this book is the resource you need to get started. Apache Spark developers who are looking to build enterprise-grade applications in Java will also find this book very useful.

What You Will Learn
- Process data in different file formats such as XML, JSON, CSV, and plain and delimited text, using the Spark Core library.
- Perform analytics on data from various sources, such as Kafka and Flume, using the Spark Streaming library.
- Learn SQL schema creation and the analysis of structured data using various SQL functions, including windowing functions, in the Spark SQL library.
- Explore Spark MLlib APIs while implementing machine learning techniques to solve real-world problems.
- Get to know Spark GraphX so you understand the various graph-based analytics that can be performed with Spark.

In Detail
Apache Spark is the buzzword in the big data industry right now, especially with the increasing need for real-time streaming and data processing. While Spark is built on Scala, the Spark Java API exposes all the Spark features available in the Scala version to Java developers. This book will show you how you can implement various functionalities of the Apache Spark framework in Java, without stepping out of your comfort zone.

The book starts with an introduction to the Apache Spark 2.x ecosystem, followed by explaining how to install and configure Spark, and refreshes the Java concepts that will be useful to you when consuming Apache Spark's APIs. You will explore RDD and its associated common Action and Transformation Java APIs, set up a production-like clustered environment, and work with Spark SQL. Moving on, you will perform near-real-time processing with Spark Streaming, machine learning analytics with Spark MLlib, and graph processing with GraphX, all using various Java packages. By the end of the book, you will have a solid foundation in implementing components in the Spark framework in Java to build fast, real-time applications. Style an ...

Description: 1 online resource (1 volume) : illustrations
ISBN: 178712942X; 9781787129429
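The record's table of contents highlights "Word count on RDD" and a Java 8 refresher (streams, lambdas, collectors) as early exercises. As a minimal, Spark-free sketch of the same idea, here is a word count written with the plain JDK Streams API; the class name and input string are illustrative, not taken from the book:

```java
import java.util.Arrays;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

public class StreamsWordCount {

    // Count word occurrences using the Java 8 Streams API:
    // split on non-word characters, drop empty tokens, then
    // group identical words and count each group.
    public static Map<String, Long> wordCount(String text) {
        return Arrays.stream(text.toLowerCase().split("\\W+"))
                .filter(w -> !w.isEmpty())
                .collect(Collectors.groupingBy(
                        Function.identity(), Collectors.counting()));
    }

    public static void main(String[] args) {
        Map<String, Long> counts = wordCount("to be or not to be");
        System.out.println(counts.get("to"));  // 2
        System.out.println(counts.get("be"));  // 2
        System.out.println(counts.get("not")); // 1
    }
}
```

The same grouping-and-counting shape carries over almost unchanged to Spark's `JavaRDD`/`JavaPairRDD` transformations, which is presumably why the book revisits streams and lambdas before introducing the Spark APIs.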
Internal format
MARC
LEADER | 00000cam a2200000 i 4500 | ||
---|---|---|---|
001 | ZDB-4-EBA-on1001253546 | ||
003 | OCoLC | ||
005 | 20241004212047.0 | ||
006 | m o d | ||
007 | cr unu|||||||| | ||
008 | 170816s2017 enka o 000 0 eng d | ||
040 | |a UMI |b eng |e rda |e pn |c UMI |d IDEBK |d OCLCF |d TOH |d STF |d COO |d UOK |d CEF |d KSU |d NLE |d UKMGB |d UAB |d UKAHL |d N$T |d QGK |d OCLCQ |d OCLCO |d OCLCQ |d OCLCO |d OCLCL |d DXU | ||
015 | |a GBB7H2179 |2 bnb | ||
016 | 7 | |a 018470860 |2 Uk | |
020 | |a 178712942X | ||
020 | |a 9781787129429 |q (electronic bk.) | ||
020 | |z 9781787126497 | ||
035 | |a (OCoLC)1001253546 | ||
037 | |a CL0500000884 |b Safari Books Online | ||
050 | 4 | |a QA76.9.D343 | |
082 | 7 | |a 006.312 |2 23 | |
049 | |a MAIN | ||
100 | 1 | |a Gulati, Sourav, |e author. | |
245 | 1 | 0 | |a Apache Spark 2.x for Java developers : |b explore data at scale using the Java APIs of Apache Spark 2.x / |c Sourav Gulati, Sumit Kumar. |
264 | 1 | |a Birmingham, UK : |b Packt Publishing, |c 2017. | |
300 | |a 1 online resource (1 volume) : |b illustrations | ||
336 | |a text |b txt |2 rdacontent | ||
337 | |a computer |b c |2 rdamedia | ||
338 | |a online resource |b cr |2 rdacarrier | ||
588 | 0 | |a Online resource; title from title page (Safari, viewed August 14, 2017). | |
520 | |a Unleash the data processing and analytics capability of Apache Spark with the language of choice: Java About This Book Perform big data processing with Spark, without having to learn Scala! Use the Spark Java API to implement efficient enterprise-grade applications for data processing and analytics Go beyond mainstream data processing by adding querying capability, Machine Learning, and graph processing using Spark Who This Book Is For If you are a Java developer interested in learning to use the popular Apache Spark framework, this book is the resource you need to get started. Apache Spark developers who are looking to build enterprise-grade applications in Java will also find this book very useful. What You Will Learn Process data using different file formats such as XML, JSON, CSV, and plain and delimited text, using the Spark Core library. Perform analytics on data from various data sources such as Kafka and Flume using the Spark Streaming library Learn SQL schema creation and the analysis of structured data using various SQL functions including Windowing functions in the Spark SQL library Explore Spark MLlib APIs while implementing Machine Learning techniques to solve real-world problems Get to know Spark GraphX so you understand various graph-based analytics that can be performed with Spark In Detail Apache Spark is the buzzword in the big data industry right now, especially with the increasing need for real-time streaming and data processing. While Spark is built on Scala, the Spark Java API exposes all the Spark features available in the Scala version for Java developers. This book will show you how you can implement various functionalities of the Apache Spark framework in Java, without stepping out of your comfort zone. The book starts with an introduction to the Apache Spark 2.x ecosystem, followed by explaining how to install and configure Spark, and refreshes the Java concepts that will be useful to you when consuming Apache Spark's APIs. 
You will explore RDD and its associated common Action and Transformation Java APIs, set up a production-like clustered environment, and work with Spark SQL. Moving on, you will perform near-real-time processing with Spark Streaming, Machine Learning analytics with Spark MLlib, and graph processing with GraphX, all using various Java packages. By the end of the book, you will have a solid foundation in implementing components in the Spark framework in Java to build fast, real-time applications. Style an ... | ||
505 | 0 | |a Cover -- Copyright -- Credits -- Foreword -- About the Authors -- About the Reviewer -- www.PacktPub.com -- Customer Feedback -- Table of Contents -- Preface -- Chapter 1: Introduction to Spark -- Dimensions of big data -- What makes Hadoop so revolutionary? -- Defining HDFS -- NameNode -- HDFS I/O -- YARN -- Processing the flow of application submission in YARN -- Overview of MapReduce -- Why Apache Spark? -- RDD -- the first citizen of Spark -- Operations on RDD -- Lazy evaluation -- Benefits of RDD -- Exploring the Spark ecosystem -- What's new in Spark 2.X? -- References -- Summary -- Chapter 2: Revisiting Java -- Why use Java for Spark? -- Generics -- Creating your own generic type -- Interfaces -- Static method in an interface -- Default method in interface -- What if a class implements two interfaces which have default methods with same name and signature? -- Anonymous inner classes -- Lambda expressions -- Functional interface -- Syntax of Lambda expressions -- Lexical scoping -- Method reference -- Understanding closures -- Streams -- Generating streams -- Intermediate operations -- Working with intermediate operations -- Terminal operations -- Working with terminal operations -- String collectors -- Collection collectors -- Map collectors -- Groupings -- Partitioning -- Matching -- Finding elements -- Summary -- Chapter 3: Let Us Spark -- Getting started with Spark -- Spark REPL also known as CLI -- Some basic exercises using Spark shell -- Checking Spark version -- Creating and filtering RDD -- Word count on RDD -- Finding the sum of all even numbers in an RDD of integers -- Counting the number of words in a file -- Spark components -- Spark Driver Web UI -- Jobs -- Stages -- Storage -- Environment -- Executors -- SQL -- Streaming -- Spark job configuration and submission -- Spark REST APIs -- Summary. | |
505 | 8 | |a Chapter 4: Understanding the Spark Programming Model -- Hello Spark -- Prerequisites -- Common RDD transformations -- Map -- Filter -- flatMap -- mapToPair -- flatMapToPair -- union -- Intersection -- Distinct -- Cartesian -- groupByKey -- reduceByKey -- sortByKey -- Join -- CoGroup -- Common RDD actions -- isEmpty -- collect -- collectAsMap -- count -- countByKey -- countByValue -- Max -- Min -- First -- Take -- takeOrdered -- takeSample -- top -- reduce -- Fold -- aggregate -- forEach -- saveAsTextFile -- saveAsObjectFile -- RDD persistence and cache -- Summary -- Chapter 5: Working with Data and Storage -- Interaction with external storage systems -- Interaction with local filesystem -- Interaction with Amazon S3 -- Interaction with HDFS -- Interaction with Cassandra -- Working with different data formats -- Plain and specially formatted text -- Working with CSV data -- Working with JSON data -- Working with XML Data -- References -- Summary -- Chapter 6: Spark on Cluster -- Spark application in distributed-mode -- Driver program -- Executor program -- Cluster managers -- Spark standalone -- Installation of Spark standalone cluster -- Start master -- Start slave -- Stop master and slaves -- Deploying applications on Spark standalone cluster -- Client mode -- Cluster mode -- Useful job configurations -- Useful cluster level configurations (Spark standalone) -- Yet Another Resource Negotiator (YARN) -- YARN client -- YARN cluster -- Useful job configuration -- Summary -- Chapter 7: Spark Programming Model -- Advanced -- RDD partitioning -- Repartitioning -- How Spark calculates the partition count for transformations with shuffling (wide transformations ) -- Partitioner -- Hash Partitioner -- Range Partitioner -- Custom Partitioner -- Advanced transformations -- mapPartitions -- mapPartitionsWithIndex -- mapPartitionsToPair -- mapValues. | |
505 | 8 | |a FlatMapValues -- repartitionAndSortWithinPartitions -- coalesce -- foldByKey -- aggregateByKey -- combineByKey -- Advanced actions -- Approximate actions -- Asynchronous actions -- Miscellaneous actions -- Shared variable -- Broadcast variable -- Properties of the broadcast variable -- Lifecycle of a broadcast variable -- Map-side join using broadcast variable -- Accumulators -- Driver program -- Summary -- Chapter 8: Working with Spark SQL -- SQLContext and HiveContext -- Initializing SparkSession -- Reading CSV using SparkSession -- Dataframe and dataset -- SchemaRDD -- Dataframe -- Dataset -- Creating a dataset using encoders -- Creating a dataset using StructType -- Unified dataframe and dataset API -- Data persistence -- Spark SQL operations -- Untyped dataset operation -- Temporary view -- Global temporary view -- Spark UDF -- Spark UDAF -- Untyped UDAF -- Type-safe UDAF: -- Hive integration -- Table Persistence -- Summary -- Chapter 9: Near Real-Time Processing with Spark Streaming -- Introducing Spark Streaming -- Understanding micro batching -- Getting started with Spark Streaming jobs -- Streaming sources -- fileStream -- Kafka -- Streaming transformations -- Stateless transformation -- Stateful transformation -- Checkpointing -- Windowing -- Transform operation -- Fault tolerance and reliability -- Data receiver stage -- File streams -- Advanced streaming sources -- Transformation stage -- Output stage -- Structured Streaming -- Recap of the use case -- Structured streaming -- programming model -- Built-in input sources and sinks -- Input sources -- Built-in Sinks -- Summary -- Chapter 10: Machine Learning Analytics with Spark MLlib -- Introduction to machine learning -- Concepts of machine learning -- Datatypes -- Machine learning work flow -- Pipelines -- Operations on feature vectors -- Feature extractors -- Feature transformers. | |
505 | 8 | |a Feature selectors -- Summary -- Chapter 11: Learning Spark GraphX -- Introduction to GraphX -- Introduction to Property Graph -- Getting started with the GraphX API -- Using vertex and edge RDDs -- From edges -- EdgeTriplet -- Graph operations -- mapVertices -- mapEdges -- mapTriplets -- reverse -- subgraph -- aggregateMessages -- outerJoinVertices -- Graph algorithms -- PageRank -- Static PageRank -- Dynamic PageRank -- Triangle counting -- Connected components -- Summary -- Index. | |
630 | 0 | 0 | |a Spark (Electronic resource : Apache Software Foundation) |0 http://id.loc.gov/authorities/names/no2015027445 |
630 | 0 | 7 | |a Spark (Electronic resource : Apache Software Foundation) |2 fast |
650 | 0 | |a Application program interfaces (Computer software) |0 http://id.loc.gov/authorities/subjects/sh98004527 | |
650 | 0 | |a Java (Computer program language) |0 http://id.loc.gov/authorities/subjects/sh95008574 | |
650 | 6 | |a Interfaces de programmation d'applications. | |
650 | 6 | |a Java (Langage de programmation) | |
650 | 7 | |a APIs (interfaces) |2 aat | |
650 | 7 | |a COMPUTERS |x Data Processing. |2 bisacsh | |
650 | 7 | |a COMPUTERS |x Databases |x Data Mining. |2 bisacsh | |
650 | 7 | |a COMPUTERS |x Data Modeling & Design. |2 bisacsh | |
650 | 7 | |a Application program interfaces (Computer software) |2 fast | |
650 | 7 | |a Java (Computer program language) |2 fast | |
700 | 1 | |a Kumar, Sumit, |e author. | |
758 | |i has work: |a Apache Spark 2.x for Java Developers (Text) |1 https://id.oclc.org/worldcat/entity/E39PCYmB8PyhWmPQxjQmj3QMKd |4 https://id.oclc.org/worldcat/ontology/hasWork | ||
856 | 4 | 0 | |l FWS01 |p ZDB-4-EBA |q FWS_PDA_EBA |u https://search.ebscohost.com/login.aspx?direct=true&scope=site&db=nlebk&AN=1562679 |3 Volltext |
938 | |a Askews and Holts Library Services |b ASKH |n AH31954928 | ||
938 | |a EBSCOhost |b EBSC |n 1562679 | ||
938 | |a ProQuest MyiLibrary Digital eBook Collection |b IDEB |n cis36498409 | ||
994 | |a 92 |b GEBAY | ||
912 | |a ZDB-4-EBA | ||
049 | |a DE-863 |
ind1="4" ind2="0"><subfield code="l">FWS01</subfield><subfield code="p">ZDB-4-EBA</subfield><subfield code="q">FWS_PDA_EBA</subfield><subfield code="u">https://search.ebscohost.com/login.aspx?direct=true&scope=site&db=nlebk&AN=1562679</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="938" ind1=" " ind2=" "><subfield code="a">Askews and Holts Library Services</subfield><subfield code="b">ASKH</subfield><subfield code="n">AH31954928</subfield></datafield><datafield tag="938" ind1=" " ind2=" "><subfield code="a">EBSCOhost</subfield><subfield code="b">EBSC</subfield><subfield code="n">1562679</subfield></datafield><datafield tag="938" ind1=" " ind2=" "><subfield code="a">ProQuest MyiLibrary Digital eBook Collection</subfield><subfield code="b">IDEB</subfield><subfield code="n">cis36498409</subfield></datafield><datafield tag="994" ind1=" " ind2=" "><subfield code="a">92</subfield><subfield code="b">GEBAY</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">ZDB-4-EBA</subfield></datafield><datafield tag="049" ind1=" " ind2=" "><subfield code="a">DE-863</subfield></datafield></record></collection> |
id | ZDB-4-EBA-on1001253546 |
illustrated | Illustrated |
indexdate | 2024-11-27T13:27:58Z |
institution | BVB |
isbn | 178712942X 9781787129429 |
language | English |
oclc_num | 1001253546 |
open_access_boolean | |
owner | MAIN DE-863 DE-BY-FWS |
owner_facet | MAIN DE-863 DE-BY-FWS |
physical | 1 online resource (1 volume) : illustrations |
psigel | ZDB-4-EBA |
publishDate | 2017 |
publishDateSearch | 2017 |
publishDateSort | 2017 |
publisher | Packt Publishing, |
record_format | marc |
spelling | Gulati, Sourav, author. Apache Spark 2.x for Java developers : explore data at scale using the Java APIs of Apache Spark 2.x / Sourav Gulati, Sumit Kumar. Birmingham, UK : Packt Publishing, 2017. 1 online resource (1 volume) : illustrations text txt rdacontent computer c rdamedia online resource cr rdacarrier Online resource; title from title page (Safari, viewed August 14, 2017). Unleash the data processing and analytics capability of Apache Spark with the language of choice: Java About This Book Perform big data processing with Spark--without having to learn Scala! Use the Spark Java API to implement efficient enterprise-grade applications for data processing and analytics Go beyond mainstream data processing by adding querying capability, Machine Learning, and graph processing using Spark Who This Book Is For If you are a Java developer interested in learning to use the popular Apache Spark framework, this book is the resource you need to get started. Apache Spark developers who are looking to build enterprise-grade applications in Java will also find this book very useful. What You Will Learn Process data using different file formats such as XML, JSON, CSV, and plain and delimited text, using the Spark core Library. Perform analytics on data from various data sources such as Kafka, and Flume using Spark Streaming Library Learn SQL schema creation and the analysis of structured data using various SQL functions including Windowing functions in the Spark SQL Library Explore Spark Mlib APIs while implementing Machine Learning techniques to solve real-world problems Get to know Spark GraphX so you understand various graph-based analytics that can be performed with Spark In Detail Apache Spark is the buzzword in the big data industry right now, especially with the increasing need for real-time streaming and data processing. While Spark is built on Scala, the Spark Java API exposes all the Spark features available in the Scala version for Java developers. 
This book will show you how you can implement various functionalities of the Apache Spark framework in Java, without stepping out of your comfort zone. The book starts with an introduction to the Apache Spark 2.x ecosystem, followed by explaining how to install and configure Spark, and refreshes the Java concepts that will be useful to you when consuming Apache Spark's APIs. You will explore RDD and its associated common Action and Transformation Java APIs, set up a production-like clustered environment, and work with Spark SQL. Moving on, you will perform near-real-time processing with Spark streaming, Machine Learning analytics with Spark MLlib, and graph processing with GraphX, all using various Java packages. By the end of the book, you will have a solid foundation in implementing components in the Spark framework in Java to build fast, real-time applications. Style an ... Cover -- Copyright -- Credits -- Foreword -- About the Authors -- About the Reviewer -- www.PacktPub.com -- Customer Feedback -- Table of Contents -- Preface -- Chapter 1: Introduction to Spark -- Dimensions of big data -- What makes Hadoop so revolutionary? -- Defining HDFS -- NameNode -- HDFS I/O -- YARN -- Processing the flow of application submission in YARN -- Overview of MapReduce -- Why Apache Spark? -- RDD -- the first citizen of Spark -- Operations on RDD -- Lazy evaluation -- Benefits of RDD -- Exploring the Spark ecosystem -- What's new in Spark 2.X? -- References -- Summary -- Chapter 2: Revisiting Java -- Why use Java for Spark? -- Generics -- Creating your own generic type -- Interfaces -- Static method in an interface -- Default method in interface -- What if a class implements two interfaces which have default methods with same name and signature? 
-- Anonymous inner classes -- Lambda expressions -- Functional interface -- Syntax of Lambda expressions -- Lexical scoping -- Method reference -- Understanding closures -- Streams -- Generating streams -- Intermediate operations -- Working with intermediate operations -- Terminal operations -- Working with terminal operations -- String collectors -- Collection collectors -- Map collectors -- Groupings -- Partitioning -- Matching -- Finding elements -- Summary -- Chapter 3: Let Us Spark -- Getting started with Spark -- Spark REPL also known as CLI -- Some basic exercises using Spark shell -- Checking Spark version -- Creating and filtering RDD -- Word count on RDD -- Finding the sum of all even numbers in an RDD of integers -- Counting the number of words in a file -- Spark components -- Spark Driver Web UI -- Jobs -- Stages -- Storage -- Environment -- Executors -- SQL -- Streaming -- Spark job configuration and submission -- Spark REST APIs -- Summary. Chapter 4: Understanding the Spark Programming Model -- Hello Spark -- Prerequisites -- Common RDD transformations -- Map -- Filter -- flatMap -- mapToPair -- flatMapToPair -- union -- Intersection -- Distinct -- Cartesian -- groupByKey -- reduceByKey -- sortByKey -- Join -- CoGroup -- Common RDD actions -- isEmpty -- collect -- collectAsMap -- count -- countByKey -- countByValue -- Max -- Min -- First -- Take -- takeOrdered -- takeSample -- top -- reduce -- Fold -- aggregate -- forEach -- saveAsTextFile -- saveAsObjectFile -- RDD persistence and cache -- Summary -- Chapter 5: Working with Data and Storage -- Interaction with external storage systems -- Interaction with local filesystem -- Interaction with Amazon S3 -- Interaction with HDFS -- Interaction with Cassandra -- Working with different data formats -- Plain and specially formatted text -- Working with CSV data -- Working with JSON data -- Working with XML Data -- References -- Summary -- Chapter 6: Spark on Cluster -- Spark application in distributed-mode 
-- Driver program -- Executor program -- Cluster managers -- Spark standalone -- Installation of Spark standalone cluster -- Start master -- Start slave -- Stop master and slaves -- Deploying applications on Spark standalone cluster -- Client mode -- Cluster mode -- Useful job configurations -- Useful cluster level configurations (Spark standalone) -- Yet Another Resource Negotiator (YARN) -- YARN client -- YARN cluster -- Useful job configuration -- Summary -- Chapter 7: Spark Programming Model -- Advanced -- RDD partitioning -- Repartitioning -- How Spark calculates the partition count for transformations with shuffling (wide transformations ) -- Partitioner -- Hash Partitioner -- Range Partitioner -- Custom Partitioner -- Advanced transformations -- mapPartitions -- mapPartitionsWithIndex -- mapPartitionsToPair -- mapValues. FlatMapValues -- repartitionAndSortWithinPartitions -- coalesce -- foldByKey -- aggregateByKey -- combineByKey -- Advanced actions -- Approximate actions -- Asynchronous actions -- Miscellaneous actions -- Shared variable -- Broadcast variable -- Properties of the broadcast variable -- Lifecycle of a broadcast variable -- Map-side join using broadcast variable -- Accumulators -- Driver program -- Summary -- Chapter 8: Working with Spark SQL -- SQLContext and HiveContext -- Initializing SparkSession -- Reading CSV using SparkSession -- Dataframe and dataset -- SchemaRDD -- Dataframe -- Dataset -- Creating a dataset using encoders -- Creating a dataset using StructType -- Unified dataframe and dataset API -- Data persistence -- Spark SQL operations -- Untyped dataset operation -- Temporary view -- Global temporary view -- Spark UDF -- Spark UDAF -- Untyped UDAF -- Type-safe UDAF: -- Hive integration -- Table Persistence -- Summary -- Chapter 9: Near Real-Time Processing with Spark Streaming -- Introducing Spark Streaming -- Understanding micro batching -- Getting started with Spark Streaming jobs -- Streaming sources -- fileStream -- Kafka -- 
Streaming transformations -- Stateless transformation -- Stateful transformation -- Checkpointing -- Windowing -- Transform operation -- Fault tolerance and reliability -- Data receiver stage -- File streams -- Advanced streaming sources -- Transformation stage -- Output stage -- Structured Streaming -- Recap of the use case -- Structured streaming -- programming model -- Built-in input sources and sinks -- Input sources -- Built-in Sinks -- Summary -- Chapter 10: Machine Learning Analytics with Spark MLlib -- Introduction to machine learning -- Concepts of machine learning -- Datatypes -- Machine learning work flow -- Pipelines -- Operations on feature vectors -- Feature extractors -- Feature transformers. Feature selectors -- Summary -- Chapter 11: Learning Spark GraphX -- Introduction to GraphX -- Introduction to Property Graph -- Getting started with the GraphX API -- Using vertex and edge RDDs -- From edges -- EdgeTriplet -- Graph operations -- mapVertices -- mapEdges -- mapTriplets -- reverse -- subgraph -- aggregateMessages -- outerJoinVertices -- Graph algorithms -- PageRank -- Static PageRank -- Dynamic PageRank -- Triangle counting -- Connected components -- Summary -- Index. Spark (Electronic resource : Apache Software Foundation) http://id.loc.gov/authorities/names/no2015027445 Spark (Electronic resource : Apache Software Foundation) fast Application program interfaces (Computer software) http://id.loc.gov/authorities/subjects/sh98004527 Java (Computer program language) http://id.loc.gov/authorities/subjects/sh95008574 Interfaces de programmation d'applications. Java (Langage de programmation) APIs (interfaces) aat COMPUTERS Data Processing. bisacsh COMPUTERS Databases Data Mining. bisacsh COMPUTERS Data Modeling & Design. bisacsh Application program interfaces (Computer software) fast Java (Computer program language) fast Kumar, Sumit, author. 
has work: Apache Spark 2.x for Java Developers (Text) https://id.oclc.org/worldcat/entity/E39PCYmB8PyhWmPQxjQmj3QMKd https://id.oclc.org/worldcat/ontology/hasWork FWS01 ZDB-4-EBA FWS_PDA_EBA https://search.ebscohost.com/login.aspx?direct=true&scope=site&db=nlebk&AN=1562679 Volltext |
spellingShingle | Gulati, Sourav Kumar, Sumit Apache Spark 2.x for Java developers : explore data at scale using the Java APIs of Apache Spark 2.x / Cover -- Copyright -- Credits -- Foreword -- About the Authors -- About the Reviewer -- www.PacktPub.com -- Customer Feedback -- Table of Contents -- Preface -- Chapter 1: Introduction to Spark -- Dimensions of big data -- What makes Hadoop so revolutionary? -- Defining HDFS -- NameNode -- HDFS I/O -- YARN -- Processing the flow of application submission in YARN -- Overview of MapReduce -- Why Apache Spark? -- RDD -- the first citizen of Spark -- Operations on RDD -- Lazy evaluation -- Benefits of RDD -- Exploring the Spark ecosystem -- What's new in Spark 2.X? -- References -- Summary -- Chapter 2: Revisiting Java -- Why use Java for Spark? -- Generics -- Creating your own generic type -- Interfaces -- Static method in an interface -- Default method in interface -- What if a class implements two interfaces which have default methods with same name and signature? -- Anonymous inner classes -- Lambda expressions -- Functional interface -- Syntax of Lambda expressions -- Lexical scoping -- Method reference -- Understanding closures -- Streams -- Generating streams -- Intermediate operations -- Working with intermediate operations -- Terminal operations -- Working with terminal operations -- String collectors -- Collection collectors -- Map collectors -- Groupings -- Partitioning -- Matching -- Finding elements -- Summary -- Chapter 3: Let Us Spark -- Getting started with Spark -- Spark REPL also known as CLI -- Some basic exercises using Spark shell -- Checking Spark version -- Creating and filtering RDD -- Word count on RDD -- Finding the sum of all even numbers in an RDD of integers -- Counting the number of words in a file -- Spark components -- Spark Driver Web UI -- Jobs -- Stages -- Storage -- Environment -- Executors -- SQL -- Streaming -- Spark job configuration and submission -- Spark REST APIs -- Summary. 
Chapter 4: Understanding the Spark Programming Model -- Hello Spark -- Prerequisites -- Common RDD transformations -- Map -- Filter -- flatMap -- mapToPair -- flatMapToPair -- union -- Intersection -- Distinct -- Cartesian -- groupByKey -- reduceByKey -- sortByKey -- Join -- CoGroup -- Common RDD actions -- isEmpty -- collect -- collectAsMap -- count -- countByKey -- countByValue -- Max -- Min -- First -- Take -- takeOrdered -- takeSample -- top -- reduce -- Fold -- aggregate -- forEach -- saveAsTextFile -- saveAsObjectFile -- RDD persistence and cache -- Summary -- Chapter 5: Working with Data and Storage -- Interaction with external storage systems -- Interaction with local filesystem -- Interaction with Amazon S3 -- Interaction with HDFS -- Interaction with Cassandra -- Working with different data formats -- Plain and specially formatted text -- Working with CSV data -- Working with JSON data -- Working with XML Data -- References -- Summary -- Chapter 6: Spark on Cluster -- Spark application in distributed-mode -- Driver program -- Executor program -- Cluster managers -- Spark standalone -- Installation of Spark standalone cluster -- Start master -- Start slave -- Stop master and slaves -- Deploying applications on Spark standalone cluster -- Client mode -- Cluster mode -- Useful job configurations -- Useful cluster level configurations (Spark standalone) -- Yet Another Resource Negotiator (YARN) -- YARN client -- YARN cluster -- Useful job configuration -- Summary -- Chapter 7: Spark Programming Model -- Advanced -- RDD partitioning -- Repartitioning -- How Spark calculates the partition count for transformations with shuffling (wide transformations ) -- Partitioner -- Hash Partitioner -- Range Partitioner -- Custom Partitioner -- Advanced transformations -- mapPartitions -- mapPartitionsWithIndex -- mapPartitionsToPair -- mapValues. 
FlatMapValues -- repartitionAndSortWithinPartitions -- coalesce -- foldByKey -- aggregateByKey -- combineByKey -- Advanced actions -- Approximate actions -- Asynchronous actions -- Miscellaneous actions -- Shared variable -- Broadcast variable -- Properties of the broadcast variable -- Lifecycle of a broadcast variable -- Map-side join using broadcast variable -- Accumulators -- Driver program -- Summary -- Chapter 8: Working with Spark SQL -- SQLContext and HiveContext -- Initializing SparkSession -- Reading CSV using SparkSession -- Dataframe and dataset -- SchemaRDD -- Dataframe -- Dataset -- Creating a dataset using encoders -- Creating a dataset using StructType -- Unified dataframe and dataset API -- Data persistence -- Spark SQL operations -- Untyped dataset operation -- Temporary view -- Global temporary view -- Spark UDF -- Spark UDAF -- Untyped UDAF -- Type-safe UDAF: -- Hive integration -- Table Persistence -- Summary -- Chapter 9: Near Real-Time Processing with Spark Streaming -- Introducing Spark Streaming -- Understanding micro batching -- Getting started with Spark Streaming jobs -- Streaming sources -- fileStream -- Kafka -- Streaming transformations -- Stateless transformation -- Stateful transformation -- Checkpointing -- Windowing -- Transform operation -- Fault tolerance and reliability -- Data receiver stage -- File streams -- Advanced streaming sources -- Transformation stage -- Output stage -- Structured Streaming -- Recap of the use case -- Structured streaming -- programming model -- Built-in input sources and sinks -- Input sources -- Built-in Sinks -- Summary -- Chapter 10: Machine Learning Analytics with Spark MLlib -- Introduction to machine learning -- Concepts of machine learning -- Datatypes -- Machine learning work flow -- Pipelines -- Operations on feature vectors -- Feature extractors -- Feature transformers. 
Feature selectors -- Summary -- Chapter 11: Learning Spark GraphX -- Introduction to GraphX -- Introduction to Property Graph -- Getting started with the GraphX API -- Using vertex and edge RDDs -- From edges -- EdgeTriplet -- Graph operations -- mapVertices -- mapEdges -- mapTriplets -- reverse -- subgraph -- aggregateMessages -- outerJoinVertices -- Graph algorithms -- PageRank -- Static PageRank -- Dynamic PageRank -- Triangle counting -- Connected components -- Summary -- Index. Spark (Electronic resource : Apache Software Foundation) http://id.loc.gov/authorities/names/no2015027445 Spark (Electronic resource : Apache Software Foundation) fast Application program interfaces (Computer software) http://id.loc.gov/authorities/subjects/sh98004527 Java (Computer program language) http://id.loc.gov/authorities/subjects/sh95008574 Interfaces de programmation d'applications. Java (Langage de programmation) APIs (interfaces) aat COMPUTERS Data Processing. bisacsh COMPUTERS Databases Data Mining. bisacsh COMPUTERS Data Modeling & Design. bisacsh Application program interfaces (Computer software) fast Java (Computer program language) fast |
subject_GND | http://id.loc.gov/authorities/names/no2015027445 http://id.loc.gov/authorities/subjects/sh98004527 http://id.loc.gov/authorities/subjects/sh95008574 |
title | Apache Spark 2.x for Java developers : explore data at scale using the Java APIs of Apache Spark 2.x / |
title_auth | Apache Spark 2.x for Java developers : explore data at scale using the Java APIs of Apache Spark 2.x / |
title_exact_search | Apache Spark 2.x for Java developers : explore data at scale using the Java APIs of Apache Spark 2.x / |
title_full | Apache Spark 2.x for Java developers : explore data at scale using the Java APIs of Apache Spark 2.x / Sourav Gulati, Sumit Kumar. |
title_fullStr | Apache Spark 2.x for Java developers : explore data at scale using the Java APIs of Apache Spark 2.x / Sourav Gulati, Sumit Kumar. |
title_full_unstemmed | Apache Spark 2.x for Java developers : explore data at scale using the Java APIs of Apache Spark 2.x / Sourav Gulati, Sumit Kumar. |
title_short | Apache Spark 2.x for Java developers : |
title_sort | apache spark 2 x for java developers explore data at scale using the java apis of apache spark 2 x |
title_sub | explore data at scale using the Java APIs of Apache Spark 2.x / |
topic | Spark (Electronic resource : Apache Software Foundation) http://id.loc.gov/authorities/names/no2015027445 Spark (Electronic resource : Apache Software Foundation) fast Application program interfaces (Computer software) http://id.loc.gov/authorities/subjects/sh98004527 Java (Computer program language) http://id.loc.gov/authorities/subjects/sh95008574 Interfaces de programmation d'applications. Java (Langage de programmation) APIs (interfaces) aat COMPUTERS Data Processing. bisacsh COMPUTERS Databases Data Mining. bisacsh COMPUTERS Data Modeling & Design. bisacsh Application program interfaces (Computer software) fast Java (Computer program language) fast |
topic_facet | Spark (Electronic resource : Apache Software Foundation) Application program interfaces (Computer software) Java (Computer program language) Interfaces de programmation d'applications. Java (Langage de programmation) APIs (interfaces) COMPUTERS Data Processing. COMPUTERS Databases Data Mining. COMPUTERS Data Modeling & Design. |
url | https://search.ebscohost.com/login.aspx?direct=true&scope=site&db=nlebk&AN=1562679 |
work_keys_str_mv | AT gulatisourav apachespark2xforjavadevelopersexploredataatscaleusingthejavaapisofapachespark2x AT kumarsumit apachespark2xforjavadevelopersexploredataatscaleusingthejavaapisofapachespark2x |