
Introduction to Spark Programming


Duration: 3 Days

Method: Instructor-led, hands-on workshops

Price: $1920.00

Course Code: AP3000


Audience

Developers, data analysts, architects, technical managers, and anyone who needs to use Spark in a hands-on manner.

Description

The course provides a solid technical introduction to the Spark architecture and how Spark works. It covers the basic building blocks of Spark (e.g., RDDs and the distributed compute engine) as well as the higher-level constructs that provide a simpler and more capable interface (e.g., Datasets/DataFrames and Spark SQL). It includes in-depth coverage of Spark SQL, DataFrames, and Datasets, which are now the preferred programming API; this includes exploring possible performance issues and strategies for optimization. The course also covers more advanced capabilities, such as using Spark Streaming to process streaming data and integrating with Kafka. Labs are supported in both Python and Scala.

Objectives

Upon successful completion of this course, the student will be able to:

  • Understand the need for Spark in data processing
  • Understand the Spark architecture and how it distributes computations to cluster nodes
  • Be familiar with the basic installation, setup, and layout of Spark
  • Use Spark for interactive and ad-hoc operations
  • Use Dataset/DataFrame/Spark SQL to efficiently process structured data
  • Understand the basics of RDDs (Resilient Distributed Datasets), including data partitioning, pipelining, and computation
  • Understand Spark's data caching and its usage
  • Understand performance implications and optimizations when using Spark
  • Be familiar with Spark graph processing (GraphX) and machine learning (MLlib)

Prerequisites

Students should have an introductory knowledge of Python or Scala. An overview of Scala is provided if needed.

Topics

  • I. Scala Ramp Up (Optional)
    • Scala Introduction, Variables, Data Types, Control Flow
    • The Scala Interpreter
    • Collections and their Standard Methods (e.g. map())
    • Functions, Methods, Function Literals
    • Class, Object, Trait
  • II. Introduction to Spark
    • Overview, Motivations, Spark Systems
    • Spark Ecosystem
    • Spark vs. Hadoop
    • Typical Spark Deployment and Usage Environments
  • III. RDDs and Spark Architecture
    • RDD Concepts, Partitions, Lifecycle, Lazy Evaluation
    • Working with RDDs - Creating and Transforming (map, filter, etc.)
    • Caching - Concepts, Storage Type, Guidelines
  • IV. DataSets/DataFrames and Spark SQL
    • Introduction and Usage
    • Creating and Using a DataSet
    • Working with JSON
    • Using the DataSet DSL
    • Using SQL with Spark
    • Data Formats
    • Optimizations: Catalyst and Tungsten
    • DataSets vs. DataFrames vs. RDDs
  • V. Creating Spark Applications
    • Overview, Basic Driver Code, SparkConf
    • Creating and Using a SparkContext/SparkSession
    • Building and Running Applications
    • Application Lifecycle
    • Cluster Managers
    • Logging and Debugging
  • VI. Spark Streaming
    • Overview and Streaming Basics
    • Structured Streaming
    • DStreams (Discretized Streams)
    • Architecture, Stateless, Stateful, and Windowed Transformations
    • Spark Streaming API
    • Programming and Transformations
  • VII. Performance Characteristics and Tuning
    • The Spark UI
    • Narrow vs. Wide Dependencies
    • Minimizing Data Processing and Shuffling
    • Caching - Concepts, Storage Type, Guidelines
    • Using Caching
    • Using Broadcast Variables and Accumulators
  • VIII. (Optional): Spark GraphX Overview
    • Introduction
    • Constructing Simple Graphs
    • GraphX API
    • Shortest Path Example
  • IX. (Optional): MLlib Overview
    • Introduction
    • Feature Vectors
    • Clustering / Grouping, K-Means
    • Recommendations
    • Classifications