Apache Spark with Scala – Hands-On with Big Data!


This is a comprehensive and practical Apache Spark course. In this course, you will learn and master the art of framing data analysis problems as Spark problems through 20+ hands-on examples, and then scale them up to run on cloud computing services. This edition covers Spark 3, IntelliJ, and Structured Streaming, and places a stronger focus on the Dataset API.

30-Day Money-Back Guarantee
Full Lifetime Access.
69 on-demand videos & exercises
Level: Beginner
English
8hrs 55mins
Access on mobile, web and TV

What to know about this course

“Big data” analysis is a hot and highly valuable skill, and this course will teach you the hottest technology in big data: Apache Spark. Employers including Amazon, eBay, NASA JPL, and Yahoo all use Spark to quickly extract meaning from massive datasets across a fault-tolerant Hadoop cluster. You will learn those same techniques using your own Windows system right at home. It is easier than you think, and you will learn from an ex-engineer and senior manager from Amazon and IMDb. In this course, you will learn the concepts of Spark’s Resilient Distributed Datasets (RDDs), DataFrames, and Datasets.
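
To give you a feel for the difference between those APIs, here is a minimal sketch (not taken from the course exercises; the Person case class, the sample data, and the local[*] master are purely illustrative) that filters the same tiny dataset first as an RDD and then as a typed Dataset:

    import org.apache.spark.sql.SparkSession

    // Illustrative record type; kept at the top level so Spark can derive an encoder for it.
    case class Person(name: String, age: Int)

    object RDDvsDataset {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("RDDvsDataset")
          .master("local[*]")          // run locally, using every CPU core
          .getOrCreate()
        import spark.implicits._       // enables .toDS() and other conversions

        // RDD API: low-level, untyped tuples transformed with functional operations.
        val rdd       = spark.sparkContext.parallelize(Seq(("Alice", 34), ("Bob", 19)))
        val adultsRdd = rdd.filter { case (_, age) => age >= 21 }

        // Dataset API: the same data, but with a schema and compile-time types.
        val ds       = Seq(Person("Alice", 34), Person("Bob", 19)).toDS()
        val adultsDs = ds.filter(_.age >= 21)

        println(adultsRdd.collect().mkString(", "))
        adultsDs.show()
        spark.stop()
      }
    }

The RDD version manipulates raw tuples, while the Dataset version carries a schema and compile-time types.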

The course also includes a crash course in the Scala programming language to get you up to speed. You will learn how to develop and run Spark jobs quickly using Scala, IntelliJ, and SBT, and how to translate complex analysis problems into iterative or multi-stage Spark scripts. You will learn how to scale up to larger datasets using Amazon’s Elastic MapReduce service and understand how Hadoop YARN distributes Spark across computing clusters. You will also practice using other Spark technologies, such as Spark SQL, DataFrames, DataSets, Spark Streaming, Machine Learning, and GraphX. By the end of this course, you will be running code that analyzes gigabytes’ worth of information, in the cloud, in a matter of minutes.
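
If you have never used SBT, the build file for a Spark project is short. Below is a hypothetical build.sbt sketch (the version numbers are illustrative and not prescribed by the course); marking the Spark artifacts as "provided" keeps them out of your packaged jar, since the cluster, whether your local install or Amazon EMR under YARN, supplies Spark at runtime:

    // Hypothetical build.sbt sketch; the versions are illustrative, not prescribed by the course.
    name         := "spark-examples"
    version      := "0.1"
    scalaVersion := "2.12.18"   // must match the Scala version your Spark build uses

    libraryDependencies ++= Seq(
      // "provided": the cluster supplies these jars at runtime, so they are
      // compiled against but left out of the packaged jar.
      "org.apache.spark" %% "spark-core" % "3.3.2" % "provided",
      "org.apache.spark" %% "spark-sql"  % "3.3.2" % "provided"
    )

For a project with no extra third-party libraries, running sbt package then produces a jar that spark-submit can run on your own machine or hand to an EMR cluster unchanged.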

Who's this course for?

This course is designed for software engineers who want to expand their skills into the world of big data processing on a cluster. It is necessary to have some prior programming or scripting knowledge.

What you'll learn

  • Learn the concepts of Spark’s RDDs, DataFrames, and Datasets
  • Get a crash course in the Scala programming language
  • Develop and run Spark jobs quickly using Scala, IntelliJ, and SBT
  • Translate complex analysis problems into iterative or multi-stage Spark scripts
  • Scale up to larger datasets using Amazon’s Elastic MapReduce service
  • Understand how Hadoop YARN distributes Spark across computing clusters

Key Features

  • Understand the fundamentals of Scala and the Apache Spark ecosystem.
  • Develop distributed code using the Scala programming language.
  • Work through practical examples that help you develop real-world big data applications with Spark and Scala.

About the Author

Frank Kane

Frank Kane spent nine years at Amazon and IMDb, developing and managing the technology that automatically delivers product and movie recommendations to hundreds of millions of customers around the clock. He holds 17 issued patents in the fields of distributed computing, data mining, and machine learning. In 2012, Frank left to start his own successful company, Sundog Software, which focuses on virtual reality environment technology and teaches others about big data analysis.
