Real-Time Stream Processing Using Apache Spark 3 for Python Developers

Get to grips with real-time stream processing using PySpark and Spark Structured Streaming, and apply that knowledge to build stream processing solutions. This course is example-driven and follows a working-session-like approach.

  • 41 on-demand videos and exercises
  • Level: Beginner
  • Language: English
  • Duration: 4 hrs 34 mins
  • Access on mobile, web, and TV

What to know about this course

Take your first steps towards discovering, learning, and using Apache Spark 3.0. This carefully structured course takes a live coding approach and explains all the core concepts you need along the way. You will start with real-time stream processing concepts and the Spark Structured Streaming APIs and architecture, then work with file streams and the Kafka source and integrate Spark with Kafka. Next, you will learn about stateless and stateful streaming transformations, windowing aggregates, and watermarking with state cleanup. After that, you will cover streaming joins and aggregations, including how to handle the memory problems streaming joins can cause. Finally, you will learn to create arbitrary streaming sinks.
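
To give a flavour of what these topics look like in code, here is a minimal, illustrative PySpark sketch (not taken from the course material) that reads a Kafka topic, applies a watermark, and computes windowed counts. The broker address, topic name, and window sizes are placeholder assumptions.

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, window

spark = SparkSession.builder.appName("StreamingSketch").getOrCreate()

# Read a Kafka topic as a streaming DataFrame.
# Assumes the spark-sql-kafka package is on the classpath and a broker runs
# at localhost:9092 with a topic named "events" (both are placeholders).
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")
          .option("subscribe", "events")
          .load())

# Kafka delivers the value as binary; cast it to a string for this sketch.
parsed = events.selectExpr("CAST(value AS STRING) AS value", "timestamp")

# The watermark bounds how late data may arrive before window state is dropped.
counts = (parsed
          .withWatermark("timestamp", "10 minutes")
          .groupBy(window(col("timestamp"), "5 minutes"))
          .count())

# Write the windowed counts to the console; "update" emits only changed windows.
query = (counts.writeStream
         .outputMode("update")
         .format("console")
         .start())

query.awaitTermination()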

By the end of this course, you will be able to create real-time stream processing applications using Apache Spark.  All the resources for the course are available at https://github.com/PacktPublishing/Real-time-stream-processing-using-Apache-Spark-3-for-Python-developers

Who's this course for?

This course is designed for software engineers and architects who want to design and develop big data engineering projects using Apache Spark.
It is also suitable for programmers and developers who aspire to grow into data engineering using Apache Spark.
For this course, you need to know Spark fundamentals and have exposure to the Spark DataFrame APIs. You should also know Kafka fundamentals, have a working knowledge of Apache Kafka, and be comfortable programming in Python.

What you'll learn

  • Explore stateless and stateful streaming transformations.
  • Implement windowing aggregates using Spark Structured Streaming.
  • Learn watermarking and state cleanup.
  • Implement streaming joins and aggregations.
  • Handle memory problems with streaming joins.
  • Learn to create arbitrary streaming sinks (a short foreachBatch sketch follows this list).
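
As a taste of the arbitrary-sink topic, the following illustrative sketch uses Structured Streaming's foreachBatch hook, which hands each micro-batch to ordinary batch code. The rate source, output path, and checkpoint location are placeholder assumptions, not course material.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("ForeachBatchSketch").getOrCreate()

# The built-in "rate" source generates rows with a timestamp and a value column.
stream = spark.readStream.format("rate").option("rowsPerSecond", 5).load()

def write_batch(batch_df, batch_id):
    # batch_df is a regular DataFrame, so any batch sink or custom logic works here.
    batch_df.write.mode("append").parquet("/tmp/stream_output")

# foreachBatch turns the function above into a custom streaming sink.
query = (stream.writeStream
         .foreachBatch(write_batch)
         .option("checkpointLocation", "/tmp/stream_checkpoint")
         .start())

query.awaitTermination()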

Key Features

  • Learn real-time stream processing concepts.
  • Understand the Spark Structured Streaming APIs and architecture.
  • Work with file streams and the Kafka source, and integrate Spark with Kafka (a file-stream sketch follows this list).
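
Below is a small illustrative sketch of the file-stream idea: Spark picks up new CSV files dropped into a directory and streams their rows to the console. The schema and directory path are assumptions made for this example.

from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("FileStreamSketch").getOrCreate()

# Streaming file sources require an explicit schema.
schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
])

# Any new CSV file that lands in the directory becomes part of the stream.
orders = (spark.readStream
          .format("csv")
          .schema(schema)
          .option("header", "true")
          .load("/tmp/incoming_orders"))

# Append mode emits each newly arrived record once.
query = (orders.writeStream
         .format("console")
         .outputMode("append")
         .start())

query.awaitTermination()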

About the Author

Scholar Nest

Scholar Nest is a small team of people passionate about helping others learn and grow in their careers by bridging the gap between their existing and required skills. Together, they have over 40 years of experience in IT as developers, architects, consultants, trainers, and mentors. They have worked with international software services organizations on various data-centric and big data projects. The team firmly believes in lifelong continuous learning and skill development. To popularize the importance of continuous learning, they started publishing free training videos on their YouTube channel and created a journal of their learning under the Learning Journal banner.