Parallel Kafka consumer in Spark Structured Streaming

subham0611
New Contributor II

Hi,

I have a Spark Structured Streaming job that reads from Kafka, processes the data, and writes to Delta Lake.

Number of Kafka partitions: 100

Number of executors: 2 (4 cores each)

So we have 8 cores in total reading from the 100 partitions of the topic. I wanted to understand: does Spark internally spin up multiple threads to read from multiple partitions in parallel? If not, is there any way to spin up multiple threads for the Kafka consumer?

1 REPLY

Kaniz
Community Manager

Hi @subham0611, in Spark Structured Streaming you do not control the number of consumer threads directly. Parallelism is instead determined by the number of partitions in the Kafka topic: each partition is consumed by a single Spark task.

When a Spark Structured Streaming job reads from Kafka, it creates one Kafka consumer per topic partition. Since your topic has 100 partitions, Spark plans 100 tasks (one per partition) to consume the data, and those tasks are distributed across the available executor cores in your cluster.
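
To make that concrete, here is a minimal sketch of the pattern under discussion. The broker address, topic name, and paths are placeholders, not values from this thread:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-to-delta").getOrCreate()

# One Kafka consumer is created per topic partition, so a 100-partition
# topic produces 100 tasks per micro-batch for this source.
raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")  # placeholder
    .option("subscribe", "events")                      # placeholder topic
    .load()
)

query = (
    raw.selectExpr("CAST(key AS STRING) AS key", "CAST(value AS STRING) AS value")
    .writeStream
    .format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/events")  # placeholder path
    .start("/tmp/delta/events")                               # placeholder path
)
query.awaitTermination()
```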

In your case, 2 executors with 4 cores each give you 8 cores on which tasks can run. When there are more tasks than cores, as here with 100 tasks and 8 cores, the tasks are queued and executed in waves on the available cores: roughly ceil(100 / 8) = 13 waves per micro-batch.

So, in essence, Spark does read from multiple Kafka partitions in parallel, but the degree of parallelism is capped by the number of available cores. You don't need to manually spin up threads for the Kafka consumer; Spark handles that for you.

If you want to increase the level of parallelism, you can increase the number of cores or executors in your Spark cluster. Cores beyond the partition count (100 here) would sit idle for this stage, since each Kafka partition is read by at most one task at a time.
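
As a hypothetical illustration (the configuration values below are assumptions, not settings from this thread), raising the executor or core count lets more of the 100 tasks run in each wave:

```python
from pyspark.sql import SparkSession

# Hypothetical sizing, not from the original thread. With 100 Kafka
# partitions there are 100 tasks per micro-batch: 8 cores finish them in
# ceil(100 / 8) = 13 waves, while 25 cores finish them in 4 waves.
spark = (
    SparkSession.builder
    .appName("kafka-to-delta")
    # On a spark-submit deployment these would typically be submit-time
    # settings; on Databricks, executor count and size come from the
    # cluster configuration instead.
    .config("spark.executor.instances", "5")  # assumed value
    .config("spark.executor.cores", "5")      # assumed value
    .getOrCreate()
)
```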

I hope this helps! Let me know if you have any other questions. 😊

 