
Parallel Kafka consumers in Spark Structured Streaming

subham0611
New Contributor II

Hi,

I have a Spark Structured Streaming job that reads from Kafka, processes the data, and writes to Delta Lake.

Number of Kafka partitions: 100

Number of executors: 2 (4 cores each)

So we have 8 cores in total reading from the 100 partitions of a topic. I wanted to understand whether Spark internally spins up multiple threads to read from multiple partitions in parallel. If not, is there any way to spin up multiple threads for the Kafka consumer?

1 REPLY

Kaniz_Fatma
Community Manager

Hi @subham0611, in Spark Structured Streaming the number of consumer threads is not something you control directly. Instead, parallelism is determined by the number of partitions in the Kafka topic: each partition is consumed by a single Spark task.

When a Structured Streaming job reads from Kafka, Spark creates one Kafka consumer per partition. If your topic has 100 partitions, Spark plans 100 tasks (one per partition) to consume the data, and those tasks are distributed across the available cores of your executors, as sketched below.
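For reference, here is a minimal sketch of the kind of job described in the question. The broker address, topic name, and paths are placeholders, and it assumes the spark-sql-kafka connector is on the classpath (it is bundled on Databricks clusters):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-to-delta").getOrCreate()

# The Kafka source plans one Spark task per topic partition, so a
# 100-partition topic yields 100 read tasks per micro-batch.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")  # placeholder broker
    .option("subscribe", "events")                      # placeholder topic
    .load()
)

query = (
    events.writeStream
    .format("delta")
    .option("checkpointLocation", "/checkpoints/kafka_to_delta")  # placeholder path
    .start("/delta/events")                                       # placeholder path
)
```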

In your case, you have 2 executors with 4 cores each, giving you a total of 8 cores, so at most 8 of those tasks run at any one time. With 100 tasks and 8 cores, the remaining tasks are queued and scheduled onto cores as they become free, so each micro-batch completes in several waves of tasks rather than all at once.
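To make the arithmetic concrete, using the numbers from the question:

```python
import math

kafka_partitions = 100   # one Spark task per Kafka partition
total_cores = 2 * 4      # 2 executors x 4 cores each

# At most 8 tasks run concurrently; the rest wait in the scheduler queue,
# so each micro-batch finishes in roughly this many "waves" of tasks.
waves = math.ceil(kafka_partitions / total_cores)
print(waves)  # 13
```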

So, in essence, Spark reads from multiple Kafka partitions in parallel, but the level of parallelism is constrained by the number of available cores. You don’t need to manually spin up multiple threads for the Kafka consumer, as Spark handles this for you.

If you want to increase the level of parallelism, you can increase the number of cores or executors in your Spark cluster.
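As a hedged sketch with illustrative values only: on a self-managed cluster you could request this through standard Spark configs at session creation, while on Databricks the executor and core counts normally come from the cluster's worker configuration instead:

```python
from pyspark.sql import SparkSession

# Illustrative sizing only: 8 executors x 4 cores = 32 concurrent tasks,
# so 100 read tasks would complete in ~4 waves instead of ~13.
spark = (
    SparkSession.builder
    .appName("kafka-to-delta")
    .config("spark.executor.instances", "8")  # standard Spark conf
    .config("spark.executor.cores", "4")      # standard Spark conf
    .getOrCreate()
)
```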

I hope this helps! Let me know if you have any other questions. 😊

 
