When you have a Spark Streaming job that reads from Kafka, it creates one Kafka Consumer per partition. If your Kafka topic has 100 partitions, Spark will create 100 tasks (one for each partition) to consume the data. These tasks are distributed across the available cores/executors in your Spark cluster.
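Here is a minimal sketch of that one-task-per-partition mapping using Structured Streaming's Kafka source (the same mapping applies to the DStream direct stream); the broker address and topic name are placeholders, not values from your setup:

```python
# Sketch only: broker and topic names below are illustrative placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-read-sketch").getOrCreate()

# Spark plans one input partition (and therefore one task) per Kafka
# partition of the subscribed topic; no manual consumer threads are needed.
df = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "my-topic")                    # placeholder topic
    .load()
)

query = (
    df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
    .writeStream
    .format("console")
    .start()
)
query.awaitTermination()
```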
In your case, you have 2 executors with 4 cores each, giving you a total of 8 cores, so at most 8 tasks run at the same time. If there are more tasks than cores (as in your case with 100 tasks and 8 cores), the remaining tasks wait and are scheduled as cores free up, so the 100 tasks run in roughly ceil(100 / 8) = 13 waves of up to 8 concurrent tasks.
So, in essence, Spark reads from multiple Kafka partitions in parallel, but the level of parallelism is constrained by the number of available cores. You don't need to manually spin up multiple threads for the Kafka consumer, as Spark handles this for you.
If you want to increase the level of parallelism, you can increase the number of executors or the cores per executor in your Spark cluster (up to the number of Kafka partitions; beyond that, extra cores sit idle during the read stage).
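As a sketch, you could request more resources through Spark configuration; the numbers below are illustrative, and in practice these settings are usually passed at submit time (e.g. via spark-submit) rather than in code:

```python
# Illustrative resource settings; actual values depend on your cluster.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("more-parallelism-sketch")
    # 4 executors x 4 cores = 16 tasks running concurrently, so a
    # 100-partition topic is processed in fewer waves per batch.
    .config("spark.executor.instances", "4")
    .config("spark.executor.cores", "4")
    .getOrCreate()
)
```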
I hope this helps! Let me know if you have any other questions.