Performance issue when using structured streaming

pvaz
New Contributor II

Hi Databricks community! Let me first apologize for the long post.

I'm implementing a system in Databricks to read from a Kafka stream into the bronze layer of a Delta table. The idea is to do some operations on the data coming from Kafka, mainly filtering and parsing its content into a Delta table in Unity Catalog. To do that I'm using Spark Structured Streaming.
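Roughly, the pipeline looks like this (a simplified sketch of what I'm doing; the broker, topic, schema, filter, and table names here are placeholders, not my real ones):

from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType

# Placeholder schema for the JSON payload carried in the Kafka value
payload_schema = StructType([
    StructField("event_type", StringType()),
    StructField("body", StringType()),
])

raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "<broker>:9092")  # placeholder
    .option("subscribe", "<topic>")                      # placeholder
    .load()
)

parsed = (
    raw
    .select(F.from_json(F.col("value").cast("string"), payload_schema).alias("j"))
    .select("j.*")
    .filter(F.col("event_type") == "<wanted type>")      # placeholder filter
)

query = (
    parsed.writeStream
    .trigger(processingTime="1 minute")
    .option("checkpointLocation", "<checkpoint path>")   # placeholder
    .toTable("<catalog>.<schema>.<bronze table>")        # placeholder UC table
)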

The problem is that I feel I'm doing something wrong, because the number of messages I can process per second seems too low to me. Let me get into the details.

I have a Kafka topic that receives a baseline of 300k messages per minute (~6 MB), with peaks up to 10M messages per minute. This topic has 8 partitions.

Then I have a job compute cluster with the following configuration (a rough spec dict follows the list):
- Databricks Runtime 15.4 LTS
- Worker type: Standard_F8, min workers 1, max workers 4
- Driver type: Standard_F8
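For reference, this is roughly the cluster spec as you'd pass it to the Clusters/Jobs API (the exact spark_version and node_type_id strings are my best guess for this runtime and cloud, so treat them as assumptions):

cluster_spec = {
    "spark_version": "15.4.x-scala2.12",   # assumed key for DBR 15.4 LTS
    "node_type_id": "Standard_F8",         # assumed Azure node type string
    "driver_node_type_id": "Standard_F8",
    "autoscale": {"min_workers": 1, "max_workers": 4},
}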

On the cluster I run only one task, which takes the data from the Kafka cluster, does some filtering operations, including one from_json operation, and stores the data in a Unity Catalog table. The structured stream is set to trigger every minute and has the following configuration:

"spark.sql.streaming.noDataMicroBatches.enabled": "false",
"spark.sql.shuffle.partitions": "64",
"spark.databricks.streaming.statefulOperator.asyncCheckpoint.enabled": "true",
"spark.sql.streaming.stateStore.providerClass": "com.databricks.sql.streaming.state.RocksDBStateStoreProvider",
 "spark.sql.streaming.statefulOperator.stateRebalancing.enabled": "true",
 "spark.sql.streaming.stateStore.rocksdb.changelogCheckpointing.enabled": "true"
"
maxOffsetsPerTrigger": 10000000
All the other properties are left at their default values.
 
I have set maxOffsetsPerTrigger in order to prevent out-of-memory issues in the cluster.
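To be explicit about where each setting lives: the spark.* properties are cluster/session confs, while maxOffsetsPerTrigger is an option on the Kafka reader rather than a Spark conf. A sketch of how I apply them (same placeholders as above):

spark.conf.set("spark.sql.streaming.noDataMicroBatches.enabled", "false")
spark.conf.set("spark.sql.shuffle.partitions", "64")
# ... remaining spark.* properties set the same way ...

raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "<broker>:9092")  # placeholder
    .option("subscribe", "<topic>")                      # placeholder
    .option("maxOffsetsPerTrigger", 10000000)            # cap records per micro-batch to avoid OOM
    .load()
)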

Right now, with those configurations, I can process at most about 4M messages per minute. This means the stream that should run every minute takes more than two minutes to complete a batch. What is strange is that only two nodes of the job compute cluster are active (32 GB, 16 cores in total) with CPU at about 10%.

Although this is enough during normal operation, I have a lot of unprocessed messages in the backlog that I would like to process faster. Does this throughput seem reasonable to you? It feels like I only need to process a small amount of data, and even that is not working. Is there anything I can do to improve the performance of this Kafka consumer? Thank you for your help.