Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

High cost of storage when using structured streaming

lnights
New Contributor II

Hi there,

I read data from Azure Event Hubs and, after some data manipulation, write the DataFrame back to Event Hubs (using the Azure Event Hubs Spark connector for both):

#read data
df = (spark.readStream 
         .format("eventhubs") 
         .options(**ehConf) 
         .load()
      )
 
#some data manipulation 
 
#write data
ds = df \
  .select("body", "partitionKey") \
  .writeStream \
  .format("eventhubs") \
  .options(**output_ehConf) \
  .option("checkpointLocation", "/checkpoin/eventhub-to-eventhub/savestate.txt") \
  .trigger(processingTime='1 seconds') \
  .start()

In this case I get high storage costs, roughly four times my compute costs. The expense is caused by a large number of transactions against the storage account:

[Image: transactions in Azure Storage]

I tried to reduce the number of transactions by using a processingTime trigger, but it didn't bring any significant result (for me, a minimal delay is critical).
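For example, increasing the trigger interval looks like this (one minute is only an illustrative value here, and is more latency than I can accept):

#a longer trigger interval means fewer micro-batches, hence fewer
#checkpoint/state writes to storage, at the cost of higher latency
ds = df \
  .select("body", "partitionKey") \
  .writeStream \
  .format("eventhubs") \
  .options(**output_ehConf) \
  .option("checkpointLocation", "/checkpoin/eventhub-to-eventhub/savestate.txt") \
  .trigger(processingTime='1 minute') \
  .start()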

Question: am I using structured streaming correctly, and if so, how can I optimize storage costs? 

Thank you for your time!

5 REPLIES

Debayan
Databricks Employee

Hi, could you please refer to https://www.databricks.com/blog/2022/10/18/best-practices-cost-management-databricks.html and let us know if this helps?

lnights
New Contributor II

Debayan, thanks for the recommendation. I read the article, but it does not answer my question.

I'm just learning how to work with Databricks, and perhaps these costs are normal for structured stream processing? 

Anonymous
Not applicable

Hi @Serhii Dovhanich

Hope all is well! Just wanted to check in to see whether you were able to resolve your issue. If so, would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help.

We'd love to hear from you.

Thanks!

CKBertrams
New Contributor III

Hi,

I ran into the same problem today. The core of my problem was an aggregation and a join: my stream does not generate massive amounts of data, but it still used 200 shuffle partitions. After scaling this down to 2 (you have to clear the checkpoints for the change to take effect), my transaction count went down significantly. Hope this helps!
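A minimal sketch of what that change looks like (2 is just an example value for a small stream; pick what fits your data volume):

#lower the shuffle partition count before (re)starting the query; Spark's default is 200
spark.conf.set("spark.sql.shuffle.partitions", "2")
#the old value is baked into the checkpoint, so start the query against
#a cleared or new checkpoint location for the new setting to take effect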

PetePP
New Contributor II

I had the same problem when starting with Databricks. As outlined above, it is the shuffle-partitions setting that results in a number of files equal to the number of partitions. You are writing a low data volume but get taxed on the number of write (and subsequent sequential read) operations. Lowering the number of shuffle partitions helps solve this. On top of that, consider setting spark.sql.streaming.noDataMicroBatches.enabled to false so that empty micro-batches are skipped.
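Something like this, assuming that skipping no-data micro-batches is acceptable for your query (in stateful streams it can delay watermark-driven state cleanup):

#skip micro-batches that contain no new data, reducing checkpoint writes
spark.conf.set("spark.sql.streaming.noDataMicroBatches.enabled", "false")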
