<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: High cost of storage when using structured streaming in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/high-cost-of-storage-when-using-structured-streaming/m-p/9708#M5027</link>
    <description>&lt;P&gt;&lt;A href="https://community.databricks.com/s/profile/0053f000000WWwvAAG" alt="https://community.databricks.com/s/profile/0053f000000WWwvAAG" target="_blank"&gt;Debayan&lt;/A&gt;, thanks for the recommendation. I read the article, but it doesn't answer my question.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I'm just learning how to work with Databricks; perhaps these costs are simply normal for structured stream processing?&lt;/P&gt;</description>
    <pubDate>Fri, 10 Feb 2023 18:42:10 GMT</pubDate>
    <dc:creator>lnights</dc:creator>
    <dc:date>2023-02-10T18:42:10Z</dc:date>
    <item>
      <title>High cost of storage when using structured streaming</title>
      <link>https://community.databricks.com/t5/data-engineering/high-cost-of-storage-when-using-structured-streaming/m-p/9706#M5025</link>
      <description>&lt;P&gt;Hi there,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I read data from Azure Event Hubs and, after transforming the data, write the DataFrame back to Event Hubs (I use &lt;A href="https://github.com/Azure/azure-event-hubs-spark/blob/master/docs/PySpark/structured-streaming-pyspark.md" alt="https://github.com/Azure/azure-event-hubs-spark/blob/master/docs/PySpark/structured-streaming-pyspark.md" target="_blank"&gt;this connector&lt;/A&gt; for that):&lt;/P&gt;&lt;PRE&gt;&lt;CODE&gt;# read data
df = (spark.readStream 
         .format("eventhubs") 
         .options(**ehConf) 
         .load()
      )
&amp;nbsp;
# some data manipulation
&amp;nbsp;
# write data
ds = df \
  .select("body", "partitionKey") \
  .writeStream \
  .format("eventhubs") \
  .options(**output_ehConf) \
  .option("checkpointLocation", "/checkpoin/eventhub-to-eventhub/savestate.txt") \
  .trigger(processingTime='1 seconds') \
  .start()&lt;/CODE&gt;&lt;/PRE&gt;&lt;P&gt;In this case I get high storage costs, roughly four times my compute costs. The cost comes from a large number of transactions against the storage account:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper" image-alt="transactions in azure storage"&gt;&lt;img src="https://community.databricks.com/t5/image/serverpage/image-id/698i372DC86F458B4367/image-size/large?v=v2&amp;amp;px=999" role="button" title="transactions in azure storage" alt="transactions in azure storage" /&gt;&lt;/span&gt;I tried to reduce the number of transactions by setting a processingTime trigger, but it didn't make a significant difference (low latency is critical for me).&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Question: am I using Structured Streaming correctly, and if so, how can I optimize the storage costs?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thank you for your time!&lt;/P&gt;</description>
      <pubDate>Wed, 08 Feb 2023 22:12:28 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/high-cost-of-storage-when-using-structured-streaming/m-p/9706#M5025</guid>
      <dc:creator>lnights</dc:creator>
      <dc:date>2023-02-08T22:12:28Z</dc:date>
    </item>
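The processingTime trade-off in the post above can be sketched with back-of-the-envelope arithmetic: every micro-batch commits offset and commit metadata to the checkpoint location, so storage transactions scale with batches per hour. The files-per-batch figure below is an illustrative assumption, not a measured value.

```python
# Rough model of checkpoint write traffic for a processing-time trigger.
# files_per_batch is an illustrative assumption (e.g. one offsets file and
# one commits file per micro-batch); real counts depend on the query.

def batches_per_hour(trigger_seconds: float) -> int:
    """Number of micro-batches triggered per hour at a fixed trigger interval."""
    return int(3600 // trigger_seconds)

def checkpoint_writes_per_hour(trigger_seconds: float, files_per_batch: int = 2) -> int:
    """Estimated storage write transactions per hour from checkpointing alone."""
    return batches_per_hour(trigger_seconds) * files_per_batch

# A 1-second trigger commits 3600 batches/hour; a 60-second trigger only 60,
# so each extra second of acceptable latency directly cuts transaction volume.
```

With the 1-second trigger from the post, that is 3600 micro-batches per hour, each paying for several storage transactions, which is consistent with the transaction chart above.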
    <item>
      <title>Re: High cost of storage when using structured streaming</title>
      <link>https://community.databricks.com/t5/data-engineering/high-cost-of-storage-when-using-structured-streaming/m-p/9708#M5027</link>
      <description>&lt;P&gt;&lt;A href="https://community.databricks.com/s/profile/0053f000000WWwvAAG" alt="https://community.databricks.com/s/profile/0053f000000WWwvAAG" target="_blank"&gt;Debayan&lt;/A&gt;, thanks for the recommendation. I read the article, but it doesn't answer my question.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I'm just learning how to work with Databricks; perhaps these costs are simply normal for structured stream processing?&lt;/P&gt;</description>
      <pubDate>Fri, 10 Feb 2023 18:42:10 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/high-cost-of-storage-when-using-structured-streaming/m-p/9708#M5027</guid>
      <dc:creator>lnights</dc:creator>
      <dc:date>2023-02-10T18:42:10Z</dc:date>
    </item>
    <item>
      <title>Re: High cost of storage when using structured streaming</title>
      <link>https://community.databricks.com/t5/data-engineering/high-cost-of-storage-when-using-structured-streaming/m-p/9709#M5028</link>
      <description>&lt;P&gt;Hi @Serhii Dovhanich​&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Hope all is well! Just wanted to check in: were you able to resolve your issue? If so, would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;We'd love to hear from you.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thanks!&lt;/P&gt;</description>
      <pubDate>Mon, 13 Feb 2023 06:50:00 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/high-cost-of-storage-when-using-structured-streaming/m-p/9709#M5028</guid>
      <dc:creator>Anonymous</dc:creator>
      <dc:date>2023-02-13T06:50:00Z</dc:date>
    </item>
    <item>
      <title>Re: High cost of storage when using structured streaming</title>
      <link>https://community.databricks.com/t5/data-engineering/high-cost-of-storage-when-using-structured-streaming/m-p/9707#M5026</link>
      <description>&lt;P&gt;Hi, could you please refer to &lt;A href="https://www.databricks.com/blog/2022/10/18/best-practices-cost-management-databricks.html" alt="https://www.databricks.com/blog/2022/10/18/best-practices-cost-management-databricks.html" target="_blank"&gt;https://www.databricks.com/blog/2022/10/18/best-practices-cost-management-databricks.html&lt;/A&gt; and let us know if it helps?&lt;/P&gt;</description>
      <pubDate>Fri, 10 Feb 2023 05:19:06 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/high-cost-of-storage-when-using-structured-streaming/m-p/9707#M5026</guid>
      <dc:creator>Debayan</dc:creator>
      <dc:date>2023-02-10T05:19:06Z</dc:date>
    </item>
    <item>
      <title>Re: High cost of storage when using structured streaming</title>
      <link>https://community.databricks.com/t5/data-engineering/high-cost-of-storage-when-using-structured-streaming/m-p/42985#M27459</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;I ran into the same problem today. The core of my problem was an aggregation and a join: my stream does not generate massive amounts of data, but it still used 200 shuffle partitions. After scaling this down to 2 (you have to clear the checkpoint for the change to take effect), my transactions went down significantly. Hope this helps!&lt;/P&gt;</description>
      <pubDate>Thu, 31 Aug 2023 13:22:28 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/high-cost-of-storage-when-using-structured-streaming/m-p/42985#M27459</guid>
      <dc:creator>CKBertrams</dc:creator>
      <dc:date>2023-08-31T13:22:28Z</dc:date>
    </item>
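The effect CKBertrams describes can be sketched as simple arithmetic: for a stateful query, each shuffle partition keeps its own state-store files, so every micro-batch touches roughly one file set per partition. The one-file-per-partition default below is an illustrative assumption; the real count depends on the state-store provider.

```python
# Why shuffle partitions drive storage transactions for stateful streams:
# each partition maintains separate state-store files, so per-batch I/O
# scales roughly linearly with spark.sql.shuffle.partitions.

def state_files_touched(shuffle_partitions: int, files_per_partition: int = 1) -> int:
    """Rough number of state files read/written per micro-batch."""
    return shuffle_partitions * files_per_partition

# Spark's default of 200 partitions vs. the 2 suggested above:
# a 100x reduction in files touched per micro-batch.
```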
    <item>
      <title>Re: High cost of storage when using structured streaming</title>
      <link>https://community.databricks.com/t5/data-engineering/high-cost-of-storage-when-using-structured-streaming/m-p/42993#M27460</link>
      <description>&lt;P&gt;I had the same problem when starting with Databricks. As outlined above, it is the shuffle partitions setting that results in a number of files equal to the number of partitions. Thus, you are writing a low data volume but get taxed on the number of write (and subsequent sequential read) operations. Lowering the number of shuffle partitions helps solve this. On top of that, consider disabling&amp;nbsp;&lt;SPAN&gt;spark.sql.streaming.noDataMicroBatches.enabled&amp;nbsp;so that empty micro-batches are skipped.&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 31 Aug 2023 14:02:55 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/high-cost-of-storage-when-using-structured-streaming/m-p/42993#M27460</guid>
      <dc:creator>PetePP</dc:creator>
      <dc:date>2023-08-31T14:02:55Z</dc:date>
    </item>
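The two settings recommended in the replies above can be collected in one place. A minimal sketch, assuming a standard PySpark session; the values are illustrative, and a changed spark.sql.shuffle.partitions only takes effect on a fresh checkpoint:

```python
# Settings discussed in this thread, as a plain dict that can be passed to
# SparkSession.builder.config(...) or applied with spark.conf.set(...).

streaming_cost_conf = {
    # Match the stream's real parallelism instead of the 200 default.
    "spark.sql.shuffle.partitions": "2",
    # Skip micro-batches that would process no data.
    "spark.sql.streaming.noDataMicroBatches.enabled": "false",
}

# e.g.:  for k, v in streaming_cost_conf.items(): spark.conf.set(k, v)
```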
  </channel>
</rss>

