<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Should/Can I use spark streaming for Batch workloads? in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/should-can-i-use-spark-streaming-for-batch-workloads/m-p/21302#M14502</link>
    <description>&lt;P&gt;The streaming checkpoint mechanism is independent of the trigger type. Checkpointing works as follows: an offset file is created when a batch begins processing, and once the batch completes, a commit file for that batch is created in the checkpoint directory. Irrespective of the trigger type, whenever a new batch starts it first reconciles the offset and commit files in the checkpoint directory to identify where it has to resume.&lt;/P&gt;&lt;P&gt;These files are human-readable and can be inspected in the checkpoint directory.&lt;/P&gt;</description>
    <pubDate>Thu, 24 Jun 2021 13:52:51 GMT</pubDate>
    <dc:creator>brickster_2018</dc:creator>
    <dc:date>2021-06-24T13:52:51Z</dc:date>
    <item>
      <title>Should/Can I use spark streaming for Batch workloads?</title>
      <link>https://community.databricks.com/t5/data-engineering/should-can-i-use-spark-streaming-for-batch-workloads/m-p/21300#M14500</link>
      <description>&lt;P&gt;It's preferable to use Spark Streaming (with Delta) for batch workloads rather than a regular batch job. With the Trigger.Once trigger, whenever the streaming job is started it processes whatever is available in the source (Kafka/Kinesis/file system) and keeps track of its progress in the streaming checkpoint location. So after it succeeds, the next run leverages the checkpoint to know where to start from.&lt;/P&gt;</description>
      <pubDate>Wed, 23 Jun 2021 19:50:12 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/should-can-i-use-spark-streaming-for-batch-workloads/m-p/21300#M14500</guid>
      <dc:creator>User16783855534</dc:creator>
      <dc:date>2021-06-23T19:50:12Z</dc:date>
    </item>
    <item>
      <title>Re: Should/Can I use spark streaming for Batch workloads?</title>
      <link>https://community.databricks.com/t5/data-engineering/should-can-i-use-spark-streaming-for-batch-workloads/m-p/21301#M14501</link>
      <description>&lt;P&gt;It's preferable to use Spark Streaming (with Delta) for batch workloads rather than a regular batch job. With the Trigger.Once trigger, whenever the streaming job is started it processes whatever is available in the source (Kafka/Kinesis/file system) and keeps track of its progress in the streaming checkpoint location. So after it succeeds, the next run leverages the checkpoint to know where to start from.&lt;/P&gt;</description>
      <pubDate>Thu, 24 Jun 2021 01:05:03 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/should-can-i-use-spark-streaming-for-batch-workloads/m-p/21301#M14501</guid>
      <dc:creator>User16826994223</dc:creator>
      <dc:date>2021-06-24T01:05:03Z</dc:date>
    </item>
    <item>
      <title>Re: Should/Can I use spark streaming for Batch workloads?</title>
      <link>https://community.databricks.com/t5/data-engineering/should-can-i-use-spark-streaming-for-batch-workloads/m-p/21302#M14502</link>
      <description>&lt;P&gt;The streaming checkpoint mechanism is independent of the trigger type. Checkpointing works as follows: an offset file is created when a batch begins processing, and once the batch completes, a commit file for that batch is created in the checkpoint directory. Irrespective of the trigger type, whenever a new batch starts it first reconciles the offset and commit files in the checkpoint directory to identify where it has to resume.&lt;/P&gt;&lt;P&gt;These files are human-readable and can be inspected in the checkpoint directory.&lt;/P&gt;</description>
      <pubDate>Thu, 24 Jun 2021 13:52:51 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/should-can-i-use-spark-streaming-for-batch-workloads/m-p/21302#M14502</guid>
      <dc:creator>brickster_2018</dc:creator>
      <dc:date>2021-06-24T13:52:51Z</dc:date>
    </item>
  </channel>
</rss>