<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>How to run sparkStream for earlier (not future messages) in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/how-to-run-sparkstream-for-earlier-not-future-messages/m-p/19620#M13174</link>
    <description>&lt;P&gt;Hi, I'm listening to a Kinesis stream. I don't need the data in real time, so I could run the job on an hourly basis, looking to achieve two things:&lt;/P&gt;&lt;P&gt;-Save money by not having a cluster up 24/7&lt;/P&gt;&lt;P&gt;-Have bigger files saved for each read&lt;/P&gt;&lt;P&gt;The stream is constant, so I can't use once=True because it never ends; that is what I use to read from buckets. The idea is that the job reads up to the last data available at the moment it started and then gracefully exits.&lt;/P&gt;&lt;P&gt;Can this be done?&lt;/P&gt;&lt;P&gt;Thanks!&lt;/P&gt;</description>
    <pubDate>Wed, 25 May 2022 21:33:17 GMT</pubDate>
    <dc:creator>alejandrofm</dc:creator>
    <dc:date>2022-05-25T21:33:17Z</dc:date>
    <item>
      <title>How to run sparkStream for earlier (not future messages)</title>
      <link>https://community.databricks.com/t5/data-engineering/how-to-run-sparkstream-for-earlier-not-future-messages/m-p/19620#M13174</link>
      <description>&lt;P&gt;Hi, I'm listening to a Kinesis stream. I don't need the data in real time, so I could run the job on an hourly basis, looking to achieve two things:&lt;/P&gt;&lt;P&gt;-Save money by not having a cluster up 24/7&lt;/P&gt;&lt;P&gt;-Have bigger files saved for each read&lt;/P&gt;&lt;P&gt;The stream is constant, so I can't use once=True because it never ends; that is what I use to read from buckets. The idea is that the job reads up to the last data available at the moment it started and then gracefully exits.&lt;/P&gt;&lt;P&gt;Can this be done?&lt;/P&gt;&lt;P&gt;Thanks!&lt;/P&gt;</description>
      <pubDate>Wed, 25 May 2022 21:33:17 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/how-to-run-sparkstream-for-earlier-not-future-messages/m-p/19620#M13174</guid>
      <dc:creator>alejandrofm</dc:creator>
      <dc:date>2022-05-25T21:33:17Z</dc:date>
    </item>
    <item>
      <title>Re: How to run sparkStream for earlier (not future messages)</title>
      <link>https://community.databricks.com/t5/data-engineering/how-to-run-sparkstream-for-earlier-not-future-messages/m-p/19622#M13176</link>
      <description>&lt;P&gt;Hi, thanks for the link; the solution in that thread is for Kafka, and it seems to be a different issue.&lt;/P&gt;&lt;P&gt;I need to stop the process when I reach the events that were available at the time I started listening, so the process will complete NOT when the queue is empty, but when it reaches a specific point in time on the queue.&lt;/P&gt;&lt;P&gt;Thanks!&lt;/P&gt;</description>
      <pubDate>Thu, 26 May 2022 18:15:00 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/how-to-run-sparkstream-for-earlier-not-future-messages/m-p/19622#M13176</guid>
      <dc:creator>alejandrofm</dc:creator>
      <dc:date>2022-05-26T18:15:00Z</dc:date>
    </item>
  </channel>
</rss>

