<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: append using foreach batch autoloader in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/append-using-foreach-batch-autoloader/m-p/132772#M49622</link>
    <description>&lt;P&gt;Hi &lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/65591"&gt;@seefoods&lt;/a&gt;, you've got a line in your code:&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;table_name : str = "test"&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;Surely hard coding that in is going to cause some issues &lt;span class="lia-unicode-emoji" title=":grinning_face:"&gt;😀&lt;/span&gt;.&lt;/P&gt;&lt;P&gt;All the best,&lt;/P&gt;&lt;P&gt;BS&lt;/P&gt;</description>
    <pubDate>Mon, 22 Sep 2025 22:56:52 GMT</pubDate>
    <dc:creator>BS_THE_ANALYST</dc:creator>
    <dc:date>2025-09-22T22:56:52Z</dc:date>
    <item>
      <title>append using foreach batch autoloader</title>
      <link>https://community.databricks.com/t5/data-engineering/append-using-foreach-batch-autoloader/m-p/132708#M49607</link>
      <description>&lt;P&gt;Hello guys,&amp;nbsp;&lt;BR /&gt;&lt;BR /&gt;When I append, I get this error. Does anyone know how to fix it?&amp;nbsp;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;raise converted from None pyspark.errors.exceptions.captured.AnalysisException: [TABLE_OR_VIEW_ALREADY_EXISTS] Cannot create table or view `s_test` because it already exists. Choose a different name, drop the existing object, add the IF NOT EXISTS clause to tolerate pre-existing objects, add the OR REPLACE clause to replace the existing materialized view, or add the OR REFRESH clause to refresh the existing streaming table. SQLSTATE: 42P07 SQLSTATE: 39000 SQLSTATE: XXKST&lt;BR /&gt;&lt;BR /&gt;Cordially,&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 22 Sep 2025 15:30:27 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/append-using-foreach-batch-autoloader/m-p/132708#M49607</guid>
      <dc:creator>seefoods</dc:creator>
      <dc:date>2025-09-22T15:30:27Z</dc:date>
    </item>
    <item>
      <title>Re: append using foreach batch autoloader</title>
      <link>https://community.databricks.com/t5/data-engineering/append-using-foreach-batch-autoloader/m-p/132709#M49608</link>
      <description>&lt;P&gt;&lt;STRONG&gt;Root Cause:&lt;/STRONG&gt;&lt;BR /&gt;The table or view s_test already exists in the catalog.&lt;/P&gt;&lt;P&gt;The code tries a CREATE TABLE without IF NOT EXISTS and without dropping the existing table first.&lt;/P&gt;&lt;P&gt;The underlying Spark or SQL engine enforces uniqueness of table names and raises an error if the same name is reused improperly.&lt;/P&gt;&lt;P&gt;This usually happens when CREATE TABLE commands are repeated in code or a pipeline without handling existence.&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Solution:&lt;/STRONG&gt;&lt;/P&gt;&lt;TABLE width="489"&gt;&lt;TBODY&gt;&lt;TR&gt;&lt;TD width="110"&gt;&lt;P&gt;Approach&lt;/P&gt;&lt;/TD&gt;&lt;TD width="162"&gt;Description&lt;/TD&gt;&lt;TD width="217"&gt;Code Example&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD width="110"&gt;Use IF NOT EXISTS&lt;/TD&gt;&lt;TD width="162"&gt;Avoids the error by creating the table only if it does not exist.&lt;/TD&gt;&lt;TD width="217"&gt;CREATE TABLE IF NOT EXISTS s_test (...)&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD width="110"&gt;Add OR REPLACE for Views&lt;/TD&gt;&lt;TD width="162"&gt;For materialized views, replace the existing definition with the new one.&lt;/TD&gt;&lt;TD width="217"&gt;CREATE OR REPLACE VIEW s_test AS ...&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD width="110"&gt;Drop Table Before Create&lt;/TD&gt;&lt;TD width="162"&gt;Explicitly drop the existing table before creation to ensure a clean slate.&lt;/TD&gt;&lt;TD width="217"&gt;DROP TABLE IF EXISTS s_test; CREATE TABLE s_test (...)&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD width="110"&gt;Use Spark Write Modes&lt;/TD&gt;&lt;TD width="162"&gt;Use the Spark DataFrameWriter with mode("append") or mode("overwrite"), depending on the use case.&lt;/TD&gt;&lt;TD width="217"&gt;df.write.mode("append").saveAsTable("s_test")&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD width="110"&gt;Use OR REFRESH for Streaming&lt;/TD&gt;&lt;TD width="162"&gt;For a streaming table, use the OR REFRESH clause (where supported) to refresh the streaming table.&lt;/TD&gt;&lt;TD width="217"&gt;CREATE OR REFRESH STREAMING TABLE s_test (...)&lt;/TD&gt;&lt;/TR&gt;&lt;/TBODY&gt;&lt;/TABLE&gt;
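&lt;P&gt;In a foreachBatch writer, the same idea can be shown with a minimal sketch (an illustration only, assuming a Delta target named s_test):&lt;/P&gt;&lt;LI-CODE lang="python"&gt;from pyspark.sql import DataFrame

def append_batch(batch_df: DataFrame, batch_id: int):
    # mode("append") creates the table on the first micro-batch and appends on
    # later ones, so a plain CREATE is never retried against an existing table.
    batch_df.write.format("delta").mode("append").saveAsTable("s_test")&lt;/LI-CODE&gt;</description>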
      <pubDate>Mon, 22 Sep 2025 15:41:03 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/append-using-foreach-batch-autoloader/m-p/132709#M49608</guid>
      <dc:creator>ManojkMohan</dc:creator>
      <dc:date>2025-09-22T15:41:03Z</dc:date>
    </item>
    <item>
      <title>Re: append using foreach batch autoloader</title>
      <link>https://community.databricks.com/t5/data-engineering/append-using-foreach-batch-autoloader/m-p/132711#M49610</link>
      <description>&lt;P&gt;Hello guys,&lt;BR /&gt;this is my source code:&lt;/P&gt;&lt;LI-CODE lang="python"&gt;def batch_writer(self, batch_df: DataFrame, batch_id: int):
    app_id: str = self.spark.sparkContext.applicationId
    writer = batch_df.write.format("delta")
    table_name : str = "test"

    if self.spark.catalog.tableExists(table_name):
        if self.write_mode.value.lower() == "append":
            writer = writer.mode("append").option("txnVersion", batch_id).option("txnAppId", app_id)
        elif self.write_mode.value.lower() == "overwrite":
            writer = writer.mode("overwrite").option("txnVersion", batch_id).option("txnAppId", app_id)
    else:
        writer = writer.mode("overwrite").option("txnVersion", batch_id).option("txnAppId", app_id)

    if self.partition_columns:
        writer = writer.partitionBy(*self.partition_columns)
    writer.saveAsTable(f"test")


def _write_streaming_to_delta(self, df: DataFrame, spark: SparkSession = None, *args, **kwargs):
    stream_writer = (df.writeStream.foreachBatch(self.batch_writer))
    if self.write_mode.value.lower() == "append":
        # Build the base stream configuration
        (stream_writer.outputMode("append")
            .option("checkpointLocation", self.checkpoint_location)
            .option("mergeSchema", "true")
            .trigger(once=True))
    elif self.write_mode.value.lower() == "complete":
        # Build the base stream configuration
        (stream_writer.outputMode("complete")
            .option("checkpointLocation", self.checkpoint_location)
            .option("mergeSchema", "true")
            .trigger(once=True))
    elif self.write_mode.value.lower() == "update":
        (stream_writer.outputMode("update")
            .option("checkpointLocation", self.checkpoint_location)
            .option("mergeSchema", "true")
            .trigger(once=True))

    # Start the stream and capture the reference
    query = stream_writer.start()
    query.awaitTermination()&lt;/LI-CODE&gt;</description>
      <pubDate>Mon, 22 Sep 2025 15:46:21 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/append-using-foreach-batch-autoloader/m-p/132711#M49610</guid>
      <dc:creator>seefoods</dc:creator>
      <dc:date>2025-09-22T15:46:21Z</dc:date>
    </item>
    <item>
      <title>Re: append using foreach batch autoloader</title>
      <link>https://community.databricks.com/t5/data-engineering/append-using-foreach-batch-autoloader/m-p/132712#M49611</link>
      <description>&lt;P&gt;Solution:&lt;/P&gt;&lt;P&gt;Can you try the below?&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;Change the creation-mode logic:&lt;/P&gt;&lt;P&gt;Only use .mode("append") or .mode("overwrite") when the table exists.&lt;/P&gt;&lt;P&gt;Use .mode("ignore") or .mode("errorIfExists") (the default) appropriately.&lt;/P&gt;&lt;P&gt;Avoid .mode("overwrite") in streaming foreachBatch to prevent dropping the table schema and causing errors.&lt;/P&gt;&lt;P&gt;Create the table explicitly before streaming starts:&lt;/P&gt;&lt;P&gt;Manually create the Delta table once, outside the streaming foreachBatch logic.&lt;/P&gt;&lt;P&gt;Inside foreachBatch, only append data or perform merges on the existing table.&lt;/P&gt;&lt;P&gt;Check and clean up the checkpoint location if needed:&lt;/P&gt;&lt;P&gt;Sometimes checkpoints get corrupted, causing streaming retries to attempt table creation again.&lt;/P&gt;
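&lt;P&gt;A minimal sketch of the "create once, then only append" pattern could look like this (illustrative only: the schema, checkpoint path and table name s_test are assumptions, and df is the streaming source):&lt;/P&gt;&lt;LI-CODE lang="python"&gt;# Create the Delta table once, outside foreachBatch (hypothetical schema).
spark.sql("CREATE TABLE IF NOT EXISTS s_test (id BIGINT, payload STRING) USING DELTA")

def append_only(batch_df, batch_id):
    # The table already exists, so every micro-batch only appends.
    batch_df.write.format("delta").mode("append").saveAsTable("s_test")

(df.writeStream
    .foreachBatch(append_only)
    .option("checkpointLocation", "/tmp/checkpoints/s_test")  # illustrative path
    .trigger(availableNow=True)
    .start()
    .awaitTermination())&lt;/LI-CODE&gt;</description>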
      <pubDate>Mon, 22 Sep 2025 16:00:57 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/append-using-foreach-batch-autoloader/m-p/132712#M49611</guid>
      <dc:creator>ManojkMohan</dc:creator>
      <dc:date>2025-09-22T16:00:57Z</dc:date>
    </item>
    <item>
      <title>Re: append using foreach batch autoloader</title>
      <link>https://community.databricks.com/t5/data-engineering/append-using-foreach-batch-autoloader/m-p/132772#M49622</link>
      <description>&lt;P&gt;Hi &lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/65591"&gt;@seefoods&lt;/a&gt;, you've got a line in your code:&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;table_name : str = "test"&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;Surely hard coding that in is going to cause some issues &lt;span class="lia-unicode-emoji" title=":grinning_face:"&gt;😀&lt;/span&gt;.&lt;/P&gt;&lt;P&gt;All the best,&lt;/P&gt;&lt;P&gt;BS&lt;/P&gt;</description>
      <pubDate>Mon, 22 Sep 2025 22:56:52 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/append-using-foreach-batch-autoloader/m-p/132772#M49622</guid>
      <dc:creator>BS_THE_ANALYST</dc:creator>
      <dc:date>2025-09-22T22:56:52Z</dc:date>
    </item>
    <item>
      <title>Re: append using foreach batch autoloader</title>
      <link>https://community.databricks.com/t5/data-engineering/append-using-foreach-batch-autoloader/m-p/132824#M49642</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/65591"&gt;@seefoods&lt;/a&gt;,&lt;/P&gt;&lt;P&gt;I'm assuming that you're currently testing your code, so hardcoding table_name was done purposefully.&lt;/P&gt;&lt;P&gt;I guess you have a bug in your code. By default, saveAsTable will throw an exception if the table already exists. This can be changed using a different mode (append, overwrite, etc.). So that tells me that something is wrong with the way you're setting the values below:&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="szymon_dybczak_0-1758625922424.png" style="width: 400px;"&gt;&lt;img src="https://community.databricks.com/t5/image/serverpage/image-id/20181i5B45AD42185055F0/image-size/medium?v=v2&amp;amp;px=400" role="button" title="szymon_dybczak_0-1758625922424.png" alt="szymon_dybczak_0-1758625922424.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;In my opinion you're setting outputMode on the stream_writer object in an incorrect way. outputMode returns a new&amp;nbsp;&lt;SPAN&gt;DataStreamWriter object, but you forgot to assign this new DataStreamWriter to your stream_writer variable.&lt;BR /&gt;So basically the above series of if statements has no effect on the output mode, and hence the default mode is used (which is causing an exception):&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Try to do it in the following way:&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="python"&gt;if self.write_mode.value.lower() == "append":
    stream_writer = (
        stream_writer.outputMode("append")
          .option("checkpointLocation", self.checkpoint_location)
          .option("mergeSchema", "true")
          .trigger(once=True)
    )
elif self.write_mode.value.lower() == "complete":
    stream_writer = (
        stream_writer.outputMode("complete")
          .option("checkpointLocation", self.checkpoint_location)
          .option("mergeSchema", "true")
          .trigger(once=True)
    )
elif self.write_mode.value.lower() == "update":
    stream_writer = (
        stream_writer.outputMode("update")
          .option("checkpointLocation", self.checkpoint_location)
          .option("mergeSchema", "true")
          .trigger(once=True)
    )

query = stream_writer.start()
query.awaitTermination()&lt;/LI-CODE&gt;</description>
      <pubDate>Tue, 23 Sep 2025 11:24:02 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/append-using-foreach-batch-autoloader/m-p/132824#M49642</guid>
      <dc:creator>szymon_dybczak</dc:creator>
      <dc:date>2025-09-23T11:24:02Z</dc:date>
    </item>
    <item>
      <title>Re: append using foreach batch autoloader</title>
      <link>https://community.databricks.com/t5/data-engineering/append-using-foreach-batch-autoloader/m-p/132838#M49646</link>
      <description>&lt;P&gt;Hello&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/110502"&gt;@szymon_dybczak&lt;/a&gt;,&lt;BR /&gt;&lt;BR /&gt;Thanks. I have a few questions about the trigger options: what's the difference between trigger Once and trigger AvailableNow()?&lt;/P&gt;</description>
      <pubDate>Tue, 23 Sep 2025 13:15:10 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/append-using-foreach-batch-autoloader/m-p/132838#M49646</guid>
      <dc:creator>seefoods</dc:creator>
      <dc:date>2025-09-23T13:15:10Z</dc:date>
    </item>
    <item>
      <title>Re: append using foreach batch autoloader</title>
      <link>https://community.databricks.com/t5/data-engineering/append-using-foreach-batch-autoloader/m-p/132842#M49647</link>
      <description>&lt;P&gt;Conceptually they're the same: both will load all available data. But the implementation differs.&lt;BR /&gt;In the case of trigger.Once, Spark Structured Streaming will try to load all available data in a single micro-batch. As you can imagine, if there’s a very large amount of data, this can cause serious issues. That’s why this option is &lt;STRONG&gt;deprecated&lt;/STRONG&gt;.&lt;BR /&gt;trigger.AvailableNow will also load all available data, but by using a series of micro-batches. In the end, the effect will be the same, but with AvailableNow you won’t risk crashing the cluster when trying to load a massive amount of data &lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt;&lt;/P&gt;
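&lt;P&gt;In code the difference is only the trigger argument, e.g. (a minimal sketch, assuming an already configured DataStreamWriter named stream_writer):&lt;/P&gt;&lt;LI-CODE lang="python"&gt;# Deprecated: tries to process all available data in a single micro-batch.
stream_writer = stream_writer.trigger(once=True)

# Preferred: processes all currently available data as a series of micro-batches.
stream_writer = stream_writer.trigger(availableNow=True)&lt;/LI-CODE&gt;</description>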
      <pubDate>Tue, 23 Sep 2025 13:33:16 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/append-using-foreach-batch-autoloader/m-p/132842#M49647</guid>
      <dc:creator>szymon_dybczak</dc:creator>
      <dc:date>2025-09-23T13:33:16Z</dc:date>
    </item>
    <item>
      <title>Re: append using foreach batch autoloader</title>
      <link>https://community.databricks.com/t5/data-engineering/append-using-foreach-batch-autoloader/m-p/132846#M49648</link>
      <description>&lt;P&gt;Thanks a lot&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/110502"&gt;@szymon_dybczak&lt;/a&gt;&amp;nbsp;&lt;span class="lia-unicode-emoji" title=":smiling_face_with_smiling_eyes:"&gt;😊&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 23 Sep 2025 13:43:31 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/append-using-foreach-batch-autoloader/m-p/132846#M49648</guid>
      <dc:creator>seefoods</dc:creator>
      <dc:date>2025-09-23T13:43:31Z</dc:date>
    </item>
    <item>
      <title>Re: append using foreach batch autoloader</title>
      <link>https://community.databricks.com/t5/data-engineering/append-using-foreach-batch-autoloader/m-p/132850#M49651</link>
      <description>&lt;P&gt;Hi &lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/146924"&gt;@BS_THE_ANALYST&lt;/a&gt;, it's just an example&amp;nbsp;&lt;span class="lia-unicode-emoji" title=":beaming_face_with_smiling_eyes:"&gt;😁&lt;/span&gt;&amp;nbsp;for sure!&lt;/P&gt;</description>
      <pubDate>Tue, 23 Sep 2025 14:07:04 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/append-using-foreach-batch-autoloader/m-p/132850#M49651</guid>
      <dc:creator>seefoods</dc:creator>
      <dc:date>2025-09-23T14:07:04Z</dc:date>
    </item>
    <item>
      <title>Re: append using foreach batch autoloader</title>
      <link>https://community.databricks.com/t5/data-engineering/append-using-foreach-batch-autoloader/m-p/132862#M49659</link>
      <description>&lt;P&gt;I've been caught out more times than I'd like to admit with the hardcoded tests causing issues &lt;span class="lia-unicode-emoji" title=":rolling_on_the_floor_laughing:"&gt;🤣&lt;/span&gt;.&amp;nbsp;&lt;BR /&gt;&lt;BR /&gt;Glad your issue got resolved! Best of luck with the project. Would love to hear more about it once you've finished!&lt;BR /&gt;&lt;BR /&gt;All the best,&lt;BR /&gt;BS&lt;/P&gt;</description>
      <pubDate>Tue, 23 Sep 2025 15:54:37 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/append-using-foreach-batch-autoloader/m-p/132862#M49659</guid>
      <dc:creator>BS_THE_ANALYST</dc:creator>
      <dc:date>2025-09-23T15:54:37Z</dc:date>
    </item>
  </channel>
</rss>

