<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: DLT Flow Redeclaration Error After Service Upgrade in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/dlt-flow-redeclaration-error-after-service-upgrade/m-p/133705#M49906</link>
    <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/188367"&gt;@DiskoSuperStar&lt;/a&gt;&amp;nbsp;It seems you’ve run into a&amp;nbsp;&lt;STRONG&gt;recently enforced change&lt;/STRONG&gt;&amp;nbsp;in Databricks DLT/Lakeflow:&lt;BR /&gt;&lt;STRONG&gt;multiple flows (append or otherwise) targeting the same table must have unique names. Your code actually looks correct, so check whether your&amp;nbsp;table_info&amp;nbsp;list contains duplicate entries for the same table and flow type.&lt;/STRONG&gt;&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;&lt;STRONG&gt;Assign unique, deterministic names to each flow per table and flow type.&lt;/STRONG&gt;&lt;/LI&gt;&lt;LI&gt;&lt;STRONG&gt;Do not change the flow-name logic between pipeline runs.&lt;/STRONG&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;&lt;STRONG&gt;Can you also reset/refresh the pipeline to clear stale state and checkpoints? (On the pipeline details page, look for&amp;nbsp;"Reset"&amp;nbsp;or&amp;nbsp;"Full Refresh".)&lt;/STRONG&gt;&lt;/P&gt;</description>
    <pubDate>Fri, 03 Oct 2025 14:17:40 GMT</pubDate>
    <dc:creator>saurabh18cs</dc:creator>
    <dc:date>2025-10-03T14:17:40Z</dc:date>
    <item>
      <title>DLT Flow Redeclaration Error After Service Upgrade</title>
      <link>https://community.databricks.com/t5/data-engineering/dlt-flow-redeclaration-error-after-service-upgrade/m-p/133536#M49877</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Our Delta Live Tables (Lakeflow Declarative Pipelines) pipeline started failing after the Sep 30 / Oct 1 service upgrade with the following error:&lt;/P&gt;&lt;PRE&gt;AnalysisException: Cannot have multiple queries named `&amp;lt;table_name&amp;gt;_realtime_flow` for `&amp;lt;table_name&amp;gt;`.
Additional queries on that table must be named. Note that unnamed queries default
to the same name as the table.&lt;/PRE&gt;&lt;P&gt;We define multiple append flows dynamically in a loop, e.g.:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;import dlt
from pyspark.sql.functions import col, hour, md5, to_date

for config in table_info:
    table_name = config["table_name"]
    path = f"{config['conn_details'].rstrip('/')}/{config['table_name']}/"
    load_type = config["load_type"]

    # create the streaming table
    try:
        dlt.create_streaming_table(name=table_name, comment=f"Raw {table_name} data from S3")
    except Exception as e:
        print("Table already exists")

    if load_type == "DLT":
        # create regular streaming flow
        @dlt.append_flow(
            target=table_name,
            name=f"{table_name}_realtime_flow",
            comment=f"Raw streaming {table_name} data from S3")
        def _ingest_dynamic_table(table_path=path):
            return (
                spark.readStream
                .format("cloudFiles")
                .option("cloudFiles.format", "json")
                .load(table_path)
            )

    if load_type == "Full_load":
        # create one time append flow
        @dlt.append_flow(
            target=table_name,
            name=f"{table_name}_ingest_history_flow",
            once=True,
            comment=f"Raw historical {table_name} data from S3"
        )
        def _ingest_historic_table(table_path=path):
            return (
                spark.read
                .format("json")
                .option("recursiveFileLookup", "true")
                .load(table_path)
                .withColumn("md5OfBody", md5(col("body")))
                .withColumn("ingest_date", to_date(col("ingest_ts")).cast("string"))
                .withColumn("hour", hour(col("ingest_ts")).cast("string"))
                .drop("ingest_ts")
            )&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;This pattern worked fine before (and matches the official docs example with Kafka topics).&lt;BR /&gt;&lt;A href="https://docs.databricks.com/aws/en/dlt/flow-examples" target="_blank" rel="nofollow noopener noreferrer"&gt;https://docs.databricks.com/aws/en/dlt/flow-examples&lt;/A&gt;&amp;nbsp;&amp;gt; second code snippet&lt;/P&gt;&lt;P&gt;The only workaround right now is a full refresh or adding unique suffixes to the flow names (but that breaks checkpoint resumption).&lt;/P&gt;&lt;P&gt;1. Has anyone else hit this since the last service upgrade? My pipeline was stopped and restarted by the service upgrade, and I haven't been able to make it work since (October 1st, yesterday).&lt;/P&gt;&lt;P&gt;2. Is this a documented behaviour change (enforcement of unique flow names)? I can't seem to find any documentation to support that.&lt;/P&gt;&lt;P&gt;3. What's the recommended restart-safe pattern if we want stable flow names (to resume from checkpoints), without needing a full refresh every time we restart the pipeline?&lt;/P&gt;&lt;P&gt;Any advice would be greatly appreciated!&lt;/P&gt;</description>
      <pubDate>Thu, 02 Oct 2025 12:17:57 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/dlt-flow-redeclaration-error-after-service-upgrade/m-p/133536#M49877</guid>
      <dc:creator>DiskoSuperStar</dc:creator>
      <dc:date>2025-10-02T12:17:57Z</dc:date>
    </item>
    <item>
      <title>Re: DLT Flow Redeclaration Error After Service Upgrade</title>
      <link>https://community.databricks.com/t5/data-engineering/dlt-flow-redeclaration-error-after-service-upgrade/m-p/133705#M49906</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/188367"&gt;@DiskoSuperStar&lt;/a&gt;&amp;nbsp;It seems you’ve run into a&amp;nbsp;&lt;STRONG&gt;recently enforced change&lt;/STRONG&gt;&amp;nbsp;in Databricks DLT/Lakeflow:&lt;BR /&gt;&lt;STRONG&gt;multiple flows (append or otherwise) targeting the same table must have unique names. Your code actually looks correct, so check whether your&amp;nbsp;table_info&amp;nbsp;list contains duplicate entries for the same table and flow type.&lt;/STRONG&gt;&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;&lt;STRONG&gt;Assign unique, deterministic names to each flow per table and flow type.&lt;/STRONG&gt;&lt;/LI&gt;&lt;LI&gt;&lt;STRONG&gt;Do not change the flow-name logic between pipeline runs.&lt;/STRONG&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;&lt;STRONG&gt;Can you also reset/refresh the pipeline to clear stale state and checkpoints? (On the pipeline details page, look for&amp;nbsp;"Reset"&amp;nbsp;or&amp;nbsp;"Full Refresh".)&lt;/STRONG&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 03 Oct 2025 14:17:40 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/dlt-flow-redeclaration-error-after-service-upgrade/m-p/133705#M49906</guid>
      <dc:creator>saurabh18cs</dc:creator>
      <dc:date>2025-10-03T14:17:40Z</dc:date>
    </item>
  </channel>
</rss>

