<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Lakeflow SDP partition error in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/lakeflow-sdp-partition-error/m-p/156498#M54435</link>
    <description>&lt;P&gt;Hi,&lt;BR /&gt;I am trying to log an exception in Lakeflow SDP. When an exception occurs, I first write a log entry into an audit table and then return an empty streaming dataframe, as shown below:&lt;/P&gt;&lt;LI-CODE lang="python"&gt;try:
	raise Exception("testexception")
	return df
except Exception as e:
	df = spark.createDataFrame([{"error_msg": str(e)}], schema="error_msg string")
	df.write.insertInto("cat.sch.tbl_stg_tst_audit")

	df = spark.readStream.format("rate").load()
	df = df.select(*[lit(None).cast(coltype).alias(colname) for colname, coltype in tb_schema])
	df = df.where("1==0")
	return df&lt;/LI-CODE&gt;&lt;P&gt;but I am getting the error below when the pipeline writes to the original table:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;Category: Error
Message: The number of partitions (0) used in previous microbatch is different from the current number of partitions (8). There could be two possible reasons:
1. Option "numPartitions" of the rate source gets changed during query restart.
2. The size of the cluster might change during query restart.
Explicitly set option "numPartitions" of the rate source to 0 to fix this issue.
Error class: STREAMING_RATE_SOURCE_V2_PARTITION_NUM_CHANGE_UNSUPPORTED&lt;/LI-CODE&gt;&lt;P&gt;Could anyone please help with this issue?&lt;/P&gt;</description>
    <pubDate>Sat, 09 May 2026 17:40:16 GMT</pubDate>
    <dc:creator>IM_01</dc:creator>
    <dc:date>2026-05-09T17:40:16Z</dc:date>
    <item>
      <title>Lakeflow SDP partition error</title>
      <link>https://community.databricks.com/t5/data-engineering/lakeflow-sdp-partition-error/m-p/156498#M54435</link>
      <description>&lt;P&gt;Hi,&lt;BR /&gt;I am trying to log an exception in Lakeflow SDP. When an exception occurs, I first write a log entry into an audit table and then return an empty streaming dataframe, as shown below:&lt;/P&gt;&lt;LI-CODE lang="python"&gt;try:
	raise Exception("testexception")
	return df
except Exception as e:
	df = spark.createDataFrame([{"error_msg": str(e)}], schema="error_msg string")
	df.write.insertInto("cat.sch.tbl_stg_tst_audit")

	df = spark.readStream.format("rate").load()
	df = df.select(*[lit(None).cast(coltype).alias(colname) for colname, coltype in tb_schema])
	df = df.where("1==0")
	return df&lt;/LI-CODE&gt;&lt;P&gt;but I am getting the error below when the pipeline writes to the original table:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;Category: Error
Message: The number of partitions (0) used in previous microbatch is different from the current number of partitions (8). There could be two possible reasons:
1. Option "numPartitions" of the rate source gets changed during query restart.
2. The size of the cluster might change during query restart.
Explicitly set option "numPartitions" of the rate source to 0 to fix this issue.
Error class: STREAMING_RATE_SOURCE_V2_PARTITION_NUM_CHANGE_UNSUPPORTED&lt;/LI-CODE&gt;&lt;P&gt;Could anyone please help with this issue?&lt;/P&gt;</description>
      <pubDate>Sat, 09 May 2026 17:40:16 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/lakeflow-sdp-partition-error/m-p/156498#M54435</guid>
      <dc:creator>IM_01</dc:creator>
      <dc:date>2026-05-09T17:40:16Z</dc:date>
    </item>
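The error above comes from a consistency check in the rate source: it records its partition count in the streaming checkpoint and refuses to resume when that count changes. A toy pure-Python sketch of the idea of that check (illustrative only, not the actual Spark source; the function and exception names are made up):

```python
class PartitionCountChangedError(Exception):
    """Stands in for STREAMING_RATE_SOURCE_V2_PARTITION_NUM_CHANGE_UNSUPPORTED."""


def validate_rate_source_partitions(checkpointed: int, current: int) -> int:
    """Mimic, in spirit, the check the rate source performs on restart:
    the partition count recorded in the checkpoint must match the
    current one, otherwise the query refuses to resume."""
    if checkpointed != current:
        raise PartitionCountChangedError(
            f"The number of partitions ({checkpointed}) used in previous "
            f"microbatch is different from the current number of "
            f"partitions ({current})."
        )
    return current
```

This is why the fix discussed below is to pin `numPartitions` explicitly: a fixed value cannot drift with cluster size between runs.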
    <item>
      <title>Re: Lakeflow SDP partition error</title>
      <link>https://community.databricks.com/t5/data-engineering/lakeflow-sdp-partition-error/m-p/156508#M54436</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/193958"&gt;@IM_01&lt;/a&gt;&amp;nbsp;!&lt;/P&gt;&lt;P&gt;I think your issue is caused by using the rate source as a dummy empty stream.&lt;/P&gt;&lt;P&gt;The rate source stores its partition count in the streaming checkpoint. Because numPartitions was not explicitly set, it can change between runs depending on cluster size or default parallelism, which triggers STREAMING_RATE_SOURCE_V2_PARTITION_NUM_CHANGE_UNSUPPORTED.&lt;/P&gt;&lt;P&gt;However, I would not recommend this pattern in Lakeflow SDP. A pipeline table function should only return a dataframe; it should not write to an audit table with df.write.insertInto(), because dataset definitions may be analyzed or even retried multiple times, so the side effect can be duplicated or fail unexpectedly.&lt;/P&gt;&lt;P&gt;So let the pipeline fail normally and use the Lakeflow pipeline event log for monitoring. If needed, you can create a separate monitoring job, query, or event hook to copy error events into your audit table.&lt;/P&gt;&lt;P&gt;If you still need a workaround for the empty stream, you can explicitly set numPartitions on the rate source and keep it fixed for the lifetime of the checkpoint:&lt;/P&gt;&lt;LI-CODE lang="python"&gt;from pyspark.sql import functions as F

df = (
    spark.readStream
        .format("rate")
        .option("numPartitions", "1")
        .option("rowsPerSecond", "1")
        .load()
)

df = df.select(
    *[F.lit(None).cast(coltype).alias(colname) for colname, coltype in tb_schema]
).where("false")

return df&lt;/LI-CODE&gt;&lt;P&gt;But if the existing checkpoint already stored numPartitions = 0, you should either set it to the value shown in the error message or do a full refresh on the pipeline before changing it to a new stable value.&lt;/P&gt;</description>
      <pubDate>Sun, 10 May 2026 14:51:55 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/lakeflow-sdp-partition-error/m-p/156508#M54436</guid>
      <dc:creator>amirabedhiafi</dc:creator>
      <dc:date>2026-05-10T14:51:55Z</dc:date>
    </item>
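The "copy error events into your audit table" step suggested above can be sketched in plain Python. The field names used here (event_type, origin.flow_name, a JSON details column with flow_progress.status and flow_progress.message) mirror the shape of the pipeline event log but are assumptions; check them against the event log schema in your workspace before relying on them:

```python
import json


def extract_flow_errors(events):
    """Return (flow_name, message) pairs for failed flows.

    'events' is a list of dicts shaped like pipeline event-log rows,
    e.g. {"event_type": ..., "origin": {"flow_name": ...},
    "details": "(JSON string)"}. These field names are illustrative
    assumptions, not a guaranteed schema.
    """
    errors = []
    for ev in events:
        if ev.get("event_type") != "flow_progress":
            continue
        progress = json.loads(ev.get("details", "{}")).get("flow_progress", {})
        if progress.get("status") == "FAILED":
            errors.append((ev.get("origin", {}).get("flow_name"),
                           progress.get("message")))
    return errors
```

In practice the same filter would run as a query over the event log (or inside an event hook) with the results appended to the audit table, keeping the table function itself side-effect free.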
    <item>
      <title>Re: Lakeflow SDP partition error</title>
      <link>https://community.databricks.com/t5/data-engineering/lakeflow-sdp-partition-error/m-p/156561#M54447</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/226887"&gt;@amirabedhiafi&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks for the response&amp;nbsp;&lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt;&lt;/P&gt;&lt;P&gt;I am already using an event hook to capture events of type flow_definition &amp;amp; flow_progress.&lt;/P&gt;&lt;P&gt;However, I started wondering: if the exception is handled, will it still be captured in the audit table used by the event hook?&lt;BR /&gt;To make sure I do not miss any exceptions, I was thinking of also logging exceptions to the audit table inside the dlt table decorator function.&lt;/P&gt;&lt;P&gt;Please let me know if this approach would work, and feel free to point out any loopholes or suggest a better approach, Amira&amp;nbsp;&lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt;.&lt;/P&gt;</description>
      <pubDate>Mon, 11 May 2026 10:30:20 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/lakeflow-sdp-partition-error/m-p/156561#M54447</guid>
      <dc:creator>IM_01</dc:creator>
      <dc:date>2026-05-11T10:30:20Z</dc:date>
    </item>
    <item>
      <title>Re: Lakeflow SDP partition error</title>
      <link>https://community.databricks.com/t5/data-engineering/lakeflow-sdp-partition-error/m-p/156564#M54449</link>
      <description>&lt;P&gt;Hi again!&lt;/P&gt;&lt;P&gt;Yes, if you handle the exception, the event hook may not see it as a pipeline failure. That is why I think the fix is to re-raise the exception and audit from the event log, rather than writing manually inside the table "decorator" function.&lt;/P&gt;</description>
      <pubDate>Mon, 11 May 2026 10:52:46 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/lakeflow-sdp-partition-error/m-p/156564#M54449</guid>
      <dc:creator>amirabedhiafi</dc:creator>
      <dc:date>2026-05-11T10:52:46Z</dc:date>
    </item>
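The re-raise advice in the last reply can be sketched as a minimal pure-Python pattern. Here 'transform' and 'audit_log' are illustrative stand-ins for the real table-building logic and audit sink, not a Lakeflow API:

```python
def build_table(transform, audit_log):
    """Run the table-building logic; on failure, record the error for
    auditing and then re-raise, so the pipeline run still fails and the
    failure reaches the event log and any event hooks."""
    try:
        return transform()
    except Exception as e:
        audit_log.append({"error_msg": str(e)})
        raise  # do not swallow the error: let the pipeline fail visibly
```

The key point is the bare `raise`: the audit write is best-effort bookkeeping, while the authoritative failure record stays in the pipeline event log, so no dummy empty stream (and no rate-source workaround) is needed at all.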
  </channel>
</rss>

