Hi IM_01,
You can set pipelines.reset.allowed as a table property directly in your pipeline definition. The approach depends on whether you are using Python or SQL:
Python:
import dlt

@dlt.table(
    table_properties={"pipelines.reset.allowed": "true"}
)
def my_streaming_table():
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/path/to/data")
    )
SQL:
CREATE OR REFRESH STREAMING TABLE my_streaming_table
TBLPROPERTIES ("pipelines.reset.allowed" = "true")
AS SELECT * FROM STREAM(LIVE.source_table);
Setting this property to "true" allows a full refresh of that specific table, which is what you need to resolve the DELTA_STREAMING_INCOMPATIBLE_SCHEMA_CHANGE_USE_LOG error. Once the full refresh has picked up the schema change, consider setting the property back to "false" to prevent accidental full refreshes in production.
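If you toggle the flag between an initial full-refresh run and normal production runs, a small helper can keep the property string consistent. This is a hypothetical sketch (the helper name and signature are my own, not part of the dlt API); note that the property value must be the string "true" or "false", not a Python boolean:

```python
def reset_properties(allow_reset: bool) -> dict:
    # pipelines.reset.allowed expects a string value, not a boolean;
    # pass the returned dict to table_properties in @dlt.table(...)
    return {"pipelines.reset.allowed": "true" if allow_reset else "false"}
```

For example, `@dlt.table(table_properties=reset_properties(False))` would define the table with full refreshes blocked.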
More detail on pipeline table properties is available in the docs: Pipeline table properties.