08-07-2025 12:49 AM
Hi everyone,
I've set up Databricks Autoloader to ingest data from ADLS into a Delta table. The table is defined as an external Delta table, with its location pointing to a path in ADLS.
Here's the flow I'm using:
On the first run for a given data source, I create the external Delta table in a notebook.
Immediately after, I invoke Autoloader (within the same notebook) to start streaming data into the table.
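Roughly, the combined run looks like this; the table name, ADLS paths, source file format, and columns are just placeholders for illustration:

# Single notebook, single run: create the table, then start Autoloader right away

# 1) Create the external Delta table in ADLS
spark.sql("""
    CREATE TABLE IF NOT EXISTS my_catalog.my_schema.my_table (
        id STRING,
        event_ts TIMESTAMP
    ) USING DELTA
    LOCATION 'abfss://container@storage.dfs.core.windows.net/my-table/'
""")

# 2) Immediately afterwards, start streaming into the same table with Autoloader
(spark.readStream
    .format("cloudFiles")
    .option("cloudFiles.format", "parquet")
    .option("cloudFiles.schemaLocation", "abfss://container@storage.dfs.core.windows.net/schemas/my-table/")
    .load("abfss://container@storage.dfs.core.windows.net/landing/my-source/")
    .writeStream
    .option("checkpointLocation", "abfss://container@storage.dfs.core.windows.net/checkpoints/my-table/")
    .toTable("my_catalog.my_schema.my_table"))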
However, I often (but not always) encounter the following error on the first run:
As a workaround, I tried splitting the process:
I run one notebook to create the external table.
Then, I run another notebook separately to start the Autoloader.
With this approach, the error does not occur.
What could be causing this intermittent schema log write failure when creating the table and starting Autoloader in the same notebook? Is this a timing or locking issue due to table creation and Autoloader initialization being too close together?
08-07-2025 08:49 PM
Hi @yit
This is a classic timing and metadata synchronization issue between Delta table creation and Autoloader initialization.
Here's what's happening and how to fix it.
The error occurs because:
Delta table creation writes initial metadata to the _delta_log directory
Autoloader schema inference tries to write to the same metadata location almost simultaneously
ADLS eventual consistency can cause conflicts when operations happen too quickly
Metastore synchronization may not be complete when Autoloader starts
Add Explicit Wait/Validation:
import time
from delta.tables import DeltaTable

def create_table_and_wait(table_name, table_location):
    """Create table and ensure it's ready for Autoloader"""
    # Create the external Delta table
    spark.sql(f"""
        CREATE TABLE IF NOT EXISTS {table_name} (
            -- your schema here
        ) USING DELTA
        LOCATION '{table_location}'
    """)

    # Wait for table creation to complete
    time.sleep(5)

    # Validate table is accessible and metadata is ready
    max_retries = 10
    for attempt in range(max_retries):
        try:
            # Try to access the Delta table metadata
            delta_table = DeltaTable.forPath(spark, table_location)
            table_version = delta_table.history(1).collect()[0].version
            print(f"Table ready at version {table_version}")
            break
        except Exception as e:
            if attempt < max_retries - 1:
                print(f"Waiting for table metadata... attempt {attempt + 1}")
                time.sleep(2)
            else:
                raise Exception(f"Table not ready after {max_retries} attempts: {e}")

    # Additional validation - ensure the _delta_log directory structure exists
    try:
        dbutils.fs.ls(f"{table_location}/_delta_log/")
        print("Delta log directory confirmed")
    except Exception:
        time.sleep(3)  # Additional wait if needed

# Usage
create_table_and_wait("my_catalog.my_schema.my_table", "abfss://container@storage.dfs.core.windows.net/my-path/")
# Now start Autoloader
# Note: cloudFiles.schemaLocation is required when the schema is inferred rather than supplied explicitly
autoloader_stream = spark.readStream \
    .format("cloudFiles") \
    .option("cloudFiles.format", "parquet") \
    .option("cloudFiles.schemaLocation", "schema_path") \
    .load("source_path") \
    .writeStream \
    .option("checkpointLocation", "checkpoint_path") \
    .toTable("my_catalog.my_schema.my_table")
08-08-2025 08:01 AM
Thank you for your response!
I tried something similar and added a time.sleep(10) between the table creation and the Autoloader initialization, but it did not work.
What worked was separating the table creation and the Autoloader initialization into different cells in the Databricks notebook. I'll mark your response as the accepted solution, but I'll also include my approach below in case someone else finds it useful.
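For reference, the split that works for me looks roughly like this; the table name, paths, and columns are again only placeholders, and the point is simply that the Autoloader step runs in its own cell after the table-creation cell has finished:

# Cell 1: create the external Delta table and let this cell complete
spark.sql("""
    CREATE TABLE IF NOT EXISTS my_catalog.my_schema.my_table (
        id STRING,
        event_ts TIMESTAMP
    ) USING DELTA
    LOCATION 'abfss://container@storage.dfs.core.windows.net/my-table/'
""")

# Cell 2: only now start the Autoloader stream into that table
(spark.readStream
    .format("cloudFiles")
    .option("cloudFiles.format", "parquet")
    .option("cloudFiles.schemaLocation", "abfss://container@storage.dfs.core.windows.net/schemas/my-table/")
    .load("abfss://container@storage.dfs.core.windows.net/landing/my-source/")
    .writeStream
    .option("checkpointLocation", "abfss://container@storage.dfs.core.windows.net/checkpoints/my-table/")
    .toTable("my_catalog.my_schema.my_table"))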