Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Autoloader fails when creating external Delta table in same notebook

yit
Contributor

Hi everyone,

I’ve set up Databricks Autoloader to ingest data from ADLS into a Delta table. The table is defined as an external Delta table, with its location pointing to a path in ADLS.

Here’s the flow I’m using:

  • On the first run for a given data source, I create the external Delta table in a notebook.

  • Immediately after, I invoke Autoloader (within the same notebook) to start streaming data into the table.

However, I often (but not always) encounter the following error on the first run:

 
Failed to write to the schema log at location abfss://{container}@{storage_account}.dfs.core.windows.net/my-path/schema. SQLSTATE: XXKST

As a workaround, I tried splitting the process:

  • I run one notebook to create the external table.

  • Then, I run another notebook separately to start the Autoloader.

With this approach, the error does not occur.

 

What could be causing this intermittent schema log write failure when creating the table and starting Autoloader in the same notebook? Is this a timing or locking issue due to table creation and Autoloader initialization being too close together?

 

2 ACCEPTED SOLUTIONS


lingareddy_Alva
Honored Contributor III

Hi @yit 

This is a classic timing and metadata synchronization issue between Delta table creation and Autoloader initialization.
Here's what's happening and how to fix it.

The error occurs because:

  • Delta table creation writes initial metadata to the _delta_log directory.

  • Autoloader schema inference tries to write its schema log to the same location almost simultaneously.

  • Storage-side propagation delays in ADLS can cause conflicts when the two operations happen too close together.

  • Metastore synchronization may not be complete when Autoloader starts.

Add Explicit Wait/Validation:

import time
from delta.tables import DeltaTable

def create_table_and_wait(table_name, table_location):
    """Create table and ensure it's ready for Autoloader"""
    
    # Create the external Delta table
    spark.sql(f"""
        CREATE TABLE IF NOT EXISTS {table_name} (
            -- your schema here
        ) USING DELTA
        LOCATION '{table_location}'
    """)
    
    # Wait for table creation to complete
    time.sleep(5)
    
    # Validate table is accessible and metadata is ready
    max_retries = 10
    for attempt in range(max_retries):
        try:
            # Try to access the Delta table metadata
            delta_table = DeltaTable.forPath(spark, table_location)
            table_version = delta_table.history(1).collect()[0].version
            print(f"Table ready at version {table_version}")
            break
        except Exception as e:
            if attempt < max_retries - 1:
                print(f"Waiting for table metadata... attempt {attempt + 1}")
                time.sleep(2)
            else:
                raise Exception(f"Table not ready after {max_retries} attempts: {e}")
    
    # Additional validation - ensure directory structure exists
    try:
        dbutils.fs.ls(f"{table_location}/_delta_log/")
        print("Delta log directory confirmed")
    except Exception:
        time.sleep(3)  # Additional wait if needed

# Usage
create_table_and_wait("my_catalog.my_schema.my_table", "abfss://container@storage.dfs.core.windows.net/my-path/")

# Now start Autoloader
# Note: with schema inference, Auto Loader also needs a cloudFiles.schemaLocation;
# "source_path", "schema_path" and "checkpoint_path" are placeholders for your real ADLS paths.
autoloader_stream = spark.readStream \
    .format("cloudFiles") \
    .option("cloudFiles.format", "parquet") \
    .option("cloudFiles.schemaLocation", "schema_path") \
    .load("source_path") \
    .writeStream \
    .option("checkpointLocation", "checkpoint_path") \
    .toTable("my_catalog.my_schema.my_table")
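
As a side note (an assumption based on the schema path in your error message, not something I can confirm from your setup): if cloudFiles.schemaLocation or the checkpoint sits inside the table's LOCATION directory, Auto Loader's schema log and the table's _delta_log end up under the same path, which makes this kind of write conflict more likely. Keeping the schema and checkpoint locations in a directory separate from the table data is generally the safer layout.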

 

LR


yit
Contributor

Thank you for your response!

I've tried something similar: I added time.sleep(10) between table creation and Autoloader initialization, but it did not work.

What worked was separating the table creation and the Autoloader initialization into different cells in the Databricks notebook. I'll mark your response as the accepted solution, but I'll also include mine in case someone else finds it useful.
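
For reference, this is roughly how the working notebook is structured now. The table name, columns, and the ADLS paths below (table location, schema location, checkpoint) are placeholders, not the actual values from my environment:

# Cell 1: create the external Delta table and let this cell finish on its own
spark.sql("""
    CREATE TABLE IF NOT EXISTS my_catalog.my_schema.my_table (
        id BIGINT,
        payload STRING
    ) USING DELTA
    LOCATION 'abfss://container@storage.dfs.core.windows.net/my-path/'
""")

# Cell 2: start Autoloader only after the cell above has completed
(spark.readStream
    .format("cloudFiles")
    .option("cloudFiles.format", "parquet")
    .option("cloudFiles.schemaLocation", "abfss://container@storage.dfs.core.windows.net/my-path-schema/")
    .load("abfss://container@storage.dfs.core.windows.net/source-path/")
    .writeStream
    .option("checkpointLocation", "abfss://container@storage.dfs.core.windows.net/my-path-checkpoint/")
    .toTable("my_catalog.my_schema.my_table"))

The only difference from my original setup is that the CREATE TABLE statement and the readStream call no longer run back to back in the same cell.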
