Data Engineering

Auto Loader changes the order of columns when inferring JSON schema (sorted lexicographically)

arthurburkhardt
New Contributor

We are using Auto Loader to read JSON files from S3 and ingest data into the bronze layer, but Auto Loader seems to struggle with schema inference: instead of preserving the order of columns from the JSON files, it sorts them lexicographically.

For example, if we have the following JSON

{"colB": 1, "colC": 2, "colA": 3}
{"colB": 1, "colC": 2, "colA": 3}
{"colB": 1, "colC": 2, "colA": 3}

And the following code

import dlt

@dlt.table(table_properties={'quality': 'bronze'})
def my_table():
    return (
        spark.readStream.format('cloudFiles')
        .option('cloudFiles.format', 'json')
        .load('s3://my_bucket/my_table/')
    )

It will create the following table:

colA | colB | colC
-----|------|-----
   3 |    1 |    2
   3 |    1 |    2
   3 |    1 |    2

This is really undesirable behavior, and there seems to be no option to preserve the order of columns. The only way to keep the original column order is to explicitly specify the schema, which defeats the purpose of schema inference.

Looking at Databricks demos, this seems to be the default behavior: https://notebooks.databricks.com/demos/auto-loader/01-Auto-loader-schema-evolution-Ingestion.html

Is there any way in Auto Loader to preserve the order of columns, like in most json-to-dataframe libraries?

2 REPLIES

cgrant
Databricks Employee

One alternative is to re-order the columns into the order you'd like using ALTER TABLE ... ALTER COLUMN.

Taking your example, 

ALTER TABLE catalog.schema.table ALTER COLUMN colB FIRST;
ALTER TABLE catalog.schema.table ALTER COLUMN colC AFTER colB;
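As a rough sketch of the same idea (catalog.schema.table is the same placeholder used above), the statements could also be run from a notebook or a post-deployment script via spark.sql, and the result checked with DESCRIBE TABLE:

# One-off reordering after the bronze table exists.
# "catalog.schema.table" is a placeholder for your actual table name.
spark.sql("ALTER TABLE catalog.schema.table ALTER COLUMN colB FIRST")
spark.sql("ALTER TABLE catalog.schema.table ALTER COLUMN colC AFTER colB")

# Verify the resulting column order.
spark.sql("DESCRIBE TABLE catalog.schema.table").show()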

Sidhant07
Databricks Employee

Auto Loader's default behavior of sorting columns lexicographically during schema inference is indeed a limitation when preserving the original order of JSON fields is important. Unfortunately, there isn't a built-in option in Auto Loader to maintain the original column order from JSON files while using automatic schema inference. However, there are a few workarounds you can consider:

1. Explicitly Define the Schema

While this approach doesn't fully leverage Auto Loader's schema inference capabilities, it allows you to maintain control over the column order:

import dlt
from pyspark.sql.types import StructType, StructField, IntegerType

# Declare the columns in the order you want them to appear
schema = StructType([
    StructField("colB", IntegerType(), True),
    StructField("colC", IntegerType(), True),
    StructField("colA", IntegerType(), True)
])

@dlt.table(table_properties={'quality': 'bronze'})
def my_table():
    return (
        spark.readStream.format('cloudFiles')
        .option('cloudFiles.format', 'json')
        .schema(schema)
        .load('s3://my_bucket/my_table/')
    )

 

2. Use a Post-Processing Step

You can leverage Auto Loader's schema inference and then reorder the columns in a subsequent step:
import dlt
from pyspark.sql.functions import col

@dlt.table(table_properties={'quality': 'bronze'})
def my_table():
    df = (
        spark.readStream.format('cloudFiles')
        .option('cloudFiles.format', 'json')
        .load('s3://my_bucket/my_table/')
    )

    # Define the desired column order
    desired_order = ["colB", "colC", "colA"]

    # Reorder columns
    return df.select([col(c) for c in desired_order])
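
If you expect the schema to evolve, a variant of this sketch (assuming the three columns above are only the known ones) pins those columns first and appends anything newly inferred, so evolved columns still flow through:

import dlt
from pyspark.sql.functions import col

@dlt.table(table_properties={'quality': 'bronze'})
def my_table():
    df = (
        spark.readStream.format('cloudFiles')
        .option('cloudFiles.format', 'json')
        .load('s3://my_bucket/my_table/')
    )
    # Pin the known columns first, then append any columns that
    # schema inference/evolution adds later.
    pinned = ["colB", "colC", "colA"]
    remaining = [c for c in df.columns if c not in pinned]
    return df.select(*pinned, *remaining)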


