Auto Loader changes the order of columns when inferring JSON schema (sorted lexicographically)
09-06-2024 03:00 AM
We are using Auto Loader to read JSON files from S3 and ingest the data into the bronze layer. However, Auto Loader does not preserve the order of columns from the JSON files during schema inference; instead, it sorts them lexicographically.
For example, if we have the following JSON
{"colB": 1, "colC": 2, "colA": 3}
{"colB": 1, "colC": 2, "colA": 3}
{"colB": 1, "colC": 2, "colA": 3}
And the following code
import dlt

@dlt.table(table_properties={'quality': 'bronze'})
def my_table():
    return (
        spark.readStream.format('cloudFiles')
        .option('cloudFiles.format', 'json')
        .load('s3://my_bucket/my_table/')
    )
It will create the following table:
colA | colB | colC
   3 |    1 |    2
   3 |    1 |    2
   3 |    1 |    2
This is really problematic behavior, and there seems to be no option to preserve the column order. The only way to preserve the order of columns is to explicitly specify the schema, which defeats the purpose of schema inference.
Looking at Databricks demos, this seems to be the default behavior: https://notebooks.databricks.com/demos/auto-loader/01-Auto-loader-schema-evolution-Ingestion.html
Is there any way in Auto Loader to preserve the order of columns, as most JSON-to-DataFrame libraries do?
12-03-2024 10:51 PM
One alternative is to re-order the columns into the order you'd like using the ALTER COLUMN API.
Taking your example,
ALTER TABLE catalog.schema.table ALTER COLUMN colB FIRST;
ALTER TABLE catalog.schema.table ALTER COLUMN colC AFTER colB;
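Note that on Delta tables, reordering columns this way requires column mapping to be enabled. If your table doesn't have it yet, something like the following should work first (the catalog/schema/table name is a placeholder, as above):
ALTER TABLE catalog.schema.table SET TBLPROPERTIES (
  'delta.minReaderVersion' = '2',
  'delta.minWriterVersion' = '5',
  'delta.columnMapping.mode' = 'name'
);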
12-05-2024 10:46 PM
Auto Loader's default behavior of sorting columns lexicographically during schema inference is indeed a limitation when preserving the original order of JSON fields is important. Unfortunately, there isn't a built-in option in Auto Loader to maintain the original column order from JSON files while using automatic schema inference. However, there are a few workarounds you can consider:
1. Explicitly Define the Schema
While this approach doesn't fully leverage Auto Loader's schema inference capabilities, it allows you to maintain control over the column order:
import dlt
from pyspark.sql.types import StructType, StructField, IntegerType

schema = StructType([
    StructField("colB", IntegerType(), True),
    StructField("colC", IntegerType(), True),
    StructField("colA", IntegerType(), True)
])

@dlt.table(table_properties={'quality': 'bronze'})
def my_table():
    return (
        spark.readStream.format('cloudFiles')
        .option('cloudFiles.format', 'json')
        .schema(schema)
        .load('s3://my_bucket/my_table/')
    )
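As a more compact variant, schema() also accepts a DDL-formatted string, which preserves the declared column order just the same (same placeholder path and example types as above):
import dlt

@dlt.table(table_properties={'quality': 'bronze'})
def my_table():
    # DDL string schema; columns are created in the order listed
    return (
        spark.readStream.format('cloudFiles')
        .option('cloudFiles.format', 'json')
        .schema("colB INT, colC INT, colA INT")
        .load('s3://my_bucket/my_table/')
    )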
2. Use a Post-Processing Step
You can leverage Auto Loader's schema inference and then reorder the columns in a subsequent step:
import dlt
from pyspark.sql.functions import col

@dlt.table(table_properties={'quality': 'bronze'})
def my_table():
    df = (
        spark.readStream.format('cloudFiles')
        .option('cloudFiles.format', 'json')
        .load('s3://my_bucket/my_table/')
    )
    # Define the desired column order
    desired_order = ["colB", "colC", "colA"]
    # Reorder columns to match the original JSON layout
    return df.select([col(c) for c in desired_order])
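If you'd rather not hard-code the order, you could derive it from a sample file before defining the table. This is only a sketch under a few assumptions: it uses dbutils.fs.head to peek at one file, the sample file path is a hypothetical placeholder, each line is assumed to be a complete JSON object, and it relies on Python's json module preserving key order (Python 3.7+):
import json

def infer_column_order(sample_file_path):
    # Grab the first line of the sample file (head reads up to maxBytes)
    first_line = dbutils.fs.head(sample_file_path, 65536).splitlines()[0]
    # json.loads preserves the key order of the source document
    return list(json.loads(first_line).keys())

# Hypothetical sample file; point this at any file in the ingest path
desired_order = infer_column_order('s3://my_bucket/my_table/sample.json')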

