We are using Auto Loader to read JSON files from S3 and ingest the data into the bronze layer. However, Auto Loader's schema inference does not preserve the order of columns from the JSON files; instead, it sorts them lexicographically.
For example, given the following JSON:
{"colB": 1, "colC": 2, "colA": 3}
{"colB": 1, "colC": 2, "colA": 3}
{"colB": 1, "colC": 2, "colA": 3}
and the following code:
import dlt

@dlt.table(table_properties={'quality': 'bronze'})
def my_table():
    return (
        spark.readStream.format('cloudFiles')
        .option('cloudFiles.format', 'json')
        .load('s3://my_bucket/my_table/')
    )
It will create the following table, with the columns sorted lexicographically rather than in the original order:

colA | colB | colC
-----|------|-----
   3 |    1 |    2
   3 |    1 |    2
   3 |    1 |    2
This is really undesirable behavior, and there seems to be no option to preserve the order of columns. The only way to preserve the column order is to explicitly specify the schema, which defeats the purpose of schema inference.
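For reference, this is what I mean by specifying the schema explicitly. It is only a minimal sketch; the long types and the colB/colC/colA ordering are assumptions based on the sample data above, and the function name is made up.

import dlt
from pyspark.sql.types import StructType, StructField, LongType

# Assumed schema, declared in the original JSON column order (colB, colC, colA)
explicit_schema = StructType([
    StructField('colB', LongType(), True),
    StructField('colC', LongType(), True),
    StructField('colA', LongType(), True),
])

@dlt.table(table_properties={'quality': 'bronze'})
def my_table_with_schema():
    return (
        spark.readStream.format('cloudFiles')
        .option('cloudFiles.format', 'json')
        .schema(explicit_schema)  # disables inference; columns come out in the declared order
        .load('s3://my_bucket/my_table/')
    )

This works, but it means maintaining the schema by hand for every table we ingest.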
Looking at Databricks demos, this seems to be the default behavior: https://notebooks.databricks.com/demos/auto-loader/01-Auto-loader-schema-evolution-Ingestion.html
Is there any way in Auto Loader to preserve the order of columns, as most JSON-to-DataFrame libraries do?
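For comparison, this is the order-preserving behavior I have in mind, using plain Python and pandas as an example (not Auto Loader, just an illustration of what other JSON-to-DataFrame libraries do with the same data):

import json
import pandas as pd

lines = [
    '{"colB": 1, "colC": 2, "colA": 3}',
    '{"colB": 1, "colC": 2, "colA": 3}',
    '{"colB": 1, "colC": 2, "colA": 3}',
]

# Python dicts preserve insertion order, so the resulting DataFrame keeps
# the column order from the JSON: colB, colC, colA.
df = pd.DataFrame([json.loads(line) for line in lines])
print(list(df.columns))  # ['colB', 'colC', 'colA']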