Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Autoloader creates columns not present in the source

ks1248
New Contributor III

I have been exploring Autoloader to ingest gzipped JSON files from an S3 source.

The notebook fails on the first run due to a schema mismatch; after re-running it, the schema evolves and the ingestion succeeds.

On analysing the schema of the Delta table created by the ingestion, I found two new columns, `id` and `optionsDefaults`.

These columns are not present in the original data and contain only nulls.

Is there something I'm missing?

1 ACCEPTED SOLUTION


ks1248
New Contributor III

Hi @Debayan Mukherjee, @Kaniz Fatma

Thank you for replying to my question.

I was able to figure out the issue: I was creating the schema and checkpoint folders under the same path as the Autoloader's source location. This caused the schema to change on every run, because the source data now included the schema and checkpoint metadata files as well.
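The overlap described above can be sanity-checked with a small path-prefix test. This is a hypothetical helper, not part of Auto Loader; the bucket names and prefixes are made-up examples:

```python
def overlaps(source: str, metadata: str) -> bool:
    """True if `metadata` sits under `source`, i.e. Auto Loader would
    pick up its own schema/checkpoint files as input data."""
    src = source.rstrip("/") + "/"
    meta = metadata.rstrip("/") + "/"
    return meta.startswith(src)

# Buggy layout from this thread: metadata inside the source prefix.
print(overlaps("s3://bucket/raw/events", "s3://bucket/raw/events/_schemas"))  # True
# Fixed layout: metadata kept elsewhere.
print(overlaps("s3://bucket/raw/events", "s3://bucket/autoloader/_schemas"))  # False
```

The trailing-slash normalisation matters: without it, a sibling prefix like `s3://bucket/raw/events2` would wrongly count as being inside `s3://bucket/raw/events`.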

I fixed this by pointing the schema and checkpoint locations to a path outside the source location.
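For reference, a minimal sketch of that layout might look like the following. It assumes a Databricks notebook where `spark` is already defined; all paths and the table name are placeholders, not the original poster's actual values:

```python
# Hypothetical paths; the key point is that the schema and checkpoint
# prefixes live OUTSIDE the source prefix, so Auto Loader never lists
# its own metadata files as input.
source_path     = "s3://my-bucket/raw/events/"            # data only
schema_path     = "s3://my-bucket/autoloader/schemas/"    # outside the source
checkpoint_path = "s3://my-bucket/autoloader/checkpoints/"

(spark.readStream
    .format("cloudFiles")
    .option("cloudFiles.format", "json")                  # gzipped JSON is decompressed automatically
    .option("cloudFiles.schemaLocation", schema_path)
    .load(source_path)
  .writeStream
    .format("delta")
    .option("checkpointLocation", checkpoint_path)
    .trigger(availableNow=True)
    .toTable("events_bronze"))
```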


2 REPLIES

Debayan
Databricks Employee

Hi, could you please provide a screenshot (before and after) and, if possible, the notebook content?

