Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Schema Evolution - Auto Loader for Avro format is not working as expected

venkat09
New Contributor III

* Reading Avro files from S3 and writing them to a Delta table.

* Ingested a sample of 10 files containing four columns; the schema was inferred automatically, as expected.

* Introduced a new file containing a new column [foo] alongside the existing columns; the stream failed with an "identified new field" error, which is expected.

* Restarted the stream, which added the new column to the Delta table.

* Introduced another new file containing a further new column [Foo, which differs from the previous new column only by case].

* Expected: the stream should not fail, and the new column's data should be captured in **_rescued_data**.

* Actual: the stream failed with the error below.

* com.databricks.sql.transaction.tahoe.DeltaAnalysisException: Found duplicate column(s) in the data to save: metadata

NOTE: I saw the option `readerCaseSensitive` in the documentation, but its explanation is unclear. I tried setting it to both false and true and hit the same issue either way.

```
stream = (spark.readStream
  .format("cloudFiles")
  .option("cloudFiles.format", "avro")
  .option("cloudFiles.schemaLocation", bronzeCheckpoint)
  # .option("readerCaseSensitive", False)
  .load(rawDataSource)
  .writeStream
  .option("path", bronzeTable)
  .option("checkpointLocation", bronzeCheckpoint)
  .option("mergeSchema", True)
  .table(bronzeTableName)
)
```
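For reference, the Auto Loader option that controls this behavior appears to be `cloudFiles.schemaEvolutionMode`; its documented `rescue` mode routes new columns into `_rescued_data` instead of failing the stream. A sketch of the same stream with rescue mode requested (I have not confirmed whether this avoids the case-collision error):

```
stream = (spark.readStream
  .format("cloudFiles")
  .option("cloudFiles.format", "avro")
  .option("cloudFiles.schemaLocation", bronzeCheckpoint)
  # route unexpected columns to _rescued_data rather than failing the stream
  .option("cloudFiles.schemaEvolutionMode", "rescue")
  .load(rawDataSource)
  .writeStream
  .option("path", bronzeTable)
  .option("checkpointLocation", bronzeCheckpoint)
  .option("mergeSchema", True)
  .table(bronzeTableName)
)
```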

My understanding from the documentation is that, if there is a case mismatch in a column name, the column that is not in the captured schema should be moved to _rescued_data. Please let me know if that's not the case. Thanks
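To make my expectation concrete, here is a toy, non-Spark model of the routing I expected (the `route_fields` helper is made up purely for illustration; it is not part of any Databricks API). Only field names that exactly match the inferred schema land in schema columns; names that differ only by case, or are entirely new, are rescued:

```python
def route_fields(schema_cols, record):
    """Toy model of the _rescued_data routing I expected from Auto Loader.

    Fields whose names exactly match an inferred schema column are kept;
    fields that differ only by case (e.g. 'Foo' vs schema column 'foo'),
    or that are entirely new, are routed to the rescued bucket.
    """
    exact = set(schema_cols)
    matched, rescued = {}, {}
    for name, value in record.items():
        if name in exact:
            matched[name] = value
        else:
            # covers both case-only mismatches and brand-new columns
            rescued[name] = value
    return matched, rescued


matched, rescued = route_fields(["id", "foo"], {"id": 1, "Foo": 2, "bar": 3})
# matched -> {"id": 1}; rescued -> {"Foo": 2, "bar": 3}
```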

1 REPLY

venkat09
New Contributor III

I am attaching a sample code notebook that helps reproduce the issue.
