pyspark delta table schema evolution
07-21-2022 11:42 PM
I am using schema evolution on a Delta table; the code runs in a Databricks notebook.
(df.write
    .format("delta")
    .mode("append")
    .option("mergeSchema", "true")
    .partitionBy("date")
    .save(path))
But I still get the error below. Is it correct to define the schema and enable mergeSchema at the same time?
AnalysisException: The specified schema does not match the existing schema at path.
== Specified ==
root
-- A: string (nullable = false)
-- B: string (nullable = true)
-- C: long (nullable = true)
== Existing ==
root
-- A: string (nullable = true)
-- B: string (nullable = true)
-- C: long (nullable = true)
== Differences ==
- Field A is non-nullable in specified schema but nullable in existing schema.
If your intention is to keep the existing schema, you can omit the
schema from the create table command. Otherwise please ensure that
the schema matches.
Labels:
- Databricks notebook
- Delta
- Delta table
- Table
1 REPLY
08-30-2022 02:04 PM
Hi @z yang, please provide the df creation code as well so we can understand the complete exception and scenario.

