Hello,
Let's say we have an empty table S that represents the schema we want to keep.
We have another table T, partitioned by column A, whose schema depends on the files we have loaded into it. Say:
Now, to make T have the same schema as S, I do:
SET spark.sql.sources.partitionOverwriteMode=dynamic;
CREATE OR REPLACE TABLE T PARTITIONED BY (A) as SELECT * FROM S WHERE false;
and the result is, as I wished:
A | B | C | D | E |
1 | b1 | c1 | null | null |
2 | b2 | c2 | null | null |
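For context, here is a minimal repro sketch of the scenario above. The column types and the starting schema of T (A, B, C) are my assumptions; the values are taken from the result table:

```sql
-- Assumed setup: S carries the target schema, T holds the existing data.
CREATE TABLE S (A INT, B STRING, C STRING, D STRING, E STRING);

CREATE TABLE T (A INT, B STRING, C STRING) PARTITIONED BY (A);
INSERT INTO T VALUES (1, 'b1', 'c1'), (2, 'b2', 'c2');

-- Align T's schema with S:
SET spark.sql.sources.partitionOverwriteMode=dynamic;
CREATE OR REPLACE TABLE T PARTITIONED BY (A) AS SELECT * FROM S WHERE false;

-- As observed above, the old rows survive, with D and E null.
SELECT * FROM T;
```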
Good. But I couldn't find anything in the Databricks documentation saying we can do something like that. Worse, the docs say that the overwriteSchema option (a PySpark option, I assume, since I haven't managed to use it from SQL) and dynamic partition overwrite don't work together.
So here's the question: is the behavior described above a bug or a feature?
PS: the runtime is 16.3.