For INSERT INTO ... SELECT in Databricks SQL, columns are mapped by position unless you use the BY NAME clause or an explicit column list; BY NAME matches source columns to target columns by name (including fields in nested structs) and ignores column order.
Ref Doc - https://docs.databricks.com/aws/en/sql/language-manual/sql-ref-syntax-dml-insert-into
INSERT INTO dev.spark_db.tbl_instax_sales BY NAME
SELECT
  src_col_a AS target_col_a,
  src_col_b AS target_col_b,
  src_col_c AS target_col_c
FROM sales;
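For comparison, the explicit-column-list form matches the SELECT output by position against the listed columns, so the aliases are unnecessary; a minimal PySpark sketch, reusing the table and column names from the example above:

# Explicit column list: positional match against the listed target columns;
# target columns not listed receive their default value (or NULL).
spark.sql("""
    INSERT INTO dev.spark_db.tbl_instax_sales
        (target_col_a, target_col_b, target_col_c)
    SELECT src_col_a, src_col_b, src_col_c
    FROM sales
""")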
For df.write.saveAsTable in append or overwrite mode, treat it as schema-on-write: make sure the DataFrame's column names and types align with the target table to avoid analysis errors. Use append to add rows and overwrite to replace the table's data.
# Fetch the target table's column list and align the DataFrame to it.
target_cols = spark.table("dev.spark_db.tbl_instax_sales").columns
sales_df_aligned = sales_df_cleaned.selectExpr(
    "src_col_a AS target_col_a",
    "src_col_b AS target_col_b",
    "src_col_c AS target_col_c"
).select(*target_cols)  # enforce the target's column order

# Either append new rows ...
sales_df_aligned.write.mode("append").saveAsTable("dev.spark_db.tbl_instax_sales")
# ... or replace the table's existing data.
sales_df_aligned.write.mode("overwrite").saveAsTable("dev.spark_db.tbl_instax_sales")
INSERT INTO does not evolve the target table's schema by default; if you need schema evolution, use DataFrame write options (mergeSchema / overwriteSchema) or MERGE with schema evolution instead, as sketched below.
Ref Doc - https://docs.databricks.com/aws/en/delta/update-schema
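A minimal sketch of the DataFrame-side route, reusing the table and DataFrame names from above; mergeSchema and overwriteSchema are standard Delta Lake writer options:

# Append with schema evolution: columns present in the DataFrame but not in
# the table are added to the Delta table's schema.
(sales_df_aligned.write
    .mode("append")
    .option("mergeSchema", "true")
    .saveAsTable("dev.spark_db.tbl_instax_sales"))

# Overwrite the data AND replace the table schema with the DataFrame's schema.
(sales_df_aligned.write
    .mode("overwrite")
    .option("overwriteSchema", "true")
    .saveAsTable("dev.spark_db.tbl_instax_sales"))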