I have a DLT pipeline that processes messages from Event Grid. The message schema contains two columns that differ only in case: "employee_id" and "employee_ID".
I tried setting spark.sql.caseSensitive to true both in my DLT notebook and in the DLT pipeline configuration, but it didn't work. The same setting works in a normal PySpark notebook; it only fails in DLT.
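For context, this is roughly what I put in the pipeline settings JSON (a sketch; the "configuration" map is where Spark confs go in DLT pipeline settings):

```json
{
  "configuration": {
    "spark.sql.caseSensitive": "true"
  }
}
```

In the notebook itself I set the same conf with spark.conf.set("spark.sql.caseSensitive", "true").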
Error:
terminated with exception: [DELTA_DUPLICATE_COLUMNS_FOUND] Found duplicate column(s) in the data to save: data.message.empdetail.employee_id SQLSTATE: XXKST