Hi Team,

I am facing an issue. I have a JSON file that is around 700 KB and contains only one record, but after reading and flattening it, the resulting DataFrame has about 620 million rows. Writing that DataFrame to Delta Lake is taking 24 hours. Could you please suggest ways to optimize this and reduce the write time?

We are also trying the VARIANT data type, but we are stuck at the flattening step: the normal flattening functions built for JSON/struct columns do not work on VARIANT.