How do I use the COPY INTO command to load 200+ tables, each with 50+ columns, into Delta Lake tables with predefined schemas? I am looking for a more generic approach that can be handled in PySpark code.
I am aware that we can pass a column expression into the SELECT clause, but hand-writing the column list for every table seems like a tedious task.
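For context, here is roughly the kind of generic loop I have in mind. This is only a minimal sketch, assuming each target Delta table already exists with its predefined schema and the source files are CSVs; the table-to-path mapping (`table_configs`) and the paths are placeholders:

```python
# Hypothetical mapping of target table name -> raw source path (one entry per table)
table_configs = {
    "sales": "/mnt/raw/sales/",
    "customers": "/mnt/raw/customers/",
    # ... 200+ entries
}

for table_name, source_path in table_configs.items():
    # Derive the column list and types from the target table's own schema,
    # so the SELECT expression never has to be written by hand per table.
    schema = spark.table(table_name).schema
    select_expr = ", ".join(
        f"CAST({f.name} AS {f.dataType.simpleString()}) AS {f.name}"
        for f in schema.fields
    )
    spark.sql(f"""
        COPY INTO {table_name}
        FROM (SELECT {select_expr} FROM '{source_path}')
        FILEFORMAT = CSV
        FORMAT_OPTIONS ('header' = 'true', 'inferSchema' = 'false')
    """)
```

Is building the SELECT expression dynamically from the target table's schema like this a reasonable way to make it generic, or is there a better pattern?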
Any help on this is really appreciated.