As mentioned, the code runs just fine on an all-purpose cluster, therefore special characters can't be the problem.
But I checked this further by creating two versions of the file:
- a regular CSV with comma as separator and dot as decimal separator,
- the identical file but "European-style", with semicolon as separator and comma as decimal separator.
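For illustration, the difference between the two variants can be sketched with plain Python's csv module (the column names and values here are made up, not from my actual file):

```python
import csv
import io

# Regular CSV: comma as separator, dot as decimal separator
regular = "id,price\n1,3.14\n"

# "European-style" CSV: semicolon as separator, comma as decimal separator
european = "id;price\n1;3,14\n"

# The regular variant parses with the default comma delimiter
regular_rows = list(csv.reader(io.StringIO(regular)))

# The European variant needs delimiter=";", and the decimal comma
# must be converted to a dot before casting to float
european_rows = list(csv.reader(io.StringIO(european), delimiter=";"))
price = float(european_rows[1][1].replace(",", "."))
```

Both variants yield the same header and the same numeric value once parsed with the right settings.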
The regular CSV file works fine in a DLT pipeline, therefore there are no invalid characters in the column names.
If I don't use .option("delimiter", ";"), reading the European-style CSV file throws the mentioned error both on an all-purpose cluster and in a DLT pipeline (as expected, because semicolons are invalid in column names).
When I specify .option("delimiter", ";"), the code runs on an all-purpose cluster but throws the mentioned error in a DLT pipeline. To me that indicates that .option("delimiter", ";") is for some reason ignored when run via a DLT pipeline. The special characters flagged by the error message must be the semicolons, as the first row contains no other special characters.
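The suspected behaviour can be mimicked outside of Spark: if a semicolon-separated header row is parsed with the default comma delimiter (i.e. as if the delimiter option were ignored), the whole row collapses into a single field that contains semicolons, which would then look like a column name with invalid characters. This is just a plain-Python sketch of the hypothesis, not the actual DLT code path, and the header names are made up:

```python
import csv
import io

header = "id;price;quantity\n"

# With the correct delimiter the header splits into three clean names
with_delim = next(csv.reader(io.StringIO(header), delimiter=";"))

# If the delimiter setting is ignored (default comma), the whole row
# becomes one "column name" that contains semicolons
without_delim = next(csv.reader(io.StringIO(header)))
```

In the second case the single resulting "column name" contains semicolons, matching the kind of invalid-character complaint the DLT pipeline raises.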
Can anyone reproduce this?