Hello @szymon_dybczak
I'm using object storage in Azure.
I have solved it now with a workaround. With Azure Data Factory I was copying the files into the Azure storage folder, but instead of naming the folder "yyyy-MM-ddTHH:mm:ss:fffK" I named it "yyyy-MM-ddTHH-mm-ss-fffK", i.e. with hyphens instead of colons (presumably the colons were the problem, since Hadoop's Path/URI handling doesn't accept ":" inside a path segment).
Then in Databricks I parse the folder names with Python's datetime.strptime using the format "%Y-%m-%dT%H-%M-%S-%fZ", so that I can pick the latest folder and read it with:
- df = spark.read.option("multiLine", "true").json(f"/mnt/middleware/changerequests/1. ingest/{FolderName}/changerequests.json")
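
For completeness, the folder-selection part looks roughly like this. This is a minimal sketch, not my exact notebook: it assumes the timestamped folders sit directly under the mount path from above, and that dbutils and spark are available as usual in a Databricks notebook.

from datetime import datetime

base_path = "/mnt/middleware/changerequests/1. ingest/"

# Directories returned by dbutils.fs.ls carry a trailing "/" in their name
folder_names = [f.name.rstrip("/") for f in dbutils.fs.ls(base_path)]

def parse_ts(name):
    # Folder names use hyphens instead of colons, e.g. "2024-05-01T12-30-45-123Z" (illustrative)
    try:
        return datetime.strptime(name, "%Y-%m-%dT%H-%M-%S-%fZ")
    except ValueError:
        return None  # skip anything that isn't a timestamped folder

# Pick the folder with the latest parsed timestamp
FolderName = max((n for n in folder_names if parse_ts(n) is not None), key=parse_ts)

df = spark.read.option("multiLine", "true").json(f"{base_path}{FolderName}/changerequests.json")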
So not an ideal solution unfortunately, but I was able to fix it with a workaround.
Btw, which object storage are you using? I am wondering why it works for you and not for me... In the Hadoop link you shared I can't find anything about it not working on Azure?