Hi @aniket07
With Serverless compute, Spark defers evaluation: the query is only analyzed and executed when you trigger an action (such as `display()` or `count()`), so a missing or invalid path is only detected at that point. On an All-Purpose cluster, `spark.read.load()` eagerly resolves the path and infers the schema when the DataFrame is created, so the error surfaces right away.
This difference comes down to when each environment analyzes the query plan: Serverless runs on Spark Connect, where the client only builds a logical plan and the server validates it (including path resolution) at execution time, while a classic cluster analyzes the plan, and therefore touches storage, as soon as the DataFrame is defined.
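A minimal sketch of the difference, assuming a Parquet read from a hypothetical nonexistent path (replace `missing_path` with whatever you are testing against):

```python
# Hypothetical path that does not exist; adjust to your storage layout.
missing_path = "abfss://container@account.dfs.core.windows.net/does/not/exist"

df = spark.read.format("parquet").load(missing_path)
# All-Purpose cluster: the line above fails immediately with an
# AnalysisException (path not found), because the path is resolved
# and the schema inferred at DataFrame creation.

# Serverless: the line above returns a DataFrame without touching
# storage; the same error only surfaces once an action runs:
display(df)      # error raised here on Serverless
# df.count()     # any other action would trigger it as well
```

If you want the failure to surface early on Serverless too, forcing analysis right after the read should do it, for example accessing `df.schema` or running a cheap action like `df.head(1)`.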