03-12-2026 07:39 AM
Hi everyone,
We are running into a strange issue when running notebooks on Databricks job clusters using DBR 18. It looks like the Workspace folder is mounted, but the .py file inside cannot be read immediately. I wanted to check if anyone else has experienced this or knows a recommended workaround.
The file is located here:
What we observe:
When the job cluster starts:
The output looks something like:
So it looks like the folder is mounted but the file contents are not available yet.
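A minimal way to distinguish "the directory entry exists" from "the file contents can actually be read" is to list the parent directory and then attempt an actual read. The helper and paths below are illustrative, not our exact code:

```python
import os

def probe(path):
    """Return (parent_listing, first_bytes_or_error) for a path.

    Separates 'the directory entry is visible' from 'the file
    contents can actually be read' -- the two can differ while a
    FUSE mount is still initializing.
    """
    parent = os.path.dirname(path)
    listing = os.listdir(parent) if os.path.isdir(parent) else None
    try:
        with open(path, "rb") as f:
            head = f.read(64)  # actually touch the contents
        return listing, head
    except OSError as e:
        # Listing succeeded but the read failed -> mount not ready
        return listing, e
```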
Additional notes:
Questions:
We would really appreciate any guidance or best practices.
03-13-2026 10:54 PM
The error might be caused by a delay before workspace files become accessible.
The /Workspace mount point appears quickly, but the FUSE daemon may still be initializing auth, metadata, and connections to the workspace storage account.
FUSE = Filesystem in Userspace: a Linux mechanism where a user-space daemon implements a filesystem that the kernel exposes as a normal mount point. On Databricks, paths like /Workspace/... (workspace files) and /Volumes/... (Unity Catalog volumes) are exposed via a FUSE layer.
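If the root cause is indeed an initialization delay, a simple workaround is to retry the read with backoff at the start of the job. A minimal sketch (the function name and delay values are my own, not a Databricks API):

```python
import time

def read_with_retry(path, attempts=6, base_delay=1.0):
    """Read a file, retrying with exponential backoff.

    Useful when a FUSE-backed path (e.g. /Workspace/...) appears
    before its daemon has finished initializing.
    """
    for i in range(attempts):
        try:
            with open(path, "r") as f:
                return f.read()
        except OSError:
            if i == attempts - 1:
                raise  # still failing after the last attempt
            time.sleep(base_delay * (2 ** i))
```

With the defaults above this waits up to roughly a minute in total before giving up, which should comfortably cover a slow FUSE startup.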
3 weeks ago
+1 to @pradeep_singh
The Workspace FUSE (WSFS) daemons use ports 1015, 1017, and 1021 for communication between the driver and the executors. NFS tooling (hardcoded in glibc) can race with these ports during cluster startup, causing the FUSE daemons to fail to bind. This explains the intermittent nature: sometimes the port race doesn't happen and everything works fine.
On interactive clusters, the driver accesses /Workspace via a local FUSE mount. On multi-node job clusters, executors must RPC to the driver over those ports (a fundamentally different code path).
Check your VPC security group rules to ensure all TCP ports are open between nodes in the same security group (if you are using a managed VPC).
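To sanity-check connectivity on those ports from a notebook, a small TCP probe like the following could be run against the other nodes' private IPs. The helper name is illustrative and it uses only the standard library:

```python
import socket

def port_reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical usage against a peer node:
#   for p in (1015, 1017, 1021):
#       print(p, port_reachable("10.0.0.12", p))
```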
03-13-2026 10:50 PM
Can you try this method for reading workspace files?
https://docs.databricks.com/aws/en/files/workspace-interact
If you can, use Git folders instead of workspace files if the above method doesn't work for some reason.
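For reference, the linked docs describe reading workspace files with the standard Python file API. A minimal sketch of that approach, assuming the FUSE mount is ready (the root and relative path here are placeholders, not a Databricks API):

```python
import os

def read_workspace_file(relative_path, root="/Workspace"):
    """Read a workspace file via its local FUSE path using the
    standard Python file API.

    `relative_path` and `root` are illustrative -- substitute your
    own workspace path.
    """
    full = os.path.join(root, relative_path)
    with open(full, "r", encoding="utf-8") as f:
        return f.read()
```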