03-03-2026 11:16 AM
I used the same code with a classic cluster (Runtime 17.3 LTS ML, with the Spark config "spark.databricks.workspace.fileSystem.enabled true"), but I am not able to access files in the workspace with the following Python code:
03-03-2026 01:59 PM
Hi @NW1000,
I think you are seeing a permissions/identity difference between the two compute types, not a path or runtime issue.
On serverless interactive, the cluster runs as you, so it inherits your workspace permissions and can see everything under
/Workspace/Users/xxx@xxxxxx.com/reporting/R/utils
os.listdir shows your R files.
On your classic cluster, the code is likely running under a different principal (for example, a shared/standard cluster, or a job cluster “run as” a service principal). That identity either lacks read permission on the folder's contents or is restricted by the cluster's access mode.
In that case, the directory itself exists (so os.path.exists returns True), but the listing returns an empty result for that principal.
Try the below.
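As a hedged sketch of the diagnostic, the two checks answer different questions. On Databricks you would point this at your workspace folder; here an empty temp directory stands in, since on a restricted cluster the listing comes back empty because entries are filtered by the effective principal's permissions:

```python
import os
import tempfile

# On Databricks you would instead use your workspace folder, e.g.
#   base = "/Workspace/Users/xxx@xxxxxx.com/reporting/R/utils"
# An empty temp directory stands in here: the path can resolve
# (exists() is True) while the listing is empty.
base = tempfile.mkdtemp()

print("exists :", os.path.exists(base))   # True: the path resolves
print("entries:", os.listdir(base))       # []: nothing visible to list
```

Run the same two lines on serverless and on the classic cluster and compare the `entries` output between the two.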
If this answer resolves your question, could you mark it as “Accept as Solution”? That helps other users quickly find the correct fix.
Regards,
03-03-2026 03:29 PM
I did not realize that even though the user is in the group assigned to the classic cluster, workspace access is still not available. Thank you for your help!
03-08-2026 10:59 PM
Hi @NW1000,
This behavior comes down to how workspace file access and identity work differently between serverless compute and classic clusters.
SERVERLESS COMPUTE
Serverless interactive compute runs under your own identity. It inherits your workspace permissions directly, so os.listdir() on /Workspace/Users/you@example.com/... returns everything you personally have access to.
CLASSIC CLUSTERS AND ACCESS MODE
On a classic cluster, the effective identity depends on the cluster's access mode:
1. Single User (Assigned) mode: the cluster runs as the assigned user, and that user's permissions apply to workspace files. If the cluster is assigned to you, it works the same as serverless.
2. Shared mode (called "Standard" in newer workspaces): the cluster may run code under a different security context. In this mode, direct filesystem access to /Workspace paths can be restricted, and os.listdir() may return empty results even though os.path.exists() returns True. The directory is visible, but the listing is filtered by the effective principal's permissions.
3. No Isolation Shared mode (legacy): similar restrictions apply.
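One way to confirm which identity your code is actually running as on each compute type. The Spark SQL call is shown as a comment because it only works on a cluster; outside Databricks this sketch can only fall back to the OS-level user, which is enough to illustrate that "who am I running as" is a runtime question:

```python
import getpass

# In a Databricks notebook, the authoritative check is Spark SQL:
#   spark.sql("SELECT current_user()").show()
# Run it on serverless and on the classic cluster; if the two values
# differ, that difference explains the different os.listdir() results.

# Outside Databricks, only the OS-level user is available:
print(getpass.getuser())
```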
WHY os.path.exists() RETURNS TRUE BUT os.listdir() IS EMPTY
The /Workspace mount point is visible to all compute types, so the path itself resolves. However, the file listing is governed by workspace-level ACLs tied to the running identity. If the identity does not have CAN_READ on the individual files or the folder contents, the listing comes back empty.
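A toy model (plain Python, not Databricks internals; the group and file names are made up) of how a permission-filtered listing can come back empty even though the folder itself resolves for everyone:

```python
# Toy ACL model: each entry maps to the set of groups allowed to read it.
acls = {
    "helpers.R": {"admins"},
    "plots.R":   {"admins"},
}

def filtered_listdir(entries, principal_groups):
    """Return only the entries the principal may read,
    mimicking a permission-filtered workspace listing."""
    return [e for e in entries if acls.get(e, set()) & principal_groups]

all_entries = ["helpers.R", "plots.R"]

# A principal in "admins" sees everything...
print(filtered_listdir(all_entries, {"admins"}))    # ['helpers.R', 'plots.R']
# ...while a principal only in "analysts" sees nothing, even though
# the folder "exists" for both.
print(filtered_listdir(all_entries, {"analysts"}))  # []
```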
HOW TO FIX THIS
Option 1: Use a Single User cluster assigned to your account. This ensures the cluster identity matches your workspace permissions. In the cluster configuration, set Access Mode to "Single User" and assign your user.
Option 2: Move the files to a Shared workspace folder (e.g., /Workspace/Shared/reporting/R/utils) and grant appropriate permissions to the group or users who need access. You can set folder-level permissions in the Workspace browser by right-clicking the folder and selecting "Permissions."
Option 3: If you need to use a Shared cluster, consider storing the files in a Unity Catalog Volume instead of the workspace filesystem. Volumes provide fine-grained access control that works consistently across all compute types:
spark.read.text("/Volumes/catalog/schema/volume_name/utils/aaa_helpers.R")
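Because a Volume is also exposed as a local path on UC-enabled compute, plain Python file I/O works as well as the Spark reader. A minimal sketch, assuming placeholder catalog/schema/volume names:

```python
def read_script(path: str, encoding: str = "utf-8") -> str:
    """Read a text file (e.g. an R helper script) from any local path,
    including a Unity Catalog Volume mount such as
    /Volumes/catalog/schema/volume_name/utils/aaa_helpers.R."""
    with open(path, "r", encoding=encoding) as f:
        return f.read()

# On a UC-enabled cluster (placeholder names):
# source_text = read_script("/Volumes/catalog/schema/volume_name/utils/aaa_helpers.R")
```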
RELEVANT DOCUMENTATION
- Workspace files: https://docs.databricks.com/files/workspace.html
- Cluster access modes: https://docs.databricks.com/compute/configure.html#access-mode
- Unity Catalog Volumes: https://docs.databricks.com/volumes/index.html
- Workspace ACLs: https://docs.databricks.com/security/access-control/workspace-acl.html
Your Spark config setting (spark.databricks.workspace.fileSystem.enabled true) enables the workspace filesystem FUSE mount on classic clusters, which you have already done. The issue is purely one of identity and permissions, not the mount itself.
* This reply was drafted with an agent system I built, which researches responses against a wide set of documentation and previous memory. I personally review each draft for obvious issues and to monitor system reliability, and I update it when I detect drift, but there is still a small chance that something is inaccurate, especially if you are experimenting with brand-new features.
If this answer resolves your question, could you mark it as "Accept as Solution"? That helps other users quickly find the correct fix.