03-27-2025 07:19 AM
Hi everyone,
We’re facing a strange issue when trying to access a Databricks Volume from a job that is triggered via the Databricks REST API (not via Workflows). These jobs are executed using container services, which may be relevant, perhaps due to isolation constraints that prevent access to certain Databricks-native features.
The job runs, but when we try to perform a basic file operation like:
with open("/Volumes/folder_example/file.txt", "r") as f:
    data = f.read()

We get the following error:

PermissionError: [Errno 1] Operation not permitted: '/Volumes/folder_example/file.txt'

We increased the log level and got more detail in the traceback:
TaskException: Task in task_example failed: An error occurred while calling o424.load.
: com.databricks.backend.daemon.data.common.InvalidMountException: Error while using path /Volumes/folder_example/ for creating file system within mount at '/Volumes/folder_example/'.
at com.databricks.backend.daemon.data.common.InvalidMountException$.apply(DataMessages.scala:765)

From the error message and behavior, we suspect this could be related to how container services isolate the job's execution environment, possibly preventing it from accessing Unity Catalog Volumes, since these mounts may not be available or reachable outside of a native Databricks execution context.
However, we haven’t found official documentation clearly explaining whether Databricks Volumes can be accessed in jobs triggered this way, or under which conditions access is denied.
We can access the same data directly from S3 using the instance profile without any issues. This is expected, since the S3 path is accessed directly via the instance profile credentials.
• Are Volumes intentionally not accessible from container services?
• Is there any official documentation detailing execution contexts and their access to UC/Volumes/Workspace paths?
Thanks in advance! 🙂
Isi
04-08-2025 08:08 PM
Is the Volume managed by Unity Catalog?
Does the user/service principal set as the job's run_as have the required access on the volume?
04-12-2025 02:55 AM
Hello @rcdatabricks
Yes, it's a Unity Catalog volume and the permissions are in place. I am able to access it from a notebook or a job, but not when using these container services...
Any idea?
Thanks,
Isi
04-21-2025 06:48 AM
@Isi Are you using the databricks-sdk library to access these volumes?
example:
https://docs.databricks.com/aws/en/dev-tools/sdk-python#files-in-volumes
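Something along these lines, as a minimal sketch: it assumes the SDK can authenticate from the container (environment variables or a config profile) and uses a placeholder volume path. The Files API goes over REST, so it does not depend on the /Volumes FUSE mount being present:

from databricks.sdk import WorkspaceClient

# Authenticates via DATABRICKS_HOST / DATABRICKS_TOKEN env vars or ~/.databrickscfg
w = WorkspaceClient()

# Placeholder path: /Volumes/<catalog>/<schema>/<volume>/<file>
resp = w.files.download("/Volumes/main/default/folder_example/file.txt")
data = resp.contents.read().decode("utf-8")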
07-07-2025 04:02 AM
@Isi Did you find a solution to this issue? I am facing the exact same problem right now.
07-10-2025 01:32 PM
@rxj @Octavian1
No 😞 I didn't, but a volume is basically a door to the underlying storage, so we ended up reading the path directly with boto3.
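Roughly like this (a minimal sketch; the bucket and key are placeholders for the external location that backs the volume, and the credentials come from the instance profile):

import boto3

# boto3 picks up the instance profile credentials automatically
s3 = boto3.client("s3")

# Placeholder bucket/key for the storage location behind the volume
obj = s3.get_object(Bucket="my-bucket", Key="folder_example/file.txt")
data = obj["Body"].read().decode("utf-8")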
07-10-2025 06:25 AM
Check also whether the cluster used to run the job has the right access to the specific UC Volume.
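For example, from a notebook on a UC-enabled cluster you can inspect the grants with something like this (the three-level volume name is a placeholder, and the exact SHOW GRANTS syntax may vary with your DBR version):

# Run from a notebook on a UC-enabled cluster; the volume name is a placeholder
spark.sql("SHOW GRANTS ON VOLUME my_catalog.my_schema.folder_example").show(truncate=False)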
11 hours ago
Databricks Volumes (especially Unity Catalog (UC) volumes) have strict execution-context requirements and expect the workload to run on Databricks-managed clusters or in notebooks where the specialized file system and security context are present. Your description points to a known friction point: container services that do not fully replicate the Databricks runtime's native context cause access to Volumes to fail, even though credentialed access to raw S3 works fine.
Yes, this is generally by design as of late 2025. Databricks Volumes and UC volumes are mounted using filesystem drivers and security controls that expect a Databricks-managed execution context. When jobs are run in externalized container services (like Docker/Podman containers spun up outside the Databricks control plane), the Volumes file system layer is often unavailable:
Native "/Volumes" and Unity Catalog paths depend on Databricks' FUSE/DBFS/VFS overlays.
These overlays are only present in Databricks-provisioned environments (clusters, serverless compute, or the interactive Databricks notebook context).
Externalized workloads via Databricks Container Services or custom drivers (like REST API-triggered containers or Kubernetes pods) typically do NOT have direct access to these overlays, leading to permission errors or mount failures.
Databricks documentation does clarify this restriction, but it is often scattered:
• Unity Catalog and Volumes: The official Unity Catalog documentation notes that Volumes are only accessible from Databricks clusters with Unity Catalog enabled, and not from all external interfaces. Only Databricks interactive or workflow clusters can resolve "/Volumes/" paths.
• DBFS and REST API/Containers: The documentation also notes that paths like "/Volumes", "/mnt" (for legacy DBFS mounts), and related VFS overlays are not available in:
  • Jobs running on external custom Kubernetes clusters using Databricks Container Services.
  • Direct REST API containers, or any compute that runs outside the Databricks cluster control plane.
Recommended Approach: For workloads that need to access data both inside and outside Databricks, official best practice (illustrated in the sketch after this list) is to:
• Use direct access to cloud object storage (e.g., S3 paths) for jobs that may run outside native Databricks compute contexts.
• Avoid using "/Volumes" or "/mnt" mounts outside of Databricks-managed clusters.
Permissions: Even with Unity Catalog privileges and correct instance profile configuration, the FUSE driver and security context are missing in container service jobs; hence, access fails.
| Feature | Databricks Native Cluster | REST API Custom Container | External Container Service |
|---|---|---|---|
| Access /Volumes | Yes | No | No |
| Direct S3 access | Yes | Yes | Yes |
| Unity Catalog access | Yes | No | No |
In summary:
• Volumes and UC paths are intentionally unavailable in Databricks jobs executed via container services or externalized REST API-launched containers.
• Official documentation for Unity Catalog and data access paths explicitly limits access to Databricks-managed clusters and does not support mounting within containers that lack the Databricks FUSE/VFS environment.
• Direct S3 access remains available everywhere you have credentials and is the officially recommended approach for hybrid workloads.