Hi @echol,
This is a common scenario when multiple team members work with Databricks Asset Bundles, and there are a few approaches to solve it cleanly.
THE ROOT CAUSE
When Staff A deploys a bundle, the jobs and other resources are created with Staff A as the IS_OWNER. By default, only the owner (or a workspace admin) can modify ownership or redeploy those resources. When Staff B tries to deploy the same bundle, the CLI attempts to update resources that Staff B does not own, resulting in the "permission denied" error.
The key distinction: run_as controls who the job runs as at execution time, but it does not change who owns or can redeploy the underlying resource.
RECOMMENDED SOLUTION: USE A SERVICE PRINCIPAL FOR DEPLOYMENTS
The most robust fix for multi-team setups is to deploy bundles using a shared service principal rather than individual user identities. This way, the service principal owns all resources, and any team member authenticating as that service principal can redeploy.
1. Create a service principal in your Databricks workspace for your infrastructure team.
2. Grant the service principal appropriate permissions on your workspace (e.g., ability to create/manage jobs, clusters, etc.).
3. Configure your bundle to use the service principal for both deployment and run_as:
```yaml
bundle:
  name: my-bundle

run_as:
  service_principal_name: "your-service-principal-application-id"

permissions:
  - service_principal_name: "your-service-principal-application-id"
    level: CAN_MANAGE
  - group_name: "your-infrastructure-team-group"
    level: CAN_MANAGE

targets:
  production:
    mode: production
```
4. Each team member authenticates as the service principal when deploying (via environment variables, a CI/CD pipeline, or a Databricks CLI profile configured with the SP credentials).
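For the CLI-profile route, a minimal sketch of what a shared profile in `~/.databrickscfg` can look like (the profile name, workspace URL, and placeholder values here are assumptions, not values from your workspace):

```ini
; ~/.databrickscfg -- hypothetical profile for the shared service principal
[infra-sp]
host          = https://your-workspace.cloud.databricks.com
client_id     = <service-principal-application-id>
client_secret = <service-principal-oauth-secret>
```

With that in place, any team member can run `databricks bundle deploy -t production -p infra-sp` and the deployment is performed as the service principal. The same credentials can instead be supplied through the `DATABRICKS_HOST`, `DATABRICKS_CLIENT_ID`, and `DATABRICKS_CLIENT_SECRET` environment variables, which is the usual pattern in CI/CD.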
This is also the approach Databricks recommends for production mode bundles:
https://docs.databricks.com/aws/en/dev-tools/bundles/deployment-modes
CI/CD PIPELINE APPROACH
For teams that deploy frequently, the best practice is to automate deployments through a CI/CD pipeline (GitHub Actions, Azure DevOps, Jenkins, etc.) where the pipeline itself authenticates as the service principal. This removes the dependency on any individual user and ensures consistent ownership. Databricks supports workload identity federation for CI/CD authentication, which eliminates the need to manage secrets manually.
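As a rough sketch of what that looks like in GitHub Actions (the workflow name, branch, target, and secret names are assumptions; `databricks/setup-cli` is Databricks' action for installing the CLI):

```yaml
# Hypothetical workflow: deploy the bundle as the service principal on push to main
name: deploy-bundle
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: databricks/setup-cli@main   # installs the Databricks CLI
      - name: Deploy bundle as the service principal
        run: databricks bundle deploy -t production
        env:
          DATABRICKS_HOST: ${{ secrets.DATABRICKS_HOST }}
          DATABRICKS_CLIENT_ID: ${{ secrets.SP_CLIENT_ID }}         # SP application ID
          DATABRICKS_CLIENT_SECRET: ${{ secrets.SP_CLIENT_SECRET }} # SP OAuth secret
```

Because the pipeline holds the credentials, no individual user ever deploys under their own identity, so ownership stays consistent regardless of who merges.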
See the CI/CD integration guide:
https://docs.databricks.com/aws/en/dev-tools/bundles/ci-cd
ALTERNATIVE: GRANT CAN_MANAGE TO A GROUP
If switching to a service principal for deployment is not immediately feasible, you can grant CAN_MANAGE permission on the jobs to a shared group that includes all your infrastructure team members. Add this to your bundle configuration:
```yaml
permissions:
  - group_name: "core-infrastructure-team"
    level: CAN_MANAGE
```
This allows anyone in the group to edit and manage the job, but the original deployer remains the IS_OWNER. CAN_MANAGE grants the ability to edit job definitions, configuration, tasks, and permissions. For a truly clean multi-user deployment experience, though, the service principal approach is preferred because it avoids ownership ambiguity entirely.
See the permissions documentation:
https://docs.databricks.com/aws/en/dev-tools/bundles/permissions
TRANSFERRING EXISTING JOB OWNERSHIP
For jobs that have already been created by Staff A and need to be transferred, a workspace admin can change the job owner to the service principal. This can be done via the Jobs UI (Job Settings > Permissions > change the owner) or via the Jobs API. Once ownership is transferred to the service principal, subsequent deployments by anyone authenticating as that SP will succeed.
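Via the REST API, the transfer is a `PUT` to the job's permissions endpoint, `/api/2.0/permissions/jobs/{job_id}`. A sketch of the request body (the application ID and group name are placeholders; note a job has exactly one IS_OWNER, so this replaces the previous owner):

```json
{
  "access_control_list": [
    {
      "service_principal_name": "your-service-principal-application-id",
      "permission_level": "IS_OWNER"
    },
    {
      "group_name": "core-infrastructure-team",
      "permission_level": "CAN_MANAGE"
    }
  ]
}
```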
See job permissions documentation:
https://docs.databricks.com/aws/en/jobs/privileges
SUMMARY
For multi-team DAB setups, use a service principal as the deployment identity (ideally through a CI/CD pipeline). This ensures consistent resource ownership, eliminates "permission denied" errors across team members, and aligns with Databricks best practices for production deployments.
* This reply was drafted with an agent system I built, which researches and drafts responses from the documentation I have available and prior memory. I personally review each draft for obvious issues and to monitor system reliability, and I revise it when I detect drift, but there is still a small chance something is inaccurate, especially if you are experimenting with brand-new features.
If this answer resolves your question, could you mark it as "Accept as Solution"? That helps other users quickly find the correct fix.