Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Redeploy Databricks Asset Bundle created by others

echol
New Contributor II

Hi everyone,

Our team is using Databricks Asset Bundles (DAB) with a customized template to develop data pipelines. We have a core team that maintains the shared infrastructure and templates, and multiple product teams that use this template to develop and deploy their own pipelines.

One common use case is the following:

  • Staff A (from Team AA) develops a pipeline using the DAB template and deploys the bundle to the QA environment for testing.

  • Later, Staff B (from the core team) needs to apply some changes to the same bundle (for example, updating job configuration or infrastructure-related settings).

However, the job created by the bundle is owned by Staff A. When Staff B tries to redeploy the same bundle with modifications, they receive an error like:

Error: permission denied creating or updating job_XXXXX. For assistance, contact the owners of this project.

This appears to be a job permission issue. We have tried configuring a service principal for "run as", but the error still occurs.

Since the core team needs to make this type of change frequently, it's not practical for us to constantly coordinate with the original job owner (Staff A) to redeploy bundles.

Is it possible for one user to redeploy or update a DAB that was initially deployed by another user?
If so, what is the recommended pattern or best practice for handling this in a multi-team setup?

Thanks!


pradeep_singh
Contributor III

Assuming the workflow is being deployed with DABs in production mode, you can assign CAN_MANAGE permission on the workflow to a user group. Anyone in this group can then deploy the workflow with DABs.

In higher environments you should deploy your workflows using a service principal and production mode, not as individual users.
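As a rough sketch of what that looks like in databricks.yml (the host, application ID, and group name below are placeholders, not values from this thread):

```yaml
# Placeholder sketch of a production target in databricks.yml.
targets:
  prod:
    mode: production
    workspace:
      host: https://your-workspace.cloud.databricks.com
    # Deploy and run as a shared service principal, not an individual user.
    run_as:
      service_principal_name: "00000000-0000-0000-0000-000000000000"
    permissions:
      - group_name: "core-team"
        level: CAN_MANAGE
```

With this target, anyone in the group who authenticates as the service principal can run `databricks bundle deploy -t prod` without hitting ownership conflicts.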

Thank You
Pradeep Singh - https://www.linkedin.com/in/dbxdev

echol
New Contributor II

Hi Pradeep, thank you!!

The DAB is actually deployed in development mode to our QA workspace, so I guess this might be the limitation? We have already assigned the core team user group CAN_MANAGE at the bundle level, and the user group has CAN_MANAGE permission on the job in the DAB, but we still get the same error ("Error: permission denied creating or updating job_XXXXX.").

pradeep_singh
Contributor III

Development mode exists for a purpose; it is not a limitation. It is meant to let developers test their changes individually. If you plan to have the bundle deployed by multiple users, you will have to deploy in production mode.

Thank You
Pradeep Singh - https://www.linkedin.com/in/dbxdev

phipsi
New Contributor II

Hi, I have basically the same setup and question as @echol.
But how can developers test their changes individually in development mode if they can't deploy their own version of the bundle and run the jobs under their names?
Does every developer need their own bundle configuration? That can't be the solution, right?
The workflow with multiple developers in Databricks is very confusing to me, and I can barely find any resources on it.
I would be very thankful for some help on this topic 🙂

pradeep_singh
Contributor III

A development mode deployment gives each developer their own copy of the workflow; you don't need a separate configuration for each developer.
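As an illustrative sketch (target name and `default` flag are choices, not requirements), a development target can be as small as this; each developer who runs `databricks bundle deploy -t dev` gets their own isolated copy, with resource names prefixed with their username:

```yaml
# Placeholder sketch of a shared development target.
targets:
  dev:
    mode: development
    default: true
```

This is why no per-developer configuration is needed: the isolation comes from the deployment mode, not from separate config files.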

Thank You
Pradeep Singh - https://www.linkedin.com/in/dbxdev

SteveOstrowski
Databricks Employee

Hi @echol,

This is a common scenario when multiple team members work with Databricks Asset Bundles, and there are a few approaches to solve it cleanly.

THE ROOT CAUSE

When Staff A deploys a bundle, the jobs and other resources are created with Staff A as the IS_OWNER. By default, only the owner (or a workspace admin) can modify ownership or redeploy those resources. When Staff B tries to deploy the same bundle, the CLI attempts to update resources that Staff B does not own, resulting in the "permission denied" error.

The key distinction: run_as controls who the job runs as at execution time, but it does not change who owns or can redeploy the underlying resource.

RECOMMENDED SOLUTION: USE A SERVICE PRINCIPAL FOR DEPLOYMENTS

The most robust fix for multi-team setups is to deploy bundles using a shared service principal rather than individual user identities. This way, the service principal owns all resources, and any team member authenticating as that service principal can redeploy.

1. Create a service principal in your Databricks workspace for your infrastructure team.

2. Grant the service principal appropriate permissions on your workspace (e.g., ability to create/manage jobs, clusters, etc.).

3. Configure your bundle to use the service principal for both deployment and run_as:

 bundle:
   name: my-bundle

 run_as:
   service_principal_name: "your-service-principal-application-id"

 permissions:
   - service_principal_name: "your-service-principal-application-id"
     level: CAN_MANAGE
   - group_name: "your-infrastructure-team-group"
     level: CAN_MANAGE

 targets:
   production:
     mode: production
4. Each team member authenticates as the service principal when deploying (via environment variables, a CI/CD pipeline, or a Databricks CLI profile configured with the SP credentials).
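As one way to do step 4 (the profile name, host, and credentials below are placeholders), a CLI profile using OAuth machine-to-machine authentication can be added to ~/.databrickscfg, after which `databricks bundle deploy -p sp-deployer -t production` deploys as the service principal:

```ini
# Placeholder profile in ~/.databrickscfg for service principal OAuth (M2M).
[sp-deployer]
host          = https://your-workspace.cloud.databricks.com
client_id     = 00000000-0000-0000-0000-000000000000
client_secret = your-oauth-secret
```

Alternatively, the same values can be supplied via the DATABRICKS_HOST, DATABRICKS_CLIENT_ID, and DATABRICKS_CLIENT_SECRET environment variables, which is the usual pattern in CI/CD.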

This is also the approach Databricks recommends for production mode bundles:
https://docs.databricks.com/aws/en/dev-tools/bundles/deployment-modes

CI/CD PIPELINE APPROACH

For teams that deploy frequently, the best practice is to automate deployments through a CI/CD pipeline (GitHub Actions, Azure DevOps, Jenkins, etc.) where the pipeline itself authenticates as the service principal. This removes the dependency on any individual user and ensures consistent ownership. Databricks supports workload identity federation for CI/CD authentication, which eliminates the need to manage secrets manually.
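As a hedged example of such a pipeline (the secret names and branch are assumptions specific to this sketch), a minimal GitHub Actions workflow could install the Databricks CLI and deploy as the service principal:

```yaml
# Placeholder GitHub Actions workflow; secret names are assumptions.
name: deploy-bundle
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: databricks/setup-cli@main
      - name: Deploy bundle as service principal
        run: databricks bundle deploy -t production
        env:
          DATABRICKS_HOST: ${{ secrets.DATABRICKS_HOST }}
          DATABRICKS_CLIENT_ID: ${{ secrets.SP_CLIENT_ID }}
          DATABRICKS_CLIENT_SECRET: ${{ secrets.SP_CLIENT_SECRET }}
```

Because the pipeline always authenticates as the service principal, every deployment has the same owner regardless of which engineer merged the change.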

See the CI/CD integration guide:
https://docs.databricks.com/aws/en/dev-tools/bundles/ci-cd

ALTERNATIVE: GRANT CAN_MANAGE TO A GROUP

If switching to a service principal for deployment is not immediately feasible, you can grant CAN_MANAGE permission on the jobs to a shared group that includes all your infrastructure team members. Add this to your bundle configuration:

 permissions:
   - group_name: "core-infrastructure-team"
     level: CAN_MANAGE

This allows anyone in the group to edit and manage the job, but note that the original owner still remains as the IS_OWNER. CAN_MANAGE grants the ability to edit job definitions, configuration, tasks, and permissions. However, for a truly clean multi-user deployment experience, the service principal approach is preferred because it avoids ownership ambiguity entirely.

See the permissions documentation:
https://docs.databricks.com/aws/en/dev-tools/bundles/permissions

TRANSFERRING EXISTING JOB OWNERSHIP

For jobs that have already been created by Staff A and need to be transferred, a workspace admin can change the job owner to the service principal. This can be done via the Jobs UI (Job Settings > Permissions > change the owner) or via the Jobs API. Once ownership is transferred to the service principal, subsequent deployments by anyone authenticating as that SP will succeed.
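For the API route, the request body for the job permissions endpoint (`PUT /api/2.0/permissions/jobs/{job_id}`) might look like the following placeholder, which assigns IS_OWNER to the service principal and keeps CAN_MANAGE for the team group:

```json
{
  "access_control_list": [
    {
      "service_principal_name": "00000000-0000-0000-0000-000000000000",
      "permission_level": "IS_OWNER"
    },
    {
      "group_name": "core-infrastructure-team",
      "permission_level": "CAN_MANAGE"
    }
  ]
}
```

Note that PUT replaces the job's full set of direct permissions, so include every entry you want to keep; use PATCH instead to add or update entries incrementally.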

See job permissions documentation:
https://docs.databricks.com/aws/en/jobs/privileges

SUMMARY

For multi-team DAB setups, use a service principal as the deployment identity (ideally through a CI/CD pipeline). This ensures consistent resource ownership, eliminates "permission denied" errors across team members, and aligns with Databricks best practices for production deployments.

* This reply used an agent system I built to research and draft this response based on the wide set of documentation I have available and previous memory. I personally review the draft for any obvious issues and for monitoring system reliability and update it when I detect any drift, but there is still a small chance that something is inaccurate, especially if you are experimenting with brand new features.

If this answer resolves your question, could you mark it as "Accept as Solution"? That helps other users quickly find the correct fix.