
Databricks Asset bundle

Aria
New Contributor III

Hi,

I am new to Databricks. We are trying to use Databricks Asset Bundles for code deployment. I have spent a lot of time on this, but many things are still not clear to me.

Can we change the target path of the deployed notebooks from /shared/.bundle/* to something like a shared folder in the workspace, or a repo in a higher environment?

All the examples deploy jobs, Delta Live Tables, etc. How can I deploy a simple notebook without any job?
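Something like the sketch below is roughly what I have in mind (the target name and folder here are made up, and I am not sure this is even the right way to express it):

targets:
  dev:
    workspace:
      # deploy under a shared workspace folder instead of /Shared/.bundle
      root_path: /Workspace/Shared/my-project

My understanding is that everything in the bundle folder gets uploaded on deploy even if no job references it, but please correct me if that is wrong.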

 

#databricksassetbundle #asset #deployment #bundle #CICD

 

2 REPLIES

Aria
New Contributor III

Thank you @Retired_mod for responding. 

I have also tried using a custom target. I am using Databricks Asset Bundles from my local machine and have not configured it in Azure DevOps.

Let's say I use the below custom path in the target:

 root_path: /Shared/Admin
After deployment, the folder below is created under /Shared/Admin, and my notebook ends up in a resources folder inside files, mirroring where it sits under resources in my Visual Studio Code project. I provided a notebook path, but that did not help. Can I just deploy my notebook directly inside the Admin folder?
 
[Screenshot Aria_0-1702047979208.png: deployed folder structure under /Shared/Admin]
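If I read the bundle schema correctly, workspace also accepts a file_path setting that controls where the synced files land, so something like the sketch below might flatten the extra files/resources nesting (untested, and the target name is made up):

targets:
  dev:
    workspace:
      root_path: /Shared/Admin
      # assumption: pointing file_path at the root keeps notebooks directly under Admin
      file_path: ${workspace.root_path}

Is that the intended way to do it?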


Emil
New Contributor III

Hi @Retired_mod,

Thank you for your post. I thought it would solve my issues too, but after reading your suggestion there was nothing new for me, because I have already done exactly that. Here is what I have done so you or anyone else can replicate it:

1. I used databricks bundle init to create a bundle with the default Python template.
2. I modified the generated databricks.yml to have my custom target names, as per your post.

I have the Repo main branch set for both the production and development mode environments. When I run databricks bundle deploy -t my_env, I expect the code from the main branch of the repo to be used, but I cannot get that working...

Here is the error I get:

databricks bundle deploy -t dev_001 --var="shared_autoscaling_cluster_id=****"
Starting upload of bundle files
Error: /Repos/Actuarial/CLUKDataPlatform.Bulkmarker.Notebooks/files does not exist; please create it first

Here is my bundle yaml for the custom target:
targets:
  dev_001:  # target name matches the -t flag used above
    mode: production
    git:
      branch: main
    default: true
    workspace:
      root_path: /Repos/MyProject/AzDO.Repo.Notebooks
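In case it helps anyone reproduce this, here is the same target rewritten to deploy to a plain workspace folder instead of /Repos (the folder name is just an example); I have not verified whether that sidesteps the "does not exist" error:

targets:
  dev_001:
    git:
      branch: main
    workspace:
      # a regular workspace path, rather than a /Repos path that must already exist
      root_path: /Workspace/Shared/AzDO.Repo.Notebooks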

Here is my resource basic notebook yaml:
resources:
  jobs:
    A_basic_inputs:
      name: A_basic_inputs
      deployment_config:
        no_package: true
      git_source:
        git_provider: azureDevOpsServices
        git_branch: main
      tasks:
        - task_key: A_basic_inputs
          notebook_task:
            notebook_path: subfolder/A_basic_inputs.py
            source: WORKSPACE
          existing_cluster_id: ${var.shared_autoscaling_cluster_id}
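And here is a variant of the job resource I have been experimenting with: it drops git_source and uses a bundle-relative notebook path, as in the default Python template, in case mixing git_source with source: WORKSPACE is part of the problem (not verified):

resources:
  jobs:
    A_basic_inputs:
      name: A_basic_inputs
      tasks:
        - task_key: A_basic_inputs
          notebook_task:
            # relative path, rewritten to the deployed workspace path at deploy time
            notebook_path: ./subfolder/A_basic_inputs.py
          existing_cluster_id: ${var.shared_autoscaling_cluster_id}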

Please, please, can someone tell me what is wrong and how to configure the YAML to point to the Repo?
Any pointers to a working code example or to documentation/a YAML schema reference are much appreciated.

Thanks!

Emil
