This post walks through setting up CI/CD for Databricks workflow jobs.
Below are the two essential components needed for a complete CI/CD setup of workflow jobs.
- Databricks Asset Bundles (DABs)
- Azure DevOps pipeline
Databricks Asset Bundle (from the local terminal):
We use Databricks Asset Bundles (DABs) via the Databricks CLI to deploy workflows.
Please note that Databricks Asset Bundles are available only in newer versions of the Databricks CLI (v0.205.0 and above); the legacy CLI will not work.
Run the command:
databricks bundle init
When prompted in the terminal, select the Python project template and provide a project name (the workflow will be created with this name, e.g. demo_wf).
This will create a folder named after the project, containing the folder structure with all the components needed for the workflow.
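For reference, the default Python template generates a layout roughly like the following (a sketch only; the exact files depend on the template and CLI version):

```
demo_wf/
├── databricks.yml        # bundle name, targets, and workspace settings
├── resources/
│   └── demo_wf_job.yml   # workflow (job) definition and task flow
├── src/                  # notebooks and source code for the tasks
└── README.md
```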
- Navigate to the project directory.
cd demo_wf
We place the notebooks, in .ipynb format, inside the src folder; each notebook becomes a task in the workflow. DLT pipelines and libraries can also be added as individual tasks.
- Inside the resources folder, there is a YAML file (<project_name>_job.yml, i.e. demo_wf_job.yml) that defines the task flow; a sample is shown below.
tasks:
  - task_key: task1
    job_cluster_key: job_cluster
    notebook_task:
      notebook_path: ../src/notebook_1.ipynb
  - task_key: task2
    job_cluster_key: job_cluster
    notebook_task:
      notebook_path: ../src/notebook_2.ipynb
    depends_on:
      - task_key: task1
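Since DLT pipelines and libraries can also serve as tasks, here is a hedged sketch of how such tasks could be declared alongside the notebook tasks (the pipeline resource name demo_wf_pipeline, the wheel package demo_wf, and the entry point main are assumptions for illustration, not part of the generated template):

```
tasks:
  - task_key: dlt_task
    pipeline_task:
      pipeline_id: ${resources.pipelines.demo_wf_pipeline.id}
  - task_key: wheel_task
    job_cluster_key: job_cluster
    python_wheel_task:
      package_name: demo_wf
      entry_point: main
    libraries:
      - whl: ../dist/*.whl
```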
- From the project directory (demo_wf), run the command below. Any syntax errors in the bundle configuration will be reported here.
databricks bundle validate
- Finally, run the command below to deploy the workflow in dev mode.
databricks bundle deploy -t dev
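The dev target referenced by -t dev is defined in databricks.yml at the project root. A minimal sketch, with the workspace URL left as a placeholder:

```
bundle:
  name: demo_wf

targets:
  dev:
    mode: development
    default: true
    workspace:
      host: https://<your_workspace>.azuredatabricks.net
```

In development mode, deployed resource names are prefixed with your username, so each developer gets an isolated copy of the workflow.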
- The steps above cover manual deployment from the terminal. To complete the CI/CD setup, the same commands now need to run from an Azure DevOps build pipeline.
Databricks Asset Bundle using an Azure DevOps pipeline:
Below are the steps to set up the Azure DevOps pipeline.
- We need an Azure virtual machine to act as the agent for our DevOps pipeline. Create a virtual machine in Azure, assign a network security group, and set an inbound rule to allow SSH (port 22) from your IP address so you can connect over SSH and set up the machine.
- We must install the databricks-cli (latest version, 0.212.4 at the time of writing) on this virtual machine.
- Note: while creating the VM, you will be asked to download a .pem file. Keep it safe, as it is needed to connect to the VM over SSH.
The next step is to install the databricks-cli on this VM and configure the machine as an agent in your Azure agent pool.
Connect to VM using the command:
ssh -i <path_to_pem>/<file_name>.pem <username>@<hostname>
(If the inbound rule created earlier allows SSH from your IP address, the connection will succeed.)
Install the databricks-cli using the command below.
curl -fsSL https://raw.githubusercontent.com/databricks/setup-cli/main/install.sh | sudo sh
If you get any errors about unzip, install it using the commands below and then re-run the curl command above:
sudo apt update -y
sudo apt install unzip -y
- Register the VM as a self-hosted agent in an Azure DevOps agent pool (Project settings > Agent pools).
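Registering a Linux VM in an agent pool roughly follows the commands below (a sketch: the agent version, organization URL, and PAT token are placeholders; check Project settings > Agent pools > New agent for the current download URL). The pool name my-demo-pool matches the one referenced later in the pipeline YAML:

```
# Download and extract the Azure Pipelines agent
mkdir myagent && cd myagent
curl -O https://vstsagentpackage.azureedge.net/agent/3.232.0/vsts-agent-linux-x64-3.232.0.tar.gz
tar zxvf vsts-agent-linux-x64-3.232.0.tar.gz

# Configure against your organization and pool (requires a Personal Access Token)
./config.sh --url https://dev.azure.com/<your_org> --auth pat --token <your_pat> --pool my-demo-pool

# Run interactively, or install as a service so the agent survives reboots
./run.sh
# sudo ./svc.sh install && sudo ./svc.sh start
```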
- The next step is to create azure-pipelines.yml (the DevOps pipeline definition). The YAML should look like the one below.
# Starter pipeline
# Start with a minimal pipeline that you can customize to build and deploy your code.
# Add steps that build, run tests, deploy, and more:
# https://aka.ms/yaml
trigger:
- main

pool: my-demo-pool

steps:
- script: echo "Hello, world!"
  displayName: 'Run a one-line script'

- script: |
    echo Add other tasks to build, test, and deploy your project.
    echo See https://aka.ms/yaml
  displayName: 'Run a multi-line script'

- task: Bash@3
  inputs:
    targetType: 'inline'
    script: |
      # Write the Databricks CLI auth profile, then validate and deploy the bundle
      echo "[DEFAULT]" > ~/.databrickscfg
      echo "host = <workspace_host_url>" >> ~/.databrickscfg
      echo "azure_workspace_resource_id = <azure_workspace_resource_id>" >> ~/.databrickscfg
      echo "azure_tenant_id = <tenant_id>" >> ~/.databrickscfg
      echo "azure_client_id = <spn_client_id>" >> ~/.databrickscfg
      echo "azure_client_secret = <client_secret>" >> ~/.databrickscfg
      databricks bundle validate
      databricks bundle deploy -t dev
After all this setup is done, verify that the CI/CD works as expected: the VM agent should be up and running under the Agents panel (Project settings > Agent pools > Agents tab), and the main branch of the Azure DevOps repo should contain the bundle folder structure described earlier. (In practice, store the client secret as a secret pipeline variable or in a key vault rather than hard-coding it in the YAML.)
Any change pushed to the Azure DevOps main branch should then be deployed and reflected in the workflow jobs of your Databricks workspace.
Sashank Kotta