CI/CD for Databricks workflow jobs
06-14-2024 05:17 AM
This post walks through setting up CI/CD for Databricks workflow jobs.
Below are the two essential components needed for a complete CI/CD setup of workflow jobs:
- Databricks Asset Bundles (DABs)
- Azure DevOps pipeline
Databricks Asset Bundles (from a local terminal):
We use Databricks Asset Bundles (DABs) via the Databricks CLI to deploy workflows.
Please note that Databricks Asset Bundles are available only in the new Databricks CLI (v0.205.0 and above); the legacy CLI will not work.
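Before going further, it is worth confirming the installed CLI is the new one. A small sketch (the installed version shown is an example; substitute the actual output of `databricks -v`):

```shell
# Compare the installed CLI version against the minimum needed for bundles.
required="0.205.0"
installed="0.212.4"   # e.g. taken from: databricks -v

# sort -V orders version strings numerically; if the required version
# sorts first, the installed version is new enough.
if [ "$(printf '%s\n%s\n' "$required" "$installed" | sort -V | head -n1)" = "$required" ]; then
  echo "CLI supports Databricks Asset Bundles"
else
  echo "Legacy CLI detected: upgrade required"
fi
```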
- Run the command:
databricks bundle init
In the local terminal, select the Python project template and provide a project name when prompted (the workflow will be created with this name, e.g. demo_wf).
This will generate a folder with the project name containing all the components needed for the workflow.
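A typical generated layout looks like the sketch below (exact files vary by template version and the options chosen during `databricks bundle init`):

```
demo_wf/
├── databricks.yml        # top-level bundle configuration
├── resources/
│   └── demo_wf_job.yml   # job (task flow) definition
├── src/
│   └── notebook.ipynb    # notebooks that become workflow tasks
└── README.md
```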
- Navigate to the project directory:
cd demo_wf
- Place the notebooks, in .ipynb format, inside the src folder. These notebook files become the respective tasks in the workflow. DLT pipelines and libraries can also be added as individual tasks.
- Inside the resources folder, there is a YAML file (<project_name>_job.yml, i.e. demo_wf_job.yml) to define the task flow; a sample task flow is shown below.
tasks:
  - task_key: task1
    job_cluster_key: job_cluster
    notebook_task:
      notebook_path: ../src/notebook_1.ipynb
  - task_key: task2
    job_cluster_key: job_cluster
    notebook_task:
      notebook_path: ../src/notebook_2.ipynb
    depends_on:
      - task_key: task1
- From the project directory (demo_wf), run the command below. Any syntax errors will be flagged here.
databricks bundle validate
- Finally, run the command to deploy the workflow in dev mode.
databricks bundle deploy -t dev
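Once deployed, the job can also be triggered from the same terminal. A hedged example, assuming demo_wf_job is the job resource key defined in resources/demo_wf_job.yml:

```
databricks bundle run -t dev demo_wf_job
```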
- The steps above cover a manual deployment from the terminal. The same commands now need to be run from an Azure DevOps build pipeline, which completes the CI/CD setup.
Databricks Asset Bundle using Azure DevOps pipeline:
Below are the steps to set up the Azure DevOps pipeline.
- We need an Azure virtual machine to act as the agent for our DevOps pipeline. Create a virtual machine in Azure, assign a network security group, and set inbound rules to allow SSH (port 22) from your IP address so you can connect over SSH and set up the machine.
- We must install the databricks-cli (v0.205.0 or above; 0.212.4 at the time of writing) on this virtual machine.
- Note: While creating the VM, you will be asked to download a .pem file (keep it safe; it is needed when connecting to the VM through SSH).
- The next step is to install the databricks-cli on this VM and configure the machine as an agent in your Azure agent pool.
- Connect to the VM using the command:
ssh -i <path_to_pem>/<file_name>.pem <username>@<hostname>
(If the inbound rules created earlier allow SSH from your IP address, the connection will succeed.)
- Install the databricks-cli using the command below.
curl -fsSL https://raw.githubusercontent.com/databricks/setup-cli/main/install.sh | sudo sh
- If you get any errors concerning unzip, install it using the commands below and re-run the curl command above:
sudo apt update -y
sudo apt install unzip -y
- Set up the VM to run as an agent in the Azure agent pool following the steps below.
Create a Self-hosted Agent in Azure DevOps.
To create a self-hosted agent, go to Project Settings (Bottom left) and select the Agent pools option under the Pipelines section. Press the Add pool button and configure the agent:
- Select the Pool type as Self-hosted
- Add a descriptive pool Name (in this example, my-demo-pool)
- Check the Pipeline permissions box so you do not need to grant permissions manually
- Click on the Create button
- For more details, refer to the link: Create or manage agent pool
- Now, navigate to the freshly created agent pool. On the top-right corner, press the New Agent button. You can create Windows, macOS, and Linux agents; based on your VM, select the appropriate OS and follow the instructions. In our case we used an Ubuntu image, so it will be a Linux agent:
- Connect to the VM using SSH.
- Download and extract the agent: fetch the agent archive, then extract it into a folder using the Linux commands below in the VM terminal.
mkdir myagent          # create a directory named myagent
cd myagent             # navigate into it
wget https://vstsagentpackage.azureedge.net/agent/3.236.1/vsts-agent-linux-x64-3.236.1.tar.gz   # download the Linux agent archive from the link given in the instructions
tar zxvf vsts-agent-linux-x64-3.236.1.tar.gz   # extract the agent archive (wget saved it to the current directory)
- Configure the agent: run the config script in the VM terminal and answer the questions as prompted.
./config.sh
- Server URL: Copy and paste the organization URL, which looks like the following: https://dev.azure.com/<my-organization-name>
- Personal Access Token (PAT): Go to the Personal Access Tokens option under the User Settings icon. Ensure you generate a PAT with Read & manage access to the Agent pools.
- Agent pool name: the newly created pool, my-demo-pool in our case
- Agent Name: give a meaningful name or keep the default
- Work folder: Press enter for the default
- Agent as Service: Press enter to use the default.
- Run the agent by executing the run script.
./run.sh
- Once done, you can see that the Agent is up and running under the Agents panel. The self-hosted agent is connected to Azure DevOps and listens for new jobs.
- For more details, refer to the link: Self-hosted agent
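The interactive prompts above can also be scripted, and the agent can survive reboots as a service. A hedged sketch, assuming the documented unattended flags of the Azure Pipelines agent and placeholder values for the organization URL, PAT, and agent name:

```shell
# Unattended agent configuration (run from the myagent directory).
./config.sh --unattended \
  --url https://dev.azure.com/<my-organization-name> \
  --auth pat --token <PAT> \
  --pool my-demo-pool \
  --agent <agent_name> \
  --acceptTeeEula

# Instead of keeping ./run.sh in a foreground session, install the agent
# as a systemd service using the scripts shipped in the agent package:
sudo ./svc.sh install
sudo ./svc.sh start
```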
- The next step is to create azure-pipelines.yml (the DevOps pipeline). The YAML should look like the one below.
# Starter pipeline
# Start with a minimal pipeline that you can customize to build and deploy your code.
# Add steps that build, run tests, deploy, and more:
# https://aka.ms/yaml
trigger:
- main

pool: my-demo-pool

steps:
- script: echo "Hello, world!"
  displayName: 'Run a one-line script'
- script: |
    echo Add other tasks to build, test, and deploy your project.
    echo See https://aka.ms/yaml
  displayName: 'Run a multi-line script'
- task: Bash@3
  inputs:
    targetType: 'inline'
    script: |
      # Write your commands here
      echo 'Hello world'
      touch ~/.databrickscfg
      echo "[DEFAULT]" > ~/.databrickscfg
      echo "host = <workspace_host_url>" >> ~/.databrickscfg
      echo "azure_workspace_resource_id = <workspace_resource_id>" >> ~/.databrickscfg
      echo "azure_tenant_id = <tenant_id>" >> ~/.databrickscfg
      echo "azure_client_id = <spn_client_id>" >> ~/.databrickscfg
      echo "azure_client_secret = <client_secret>" >> ~/.databrickscfg
      cat ~/.databrickscfg
      databricks bundle validate
      databricks bundle deploy -t dev
After all this setup is done, to ensure the CI/CD works as expected, the VM agent should be up and running under the Agents panel (Project settings > Agent pools > Agents tab), and the bundle folder structure should be committed to the main branch of the Azure DevOps repository.
Any changes to the Azure DevOps main branch will then be deployed/reflected in the workflow jobs of your Databricks workspace.
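As a final sanity check from the terminal, you can list the jobs in the workspace with the new CLI. In dev mode the bundle prefixes the job name with your username, so filtering on the project name (demo_wf here) is a reasonable way to confirm the deployment landed:

```
databricks jobs list | grep demo_wf
```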

