Hi everyone,
I am using the default Databricks Asset Bundle template. Everything works, except that I can't get stage-dependent variables to work.
I have a bundle config file that looks like the one below, where I have defined variables for each stage:
bundle:
  name: xxxxx

include:
  - resources/*.yml

targets:
  # The 'dev' target, for development purposes. This target is the default.
  dev:
    variables:
      env:
        description: target environment for fetching secrets from the relevant keyvault scope
        default: test
    # We use 'mode: development' to indicate this is a personal development copy:
    # - Deployed resources get prefixed with '[dev my_user_name]'
    # - Any job schedules and triggers are paused by default
    # - The 'development' mode is used for Delta Live Tables pipelines
    mode: development
    default: true
    workspace:
      host: adb-xxxxx.azuredatabricks.net

  # The 'prod' target, used for production deployment.
  prod:
    variables:
      env:
        description: target environment for fetching secrets from the relevant keyvault scope
        default: prod
    # We use 'mode: production' to indicate this is a production deployment.
    # Doing so enables strict verification of the settings below.
    mode: production
    workspace:
      host: adb-xxxxx.azuredatabricks.net
      # We always use /Users/xxx@xxx.dk for all resources to make sure we only have a single copy.
      # If this path results in an error, please make sure you have a recent version of the CLI installed.
      root_path: /Users/xxx@xxx.dk/.bundle/${bundle.name}/${bundle.target}
    run_as:
      # This runs as xxx@xxx.dk in production. We could also use a service principal here,
      # see https://docs.databricks.com/dev-tools/bundles/permissions.html.
      user_name: xxx@xxx.dk
However, when I run e.g. "databricks bundle deploy", I get the error "Error: variable env is not defined but is assigned a value".
I have also tried setting the variable explicitly with --var="env=test", with the same result.
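For reference, the full command I tried was along these lines:

databricks bundle deploy --var="env=test"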
I reference the variable from the base_parameters slot in the following resource .yml file:
# The main job for xxxx.
resources:
  jobs:
    xxxxx:
      name: xxxx

      schedule:
        # Run at 09:00:00 AM, every Saturday, every month
        quartz_cron_expression: '0 0 9 ? * SAT *'
        timezone_id: Europe/Amsterdam

      email_notifications:
        on_failure:
          - xxxx

      tasks:
        - task_key: notebook_task
          job_cluster_key: job_cluster
          notebook_task:
            notebook_path: ../src/notebook.ipynb
            base_parameters:
              ENV: ${var.env}

      job_clusters:
        - job_cluster_key: job_cluster
          new_cluster:
            spark_version: 13.3.x-scala2.12
            node_type_id: Standard_D3_v2
            autoscale:
              min_workers: 1
              max_workers: 1
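Rereading the variables docs, my guess is that custom variables have to be declared once at the top level of databricks.yml, and that targets can then only override their values, not redeclare them. Something like the sketch below (the per-target override syntax is my assumption from the docs, not something I have verified):

variables:
  env:
    description: target environment for fetching secrets from the relevant keyvault scope
    default: test

targets:
  dev:
    default: true
    # dev would just use the top-level default of 'test'
  prod:
    variables:
      env: prod  # override the value only; no nested description/default here

Is that the right approach, or is there a way to keep the full variable definition inside each target?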
Help would be much appreciated.
Kind regards,
Niels