Best practice for creating configuration YAML files for each workspace environment?
02-12-2025 05:39 AM
Hi Community,
My team and I are working on refactoring our DAB repository, and we’re considering creating a configuration folder based on our environments—Dev, Staging, and Production workspaces.
What would be a common and best practice for structuring these configuration files by environment? For example, organizing settings for different cluster types, job configurations, and other environment-specific parameters.
Any suggestions or recommendations?
3 weeks ago
Heh, nobody answered in a month 😄 I have a similar question. I've seen some people store config data in a SQL database, but that seems overcomplicated to me. I'm looking for better ways to do it, but having a bunch of config files is questionable as well...
3 weeks ago
How about a different YAML file per environment, living within the repo, for each dataset/workflow?
2 weeks ago
Hi @jeremy98 and all,
I agree with @saurabh18cs. Having configuration files for each deployment target is a very convenient and manageable solution. Since I couldn't find a plain example showing the project structure, I created one here: https://github.com/koji-kawamura-db/dab_targets_sample
The databricks.yml file can be straightforward; it just has the "include" setting. The base job configuration is defined in resources/dab_targets_sample.job.yml, and the targets directory contains dev.yml and prod.yml files that override task and cluster configurations.
include:
- resources/*.yml
- targets/*.yml
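For orientation, the layout described above, using only the file names mentioned in this post, looks roughly like this:

dab_targets_sample/
├── databricks.yml                    # just the include setting shown above
├── resources/
│   └── dab_targets_sample.job.yml    # base job configuration
└── targets/
    ├── dev.yml                       # dev target overrides
    └── prod.yml                      # prod target overrides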
Job configuration in the dev target:
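As a rough sketch only (the job key, task key, and parameter name are illustrative assumptions, not copied from the repo), targets/dev.yml could look something like this:

targets:
  dev:
    resources:
      jobs:
        dab_targets_sample_job:      # assumed job key, based on the file name
          tasks:
            - task_key: main         # tasks are merged with the base job by task_key
              new_cluster:
                num_workers: 1       # small cluster for dev
              notebook_task:
                base_parameters:
                  env: dev           # environment-specific task parameter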
In the prod target:
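And targets/prod.yml, again only a sketch, mirrors it with production-sized values:

targets:
  prod:
    resources:
      jobs:
        dab_targets_sample_job:
          tasks:
            - task_key: main
              new_cluster:
                num_workers: 8       # more workers for prod
              notebook_task:
                base_parameters:
                  env: prod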
I hope this helps!
2 weeks ago
Hi @koji_kawamura,
Seems like you're talking about clusters and working-environment configuration. But I guess @jeremy98 is asking about job-related configs, like table/column names for Spark queries, paths to source/target files, the name of the column used for partitioning the resulting Delta table, etc.
2 weeks ago
Hi dmy,
Yes, more general, not only cluster configurations! But we have created a custom example where we set this up, and it's working fine :). Btw, thanks Koji! Thanks all 🙂
2 weeks ago
@jeremy98 asked about "organizing settings for different cluster types, job configurations, and other environment-specific parameters", so I provided an example showing how to change cluster configurations (I changed the number of worker nodes) and also task parameters based on the environment ("dev" and "prod").
The job or task parameters can be used to specify environment-specific table/column names or source/target file paths if needed. I updated the example project to illustrate how to utilize these env-specific parameters from the notebook executed by the job. Please see the screenshot below as a quick reference.
To manage data organization structures across multiple environments, we can also (or rather, we should) use Unity Catalog. Separating catalogs into dev/staging/prod and switching the catalog name based on the execution target environment is a common pattern.
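As a hedged sketch of how these pieces can fit together (the variable name catalog, the parameter names, and the job key are illustrative assumptions), a bundle variable can carry the catalog name per target and feed environment-specific table names and paths into job parameters, which the notebook can then read as widget values (e.g. with dbutils.widgets.get):

variables:
  catalog:
    description: Unity Catalog catalog for the current environment
    default: dev_catalog

targets:
  staging:
    variables:
      catalog: staging_catalog
  prod:
    variables:
      catalog: prod_catalog

resources:
  jobs:
    dab_targets_sample_job:
      parameters:
        - name: source_table
          default: ${var.catalog}.bronze.events            # env-specific table reference
        - name: output_path
          default: /Volumes/${var.catalog}/landing/output  # env-specific path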
I hope the example covers @jeremy98's original question and also potentially covers what @dmytro_starov needs. If not, please elaborate. Thanks!

