Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Best practice for creating configuration YAML files for each workspace environment?

jeremy98
Contributor III

Hi Community,

My team and I are working on refactoring our DAB repository, and we’re considering creating a configuration folder based on our environments—Dev, Staging, and Production workspaces.

What would be a common and best practice for structuring these configuration files by environment? For example, organizing settings for different cluster types, job configurations, and other environment-specific parameters.

Any suggestions or recommendations?

6 REPLIES

dmytro_starov
New Contributor II

Heh, nobody answered in a month 😄 I have a similar question. I've seen some folks store config data in a SQL database, but that seems overcomplicated to me. I'm looking for better ways to do it, but keeping a bunch of config files is questionable as well...

dimkanividimka

saurabh18cs
Valued Contributor III

How about a different YAML file per environment, living within the repo, for each dataset/workflow?

koji_kawamura
Databricks Employee

Hi @jeremy98 and all,

I agree with @saurabh18cs. Having configuration files for each deployment target is a very convenient and manageable solution. Since I couldn't find a plain example showing the project structure, I created one here: https://github.com/koji-kawamura-db/dab_targets_sample

 

The databricks.yml file can be straightforward; it just has the "include" setting. The base job configurations are defined in resources/dab_targets_sample.job.yml, and the targets directory contains dev.yml and prod.yml files that override task and cluster configurations.

include:
- resources/*.yml
- targets/*.yml  
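
For orientation, the base job definition in resources/dab_targets_sample.job.yml could look roughly like the sketch below (the job name, notebook path, Spark version, and node type here are illustrative assumptions, not copied from the repo):

resources:
  jobs:
    dab_targets_sample_job:          # hypothetical job name
      name: dab_targets_sample_job
      tasks:
        - task_key: main
          notebook_task:
            notebook_path: ../src/sample_notebook.ipynb   # hypothetical notebook path
          job_cluster_key: main_cluster
      job_clusters:
        - job_cluster_key: main_cluster
          new_cluster:
            spark_version: 15.4.x-scala2.12
            node_type_id: i3.xlarge
            num_workers: 1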

Job configuration in the dev target:

[Screenshot: job configuration deployed with the dev target]
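
As a rough sketch of what such a dev override in targets/dev.yml might look like (the values are illustrative; job clusters declared under a target are matched with the base definition by job_cluster_key and merged, with the target taking precedence):

targets:
  dev:
    mode: development
    default: true
    resources:
      jobs:
        dab_targets_sample_job:
          job_clusters:
            - job_cluster_key: main_cluster
              new_cluster:
                num_workers: 1      # small cluster for dev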

In the prod target:

[Screenshot: job configuration deployed with the prod target]
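
And a matching sketch for targets/prod.yml, where essentially only the values change (the worker count is an illustrative number):

targets:
  prod:
    resources:
      jobs:
        dab_targets_sample_job:
          job_clusters:
            - job_cluster_key: main_cluster
              new_cluster:
                num_workers: 8      # larger cluster for prod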

I hope this helps!

Hi @koji_kawamura,

It seems like you're talking about clusters and working-environment configuration. But I guess @jeremy98 is asking about job-related configs, like table/column names for Spark queries, paths to source/target files, the name of the column used for partitioning the resulting Delta table, etc.
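
For concreteness, a per-environment job config of that kind might look roughly like the sketch below (every table name, path, and column here is made up for illustration):

# e.g. conf/dev/orders.yml, loaded by the pipeline code for the dev environment
source_path: /Volumes/dev/raw/orders
target_table: dev.silver.orders
partition_column: order_date
select_columns:
  - order_id
  - customer_id
  - order_date
  - amount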

dimkanividimka

Hi dmy,
Yes, more general settings, not only cluster configurations! But we have created a custom example where we set this up, and it's working fine :). Btw, thanks Koji! Thanks all 🙂

Hi @dmytro_starov,

@jeremy98 asked about "organizing settings for different cluster types, job configurations, and other environment-specific parameters", so I provided an example showing how to change cluster configurations (I changed the number of worker nodes) and also task parameters based on the environment ("dev" and "prod").

The job or task parameters can be used to specify environment-specific table/column names or source/target file paths if needed. I updated the example project to illustrate how to utilize these env-specific parameters from the notebook executed by the job. Please see the screenshot below as a quick reference.

[Screenshot: notebook reading the env-specific job parameters]
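
On the YAML side, extending the earlier targets/dev.yml sketch, such a per-environment parameter override might look like the fragment below (the parameter names and values are made up). Tasks declared under a target are matched with the base task by task_key, and base_parameters end up as notebook widgets that the notebook can read, e.g. via dbutils.widgets.get("source_path").

targets:
  dev:
    resources:
      jobs:
        dab_targets_sample_job:
          tasks:
            - task_key: main        # merged with the base task by task_key
              notebook_task:
                base_parameters:
                  source_path: /Volumes/dev/raw/orders
                  target_table: dev.silver.orders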

In order to manage data organization structures across multiple environments, we can also (or rather, should) utilize Unity Catalog. Separating catalogs into dev/staging/prod and changing the catalog name based on the execution target environment is a common pattern.
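
As a sketch of that pattern (the variable name and catalog names are illustrative), a bundle variable can carry the catalog name and each target can override it:

variables:
  catalog:
    description: Unity Catalog catalog used by this bundle
    default: dev

targets:
  dev:
    variables:
      catalog: dev
  staging:
    variables:
      catalog: staging
  prod:
    variables:
      catalog: prod

A job task can then pass catalog: ${var.catalog} in its base_parameters so the notebook reads from and writes to the right catalog for each deployment target.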

I hope the example covers @jeremy98's original question, and potentially also covers what @dmytro_starov needs. If not, please elaborate. Thanks!
