<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Transitioning from ADF to Databricks Workflows: Best Practices in a Multi-Workspace (dev-prod) in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/transitioning-from-adf-to-databricks-workflows-best-practices-in/m-p/155730#M54298</link>
    <description>&lt;P&gt;Hello&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/219756"&gt;@Darshan137&lt;/a&gt;!&lt;/P&gt;&lt;P&gt;A few things I would add to&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/2230"&gt;@Lu_Wang_ENB_DBX&lt;/a&gt;'s answer, based on a similar project I worked on.&lt;BR /&gt;If ADF currently passes values such as environment, run date, catalog, schema, or business domain, define a clear parameter contract in Lakeflow Jobs. Databricks supports job parameters, task parameters, dynamic value references, If/else conditions, and task values, which can replace many ADF variable and expression patterns.&lt;/P&gt;&lt;P&gt;Do not think only about the service principal used by CI/CD; also decide which identity the jobs actually run as. With bundles, &lt;CODE&gt;run_as&lt;/CODE&gt; can be configured separately from the deployment identity, which is useful for production governance and UC access control.&lt;/P&gt;&lt;P&gt;One thing I struggled with is concurrency and scheduler ownership. During migration, set the job concurrency (often &lt;CODE&gt;max_concurrent_runs: 1&lt;/CODE&gt; for batch pipelines) to avoid overlapping runs, and verify that each use case has only one active scheduler: either ADF or Databricks, not both.&lt;/P&gt;&lt;P&gt;If you also want to keep bundles modular, a good structure is one root &lt;CODE&gt;databricks.yml&lt;/CODE&gt; holding shared variables, clusters, and permissions, plus separate YAML files per use case under &lt;CODE&gt;resources/jobs/&lt;/CODE&gt; or &lt;CODE&gt;resources/pipelines/&lt;/CODE&gt;. Bundles support resource definitions, variables, substitutions, and reusable config, so this avoids one huge YAML file while still keeping centralized standards.&lt;/P&gt;&lt;P&gt;Finally, do not assume "zero notebook changes" (speaking from experience) until imports and parameters are tested. If notebooks already use &lt;CODE&gt;dbutils.widgets.get()&lt;/CODE&gt; for ADF parameters, the migration can be close to zero code change. But if notebooks depend on workspace-relative imports or repo paths, moving to wheels may require validating that the package name and import paths stay identical.&lt;/P&gt;</description>
    <pubDate>Tue, 28 Apr 2026 17:58:18 GMT</pubDate>
    <dc:creator>amirabedhiafi</dc:creator>
    <dc:date>2026-04-28T17:58:18Z</dc:date>
    <item>
      <title>Transitioning from ADF to Databricks Workflows: Best Practices in a Multi-Workspace (dev-prod)</title>
      <link>https://community.databricks.com/t5/data-engineering/transitioning-from-adf-to-databricks-workflows-best-practices-in/m-p/155571#M54276</link>
      <description>&lt;P&gt;Hi Community,&lt;/P&gt;&lt;P&gt;We have a data processing framework running on Azure Databricks with Unity Catalog, and we're evaluating options to consolidate our orchestration entirely within the Databricks ecosystem.&lt;/P&gt;&lt;P&gt;CURRENT ARCHITECTURE:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;~20 use cases, each containing 3-6 Python notebooks organized by business domain&lt;/LI&gt;&lt;LI&gt;A shared Python utility package (with &lt;CODE&gt;__init__.py&lt;/CODE&gt;) used across all use cases&lt;/LI&gt;&lt;LI&gt;Two Databricks workspaces: Development and Production&lt;/LI&gt;&lt;LI&gt;Unity Catalog for data governance and storage&lt;/LI&gt;&lt;LI&gt;Azure Data Factory for orchestrating notebook execution (task ordering, dependencies)&lt;/LI&gt;&lt;LI&gt;Azure DevOps CI/CD pipelines (one per use case) deploying notebooks to workspaces via Terraform templates&lt;/LI&gt;&lt;LI&gt;Environment-specific configs (Key Vault names, service connections, catalog references) managed through ADO variable groups and YAML templates&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;WHAT WE WANT TO ACHIEVE:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;Replace ADF orchestration with native Databricks orchestration (Lakeflow Jobs / Pipelines)&lt;/LI&gt;&lt;LI&gt;Manage environment-specific parameters (dev/prod catalog names, Key Vault, etc.) cleanly across workspaces&lt;/LI&gt;&lt;LI&gt;Keep our shared Python utility package working across all use cases without duplication&lt;/LI&gt;&lt;LI&gt;Zero changes to existing notebook code&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;QUESTIONS:&lt;/P&gt;&lt;OL&gt;&lt;LI&gt;&lt;P&gt;Orchestration: What is the recommended Databricks-native approach to replace ADF for orchestrating notebook execution with task dependencies? We need both sequential and parallel task support.&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;Project structure: With ~20 use cases, what is the recommended way to organize job/pipeline definitions? One monolithic config vs. modular per-use-case definitions?&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;Shared library code: Our notebooks import from a shared Python package. What is the best way to handle this - sync the entire repo, or package it as a wheel?&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;Cross-workspace promotion: For promoting from dev to prod workspace, what authentication method works best - Service Principal with OAuth (M2M) or PAT tokens? Any Unity Catalog permission considerations?&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;CI/CD: We currently use Azure DevOps plus Terraform for deploying notebook code and job definitions to both workspaces. For those who have made a similar migration - does it make sense to replace Azure DevOps with a Databricks-native deployment approach, or do most teams keep an external CI/CD tool alongside Databricks orchestration?&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;Incremental migration: Can we migrate one use case at a time while others still run via the legacy ADF setup, without conflicts?&lt;/P&gt;&lt;/LI&gt;&lt;/OL&gt;&lt;P&gt;Any real-world experience, recommended approaches, or reference architectures would be very helpful. If a tutorial is available, please share the link as well.&lt;/P&gt;&lt;P&gt;Thanks!&lt;/P&gt;</description>
      <pubDate>Mon, 27 Apr 2026 13:38:53 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/transitioning-from-adf-to-databricks-workflows-best-practices-in/m-p/155571#M54276</guid>
      <dc:creator>Darshan137</dc:creator>
      <dc:date>2026-04-27T13:38:53Z</dc:date>
    </item>
    <item>
      <title>Re: Transitioning from ADF to Databricks Workflows: Best Practices in a Multi-Workspace (dev-prod)</title>
      <link>https://community.databricks.com/t5/data-engineering/transitioning-from-adf-to-databricks-workflows-best-practices-in/m-p/155697#M54296</link>
      <description>&lt;P&gt;Answers to your questions.&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;
&lt;P&gt;&lt;STRONG&gt;Orchestration (replace ADF)&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Use &lt;STRONG&gt;Lakeflow Jobs (Databricks Jobs)&lt;/STRONG&gt; as the primary orchestrator: one job per use case with a task graph (notebook / SQL / pipeline tasks) to express both &lt;STRONG&gt;sequential and parallel&lt;/STRONG&gt; branches, retries, timeouts, and alerts.&lt;/LI&gt;
&lt;LI&gt;For ELT-heavy flows, define &lt;STRONG&gt;Lakeflow Spark Declarative Pipelines&lt;/STRONG&gt; for the data pipeline itself, and call them from Lakeflow Jobs for control-flow and scheduling.&lt;/LI&gt;
&lt;/UL&gt;
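&lt;P&gt;For illustration, a minimal bundle job sketch (job, task, and notebook names are placeholders) showing sequential and parallel branches via &lt;CODE&gt;depends_on&lt;/CODE&gt;:&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;# resources/jobs/usecase_example.yml -- illustrative sketch, adjust names/paths
resources:
  jobs:
    usecase_example:
      name: usecase_example
      tasks:
        - task_key: ingest
          notebook_task:
            notebook_path: ../src/ingest.py
        # these two run in parallel once ingest succeeds
        - task_key: transform_a
          depends_on:
            - task_key: ingest
          notebook_task:
            notebook_path: ../src/transform_a.py
        - task_key: transform_b
          depends_on:
            - task_key: ingest
          notebook_task:
            notebook_path: ../src/transform_b.py
&lt;/CODE&gt;&lt;/PRE&gt;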
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&lt;STRONG&gt;Project structure (20 use cases + env-specific params)&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Recommended: &lt;STRONG&gt;one repo&lt;/STRONG&gt; with &lt;STRONG&gt;modular bundle configs per use case&lt;/STRONG&gt;, e.g. &lt;CODE&gt;resources/jobs/usecase_X.yml&lt;/CODE&gt; and (optionally) &lt;CODE&gt;resources/pipelines/usecase_X.yml&lt;/CODE&gt;, plus shared cluster definitions and variables.&lt;/LI&gt;
&lt;LI&gt;Use &lt;STRONG&gt;Declarative Automation Bundles targets + variables&lt;/STRONG&gt; for environment-specific values (dev/prod catalogs, Key Vault/secret scopes, workspace URLs) instead of duplicating YAML: override only what changes per target (&lt;CODE&gt;dev&lt;/CODE&gt;, &lt;CODE&gt;prod&lt;/CODE&gt;).&lt;/LI&gt;
&lt;/UL&gt;
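&lt;P&gt;As a rough sketch of the targets-plus-variables pattern (catalog names are placeholders, not a complete config):&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;# databricks.yml -- illustrative
bundle:
  name: my_project

variables:
  catalog:
    description: Target Unity Catalog name
    default: dev_catalog

targets:
  dev:
    default: true
  prod:
    mode: production
    variables:
      catalog: prod_catalog

# job/pipeline YAML then references ${var.catalog} instead of hard-coding it
&lt;/CODE&gt;&lt;/PRE&gt;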
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&lt;STRONG&gt;Shared library code (Python utils)&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Best practice is to &lt;STRONG&gt;package the shared utils as a wheel&lt;/STRONG&gt; (built in CI), store it in an artifact feed or UC volume, and reference it as a job/pipeline &lt;STRONG&gt;library dependency&lt;/STRONG&gt;; notebook imports stay the same, and all use cases share the same versioned package.&lt;/LI&gt;
&lt;LI&gt;Repo-syncing the whole codebase and importing via workspace-relative paths works but scales worse; prefer wheels for anything you run in prod or across multiple workspaces.&lt;/LI&gt;
&lt;/UL&gt;
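&lt;P&gt;A sketch of referencing a shared wheel from a job task (the volume path and version are placeholders; bundles can also build the wheel themselves via an &lt;CODE&gt;artifacts&lt;/CODE&gt; section):&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;# inside a job's task list -- illustrative
- task_key: run_usecase
  notebook_task:
    notebook_path: ../src/usecase.py
  libraries:
    # shared utils wheel, versioned and stored in a UC volume
    - whl: /Volumes/main/shared/libs/my_utils-1.2.0-py3-none-any.whl
&lt;/CODE&gt;&lt;/PRE&gt;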
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&lt;STRONG&gt;Cross-workspace promotion (dev → prod)&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Use an &lt;STRONG&gt;Azure AD Service Principal with OAuth workload identity federation&lt;/STRONG&gt; for the Databricks CLI / Bundles; this is the recommended, most secure CI/CD auth pattern and avoids long-lived PATs.&lt;/LI&gt;
&lt;LI&gt;Treat the SP as a first-class principal in &lt;STRONG&gt;Unity Catalog&lt;/STRONG&gt;: grant it workspace access plus the required &lt;CODE&gt;USE CATALOG&lt;/CODE&gt;, &lt;CODE&gt;USE SCHEMA&lt;/CODE&gt;, and table privileges in each environment; many teams use &lt;STRONG&gt;separate targets and (optionally) separate SPs&lt;/STRONG&gt; for dev vs prod.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&lt;STRONG&gt;CI/CD (Azure DevOps, Terraform, Databricks-native)&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Most teams &lt;STRONG&gt;keep Azure DevOps (or GitHub Actions/Jenkins) as their CI/CD engine&lt;/STRONG&gt; and introduce &lt;STRONG&gt;Declarative Automation Bundles&lt;/STRONG&gt; for Databricks-side IaC (jobs, pipelines, clusters, permissions); ADO just runs &lt;CODE&gt;databricks bundle validate/deploy --target=dev|prod&lt;/CODE&gt; steps.&lt;/LI&gt;
&lt;LI&gt;Terraform remains useful for &lt;STRONG&gt;workspace-level infra&lt;/STRONG&gt; (workspaces, networks, storage, UC metastore), while Bundles manage &lt;STRONG&gt;workloads and configs&lt;/STRONG&gt; (jobs/pipelines/dashboards/etc.) in the same repo as your notebooks and Python code.&lt;/LI&gt;
&lt;/UL&gt;
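&lt;P&gt;In Azure DevOps the Databricks-side deploy step can stay small; a sketch (assumes the Databricks CLI is installed on the agent and service-principal auth variables are configured in the pipeline):&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;# azure-pipelines.yml snippet -- illustrative
- script: |
    databricks bundle validate --target prod
    databricks bundle deploy --target prod
  displayName: Deploy bundle to prod
  env:
    DATABRICKS_HOST: $(PROD_WORKSPACE_URL)
&lt;/CODE&gt;&lt;/PRE&gt;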
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&lt;STRONG&gt;Incremental migration (ADF → Lakeflow Jobs)&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Yes: you can &lt;STRONG&gt;migrate one use case at a time&lt;/STRONG&gt; by creating the equivalent Lakeflow Job/Pipeline, validating it in dev, then switching only that use case’s schedule from ADF to Databricks; the rest continue to run in ADF without interference.&lt;/LI&gt;
&lt;LI&gt;Just ensure each pipeline has &lt;STRONG&gt;one active scheduler&lt;/STRONG&gt; (disable or pause the corresponding ADF pipeline once the Databricks job is live) and keep all storage/UC references identical so data remains consistent.&lt;/LI&gt;
&lt;/UL&gt;
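&lt;P&gt;To make the single-scheduler rule concrete on the job side, a sketch of the relevant settings (the cron expression is a placeholder):&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;# job settings -- illustrative; keep the schedule paused until ADF is switched off
max_concurrent_runs: 1   # avoid overlapping batch runs
schedule:
  quartz_cron_expression: "0 0 2 * * ?"
  timezone_id: UTC
  pause_status: PAUSED   # flip to UNPAUSED once this use case leaves ADF
&lt;/CODE&gt;&lt;/PRE&gt;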
&lt;/LI&gt;
&lt;/OL&gt;
&lt;HR /&gt;
&lt;P&gt;&lt;STRONG&gt;Relevant Azure Databricks docs:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;CI/CD on Azure Databricks:&lt;/STRONG&gt; high-level patterns + tool choices&lt;BR /&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/databricks/dev-tools/ci-cd/" target="_blank"&gt;https://learn.microsoft.com/en-us/azure/databricks/dev-tools/ci-cd/&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;What are Declarative Automation Bundles?&lt;/STRONG&gt; (core to organizing jobs/pipelines + envs)&lt;BR /&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/databricks/dev-tools/bundles/" target="_blank"&gt;https://learn.microsoft.com/en-us/azure/databricks/dev-tools/bundles/&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Tutorial – Develop a job with Declarative Automation Bundles:&lt;/STRONG&gt;&lt;BR /&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/databricks/dev-tools/bundles/jobs-tutorial" target="_blank"&gt;https://learn.microsoft.com/en-us/azure/databricks/dev-tools/bundles/jobs-tutorial&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Tutorial – Develop pipelines with Declarative Automation Bundles:&lt;/STRONG&gt;&lt;BR /&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/databricks/dev-tools/bundles/pipelines-tutorial" target="_blank"&gt;https://learn.microsoft.com/en-us/azure/databricks/dev-tools/bundles/pipelines-tutorial&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;CI/CD with Azure DevOps on Azure Databricks:&lt;/STRONG&gt;&lt;BR /&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/databricks/dev-tools/ci-cd/azure-devops" target="_blank"&gt;https://learn.microsoft.com/en-us/azure/databricks/dev-tools/ci-cd/azure-devops&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Tue, 28 Apr 2026 15:48:57 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/transitioning-from-adf-to-databricks-workflows-best-practices-in/m-p/155697#M54296</guid>
      <dc:creator>Lu_Wang_ENB_DBX</dc:creator>
      <dc:date>2026-04-28T15:48:57Z</dc:date>
    </item>
    <item>
      <title>Re: Transitioning from ADF to Databricks Workflows: Best Practices in a Multi-Workspace (dev-prod)</title>
      <link>https://community.databricks.com/t5/data-engineering/transitioning-from-adf-to-databricks-workflows-best-practices-in/m-p/155730#M54298</link>
      <description>&lt;P&gt;Hello&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/219756"&gt;@Darshan137&lt;/a&gt;!&lt;/P&gt;&lt;P&gt;A few things I would add to&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/2230"&gt;@Lu_Wang_ENB_DBX&lt;/a&gt;'s answer, based on a similar project I worked on.&lt;BR /&gt;If ADF currently passes values such as environment, run date, catalog, schema, or business domain, define a clear parameter contract in Lakeflow Jobs. Databricks supports job parameters, task parameters, dynamic value references, If/else conditions, and task values, which can replace many ADF variable and expression patterns.&lt;/P&gt;&lt;P&gt;Do not think only about the service principal used by CI/CD; also decide which identity the jobs actually run as. With bundles, &lt;CODE&gt;run_as&lt;/CODE&gt; can be configured separately from the deployment identity, which is useful for production governance and UC access control.&lt;/P&gt;&lt;P&gt;One thing I struggled with is concurrency and scheduler ownership. During migration, set the job concurrency (often &lt;CODE&gt;max_concurrent_runs: 1&lt;/CODE&gt; for batch pipelines) to avoid overlapping runs, and verify that each use case has only one active scheduler: either ADF or Databricks, not both.&lt;/P&gt;&lt;P&gt;If you also want to keep bundles modular, a good structure is one root &lt;CODE&gt;databricks.yml&lt;/CODE&gt; holding shared variables, clusters, and permissions, plus separate YAML files per use case under &lt;CODE&gt;resources/jobs/&lt;/CODE&gt; or &lt;CODE&gt;resources/pipelines/&lt;/CODE&gt;. Bundles support resource definitions, variables, substitutions, and reusable config, so this avoids one huge YAML file while still keeping centralized standards.&lt;/P&gt;&lt;P&gt;Finally, do not assume "zero notebook changes" (speaking from experience) until imports and parameters are tested. If notebooks already use &lt;CODE&gt;dbutils.widgets.get()&lt;/CODE&gt; for ADF parameters, the migration can be close to zero code change. But if notebooks depend on workspace-relative imports or repo paths, moving to wheels may require validating that the package name and import paths stay identical.&lt;/P&gt;</description>
      <pubDate>Tue, 28 Apr 2026 17:58:18 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/transitioning-from-adf-to-databricks-workflows-best-practices-in/m-p/155730#M54298</guid>
      <dc:creator>amirabedhiafi</dc:creator>
      <dc:date>2026-04-28T17:58:18Z</dc:date>
    </item>
  </channel>
</rss>

