Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Transitioning from ADF to Databricks Workflows: Best Practices in a Multi-Workspace (dev/prod) Setup

Darshan137
New Contributor II

Hi Community,

We have a data processing framework running on Azure Databricks with Unity Catalog, and we're evaluating options to consolidate our orchestration entirely within the Databricks ecosystem.

CURRENT ARCHITECTURE:

  • ~20 use cases, each containing 3-6 Python notebooks organized by business domain
  • A shared Python utility package (with __init__.py) used across all use cases
  • Two Databricks workspaces: Development and Production
  • Unity Catalog for data governance and storage
  • Azure Data Factory for orchestrating notebook execution (task ordering, dependencies)
  • Azure DevOps CI/CD pipelines (one per use case) deploying notebooks to workspaces via Terraform templates
  • Environment-specific configs (Key Vault names, service connections, catalog references) managed through ADO variable groups and YAML templates

WHAT WE WANT TO ACHIEVE:

  • Replace ADF orchestration with native Databricks orchestration (Lakeflow Jobs / Pipelines)
  • Manage environment-specific parameters (dev/prod catalog names, Key Vault, etc.) cleanly across workspaces
  • Keep our shared Python utility package working across all use cases without duplication
  • Zero changes to existing notebook code

QUESTIONS:

  1. Orchestration: What is the recommended Databricks-native approach to replace ADF for orchestrating notebook execution with task dependencies? We need both sequential and parallel task support.

  2. Project structure: With ~20 use cases, what is the recommended way to organize job/pipeline definitions? One monolithic config vs. modular per-use-case definitions?

  3. Shared library code: Our notebooks import from a shared Python package. What is the best way to handle this - sync the entire repo, or package it as a wheel?

  4. Cross-workspace promotion: For promoting from dev to prod workspace, what authentication method works best - Service Principal with OAuth (M2M) or PAT tokens? Any Unity Catalog permission considerations?

  5. CI/CD: We currently use Azure DevOps plus Terraform for deploying notebook code and job definitions to both workspaces. For those who have made a similar migration - does it make sense to replace Azure DevOps with a Databricks-native deployment approach, or do most teams keep an external CI/CD tool alongside Databricks orchestration?

  6. Incremental migration: Can we migrate one use case at a time while others still run via the legacy ADF setup, without conflicts?

Any real-world experience, recommended approaches, or reference architectures would be very helpful. If there is a tutorial available, please share the link as well.

Thanks!

2 REPLIES

Lu_Wang_ENB_DBX
Databricks Employee

Answers to your questions.

  1. Orchestration (replace ADF)

    • Use Lakeflow Jobs (Databricks Jobs) as the primary orchestrator: one job per use case with a task graph (notebook / SQL / pipeline tasks) to express both sequential and parallel branches, retries, timeouts, and alerts.
    • For ELT-heavy flows, define Lakeflow Spark Declarative Pipelines for the data pipeline itself, and call them from Lakeflow Jobs for control-flow and scheduling.
  2. Project structure (20 use cases + env-specific params)

    • Recommended: one repo with modular bundle configs per use case, e.g. resources/jobs/usecase_X.yml and (optionally) resources/pipelines/usecase_X.yml, plus shared cluster definitions and variables (see the bundle sketch after this list).
    • Use Declarative Automation Bundles targets + variables for environment-specific values (dev/prod catalogs, Key Vault/secret scopes, workspace URLs) instead of duplicating YAML: override only what changes per target (dev, prod).
  3. Shared library code (Python utils)

    • Best practice is to package the shared utils as a wheel (built in CI), store it in an artifact feed or UC volume, and reference it as a job/pipeline library dependency; notebook imports stay the same, and all use cases share the same versioned package.
    • Repo-syncing the whole codebase and importing via workspace-relative paths works but scales worse; prefer wheels for anything you run in prod or across multiple workspaces.
  4. Cross-workspace promotion (dev → prod)

    • Use an Azure AD Service Principal with OAuth workload identity federation for the Databricks CLI / Bundles; this is the recommended, most secure CI/CD auth pattern and avoids long-lived PATs.
    • Treat the SP as a first-class principal in Unity Catalog: grant it workspace access plus the required USE CATALOG, USE SCHEMA, and table privileges in each environment; many teams use separate targets and (optionally) separate SPs for dev vs prod.
  5. CI/CD (Azure DevOps, Terraform, Databricks-native)

    • Most teams keep Azure DevOps (or GitHub Actions/Jenkins) as their CI/CD engine and introduce Declarative Automation Bundles for Databricks-side IaC (jobs, pipelines, clusters, permissions); ADO just runs databricks bundle validate/deploy --target=dev|prod steps (see the pipeline sketch after this list).
    • Terraform remains useful for workspace-level infra (workspaces, networks, storage, UC metastore), while Bundles manage workloads and configs (jobs/pipelines/dashboards/etc.) in the same repo as your notebooks and Python code.
  6. Incremental migration (ADF → Lakeflow Jobs)

    • Yes: you can migrate one use case at a time by creating the equivalent Lakeflow Job/Pipeline, validating it in dev, then switching only that use case's schedule from ADF to Databricks; the rest continue to run in ADF without interference.
    • Just ensure each pipeline has one active scheduler (disable or pause the corresponding ADF pipeline once the Databricks job is live) and keep all storage/UC references identical so data remains consistent.
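To make points 1-3 concrete, here is a minimal bundle sketch. All names, hosts, and paths are placeholders to adapt to your setup, and compute is omitted for brevity (you would add job_clusters or rely on serverless):

  # databricks.yml (repo root)
  bundle:
    name: data_framework

  variables:
    catalog:
      description: Unity Catalog used by the use case notebooks
      default: dev_catalog
    utils_whl:
      description: Location of the shared utils wheel built by CI
      default: /Volumes/dev_catalog/shared/libs/shared_utils-1.0.0-py3-none-any.whl

  include:
    - resources/jobs/*.yml

  targets:
    dev:
      mode: development
      default: true
      workspace:
        host: https://adb-1111111111111111.11.azuredatabricks.net
    prod:
      mode: production
      workspace:
        host: https://adb-2222222222222222.22.azuredatabricks.net
      variables:
        catalog: prod_catalog
        utils_whl: /Volumes/prod_catalog/shared/libs/shared_utils-1.0.0-py3-none-any.whl

  # resources/jobs/usecase_a.yml (one file per use case)
  resources:
    jobs:
      usecase_a:
        name: usecase_a
        tasks:
          - task_key: ingest
            notebook_task:
              notebook_path: ../../notebooks/usecase_a/01_ingest
              base_parameters:
                catalog: ${var.catalog}
            libraries:
              - whl: ${var.utils_whl}       # shared utils as a versioned wheel
          - task_key: transform_sales        # runs in parallel with transform_stock
            depends_on:
              - task_key: ingest
            notebook_task:
              notebook_path: ../../notebooks/usecase_a/02_transform_sales
          - task_key: transform_stock
            depends_on:
              - task_key: ingest
            notebook_task:
              notebook_path: ../../notebooks/usecase_a/02_transform_stock
          - task_key: publish                # waits for both parallel branches
            depends_on:
              - task_key: transform_sales
              - task_key: transform_stock
            notebook_task:
              notebook_path: ../../notebooks/usecase_a/03_publish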
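And for point 5, the Azure DevOps side usually boils down to a couple of CLI steps. This pipeline sketch assumes a service principal authenticating through the CLI's OAuth M2M environment variables; with workload identity federation you would exchange the ADO token instead of storing a client secret, and the variable names here are just examples from an ADO variable group:

  # azure-pipelines.yml (deployment stage, sketch)
  steps:
    - script: curl -fsSL https://raw.githubusercontent.com/databricks/setup-cli/main/install.sh | sh
      displayName: Install Databricks CLI
    - script: |
        databricks bundle validate --target prod
        databricks bundle deploy --target prod
      displayName: Validate and deploy bundle to prod
      env:
        DATABRICKS_HOST: $(PROD_WORKSPACE_URL)
        DATABRICKS_CLIENT_ID: $(PROD_SP_CLIENT_ID)          # service principal application ID
        DATABRICKS_CLIENT_SECRET: $(PROD_SP_CLIENT_SECRET)  # OAuth secret for M2M auth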

Relevant Azure Databricks docs:

amirabedhiafi
New Contributor II

Hello @Darshan137!

A few things I will add to @Lu_Wang_ENB_DBX's answer, based on a similar project I worked on.
If ADF currently passes values such as environment, run date, catalog, schema, or business domain, define a clear parameter contract in Lakeflow Jobs. Databricks supports job parameters, task parameters, dynamic value references, if/else conditions, and task values, so this can replace many ADF variable or expression patterns.
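A minimal sketch of such a contract (parameter names are just examples): job-level parameters reach every task and can use dynamic value references, so a notebook that already calls dbutils.widgets.get("run_date") keeps working as-is:

  # resources/jobs/usecase_a.yml (fragment, illustrative)
  resources:
    jobs:
      usecase_a:
        name: usecase_a
        parameters:                                  # job parameters, pushed to all tasks
          - name: env
            default: dev
          - name: run_date
            default: "{{job.start_time.iso_date}}"   # dynamic value reference
        tasks:
          - task_key: ingest
            notebook_task:
              notebook_path: ../../notebooks/usecase_a/01_ingest
              # the notebook still reads dbutils.widgets.get("run_date") unchanged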

Do not only think about the SP used by CI/CD; you also need to decide which identity the jobs actually run as. With bundles, run_as can be configured separately from the deployment identity, which is useful for production governance and UC access control.
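For example (the application ID below is a placeholder), you can deploy with the CI/CD identity but have prod jobs run as a dedicated SP:

  # databricks.yml (fragment)
  targets:
    prod:
      mode: production
      run_as:
        service_principal_name: "00000000-0000-0000-0000-000000000000"   # identity prod jobs run as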

One of the things I struggled with was concurrency and scheduler ownership, so during the migration I advise setting the Databricks job concurrency (often max_concurrent_runs: 1) for batch pipelines to avoid overlapping runs, and verifying that each use case has only one active scheduler: either ADF or Databricks, not both.
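Concretely, that is a one-line setting on each job definition:

  # resources/jobs/usecase_a.yml (fragment)
  resources:
    jobs:
      usecase_a:
        name: usecase_a
        max_concurrent_runs: 1   # a second trigger is skipped or queued instead of overlapping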

If you also want to keep the bundles modular, a good structure is one root databricks.yml (shared variables, clusters, permissions) plus separate YAML files per use case under resources/jobs/ or resources/pipelines/. Bundles support resource definitions, variables, substitutions, and reusable config, so this avoids one huge YAML file while still keeping centralized standards.
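The layout I ended up with looks roughly like this (names illustrative):

  repo_root/
    databricks.yml               # bundle name, shared variables, dev/prod targets
    resources/
      jobs/
        usecase_a.yml            # one job definition per use case
        usecase_b.yml
      pipelines/
        usecase_c.yml
    src/
      shared_utils/              # shared package, built into a wheel by CI
    notebooks/
      usecase_a/
      usecase_b/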

Do not assume "zero notebook changes" (speaking from personal experience) until imports and parameters are tested. If notebooks already use dbutils.widgets.get() for ADF parameters, the migration can be close to zero code change. But if notebooks depend on workspace-relative imports or repo paths, moving to wheels may require validating that the package name and import paths stay identical.

If this answer resolves your question, could you please mark it as "Accept as Solution"? It will help other users quickly find the correct fix.

Senior BI/Data Engineer | Microsoft MVP Data Platform | Microsoft MVP Power BI | Power BI Super User | C# Corner MVP