Hello @Darshan137 !
A few things I'll add to @Lu_Wang_ENB_DBX's answer, based on a similar project I worked on.
If ADF currently passes values such as environment, run date, catalog, schema, or business domain, define a clear parameter contract in Lakeflow Jobs. Databricks supports job parameters, task parameters, dynamic value references, If/else conditions, and task values, so this can replace many ADF variable and expression patterns.
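A minimal sketch of such a parameter contract in bundle YAML; the job name, parameter names, and defaults below are illustrative, not from your project:

```yaml
# Sketch: job-level parameters replacing ADF pipeline parameters (names are examples)
resources:
  jobs:
    ingest_sales:
      name: ingest_sales
      parameters:
        - name: environment
          default: dev
        - name: run_date
          default: "{{job.start_time.iso_date}}"   # dynamic value reference
        - name: catalog
          default: dev_catalog
      tasks:
        - task_key: ingest
          notebook_task:
            notebook_path: ../src/ingest_notebook
            # job parameters are exposed to the notebook as widgets with the same names
```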
Also, don't only think about the service principal used by CI/CD; decide which identity the jobs actually run as. With bundles, run_as can be configured separately from the deployment identity, which is useful for production governance and Unity Catalog access control.
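A sketch of what that separation can look like per target; the workspace host and the service principal application ID are placeholders:

```yaml
# Sketch: CI/CD deploys the bundle, but prod jobs run as a dedicated service principal
targets:
  prod:
    mode: production
    workspace:
      host: https://adb-1234567890123456.7.azuredatabricks.net   # placeholder
    run_as:
      service_principal_name: "00000000-0000-0000-0000-000000000000"   # placeholder application ID
```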
One of the things I struggled with is concurrency and scheduler ownership. During migration, set the Databricks job concurrency (often max_concurrent_runs: 1 for batch pipelines) to avoid overlapping runs, and verify that each use case has only one active scheduler: either ADF or Databricks, not both.
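For example (job name and cron expression are illustrative):

```yaml
# Sketch: cap a batch job at one active run so a delayed run can't overlap the next one
resources:
  jobs:
    nightly_batch:
      name: nightly_batch
      max_concurrent_runs: 1
      schedule:
        quartz_cron_expression: "0 0 2 * * ?"   # 02:00 daily; pause/delete the ADF trigger once this is live
        timezone_id: UTC
```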
If you also want to keep your bundles modular, a good structure is one root databricks.yml (shared variables, clusters, permissions) plus separate YAML files per use case under resources/jobs/ or resources/pipelines/. Bundles support resource definitions, variables, substitutions, and reusable config, so this avoids one huge YAML file while still keeping centralized standards.
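A sketch of that layout in the root databricks.yml; the bundle name, variable, and target values are just examples:

```yaml
# Sketch: root file holds shared config, per-use-case jobs live in their own files
bundle:
  name: adf_migration

include:
  - resources/jobs/*.yml        # one file per use case
  - resources/pipelines/*.yml

variables:
  catalog:
    description: Target Unity Catalog name
    default: dev_catalog

targets:
  dev:
    default: true
  prod:
    mode: production
    variables:
      catalog: prod_catalog     # override shared variables per environment
```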
From personal experience, don't assume "zero notebook changes" until imports and parameters are tested. If notebooks already use dbutils.widgets.get() for ADF parameters, the migration can be close to zero code change. But if notebooks depend on workspace-relative imports or repo paths, moving to wheels may require validating that the package name and import paths stay identical.
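One way to keep the widget-based notebooks untouched is to pass the same parameter names ADF used via base_parameters; this sketch assumes a notebook that already calls dbutils.widgets.get("business_domain") and dbutils.widgets.get("run_date"), and the job/path names are illustrative:

```yaml
# Sketch: reuse the ADF parameter names so existing dbutils.widgets.get() calls keep working
resources:
  jobs:
    load_domain:
      name: load_domain
      tasks:
        - task_key: load
          notebook_task:
            notebook_path: ../notebooks/load_domain
            base_parameters:
              business_domain: "finance"
              run_date: "{{job.start_time.iso_date}}"
```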
If this answer resolves your question, could you please mark it as “Accept as Solution”? It will help other users quickly find the correct fix.
Senior BI/Data Engineer | Microsoft MVP Data Platform | Microsoft MVP Power BI | Power BI Super User | C# Corner MVP