Hi @satycse06,
Have you considered Databricks Asset Bundles (DABs) for this? This is exactly the type of problem they solve.
You can still keep your DLT code and a databricks.yml bundle file in Azure DevOps Git. Use the bundle to declare your DLT pipeline as a resource with separate dev/prod targets and per-environment overrides. Then have an Azure DevOps pipeline check out that repository and run a bundle deployment against your Databricks on AWS workspace.
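As a rough illustration, a minimal databricks.yml could look like the sketch below. All names, hosts, and paths here are placeholders you would replace with your own:

```yaml
# databricks.yml -- minimal sketch; bundle name, hosts, and paths are illustrative
bundle:
  name: my_dlt_project

resources:
  pipelines:
    my_dlt_pipeline:
      name: my_dlt_pipeline
      libraries:
        - notebook:
            path: ./src/dlt_pipeline.py   # your existing DLT source

targets:
  dev:
    mode: development
    default: true
    workspace:
      host: https://dev-workspace.cloud.databricks.com   # placeholder
  prod:
    mode: production
    workspace:
      host: https://prod-workspace.cloud.databricks.com  # placeholder
```

The per-target blocks are where you put environment overrides (catalogs, cluster sizes, notifications, etc.), so the same pipeline definition promotes cleanly from dev to prod.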
That way, your PySpark/DQ logic can still be packaged as a wheel and stored in a volume or S3, and the DLT pipeline just points at that wheel path. This removes the manual deployment steps and lets you promote from lower to higher environments through Git and CI/CD.
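On the Azure DevOps side, the CI/CD step is basically "install the Databricks CLI, then validate and deploy the bundle against the chosen target." A sketch of an azure-pipelines.yml is below; the variable names (DATABRICKS_HOST, DATABRICKS_TOKEN) are assumptions you would map to your own pipeline secrets:

```yaml
# azure-pipelines.yml -- illustrative sketch; secret/variable names are assumptions
trigger:
  branches:
    include:
      - main

pool:
  vmImage: ubuntu-latest

steps:
  - script: curl -fsSL https://raw.githubusercontent.com/databricks/setup-cli/main/install.sh | sh
    displayName: Install Databricks CLI

  - script: |
      databricks bundle validate -t prod
      databricks bundle deploy -t prod
    displayName: Validate and deploy bundle to prod
    env:
      DATABRICKS_HOST: $(DATABRICKS_HOST)    # AWS workspace URL, from pipeline variables
      DATABRICKS_TOKEN: $(DATABRICKS_TOKEN)  # PAT or service-principal token, stored as a secret
```

In practice you would typically run `deploy -t dev` on feature branches and `-t prod` only from main, which gives you the lower-to-higher environment promotion flow.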
This is the recommended CI/CD pattern for jobs and DLT pipelines on Databricks today, and it works fine with Azure DevOps even when your Databricks workspace runs on AWS.
Check the official documentation too; it is very relevant to your requirement.
You can also take a look at a relevant community post explaining this in even more detail.
Hope this helps. Let me know if you have any specific questions.
If this answer resolves your question, could you mark it as "Accept as Solution"? That helps other users quickly find the correct fix.
Regards,
Ashwin | Delivery Solution Architect @ Databricks
Helping you build and scale the Data Intelligence Platform.
***Opinions are my own***