04-16-2024 12:19 AM
A critical issue has arisen that is impacting our deployment planning for our client. We have hit a problem with our Azure CI/CD pipeline integration, specifically the deployment of Python (.py) files. When we deploy these files through the pipeline, they are converted into regular Databricks notebooks. As a result, we have to manually recreate the .py files and copy-paste the code into them to run the notebooks.
We have searched extensively for a solution across various resources, but unfortunately we have not found clear, comprehensive documentation with a step-by-step guide for this specific situation.
If someone could provide guidance or a solution, it would be greatly appreciated.
04-17-2024 01:30 AM
What does your pipeline look like? We propagate notebooks using Azure DevOps Repos with PRs and merges; that way the files do not get converted.
04-25-2024 08:36 AM
Experiencing a similar issue that we are looking to resolve, except our files are .sql. We have a process with one orchestration notebook calling multiple .sql files. These .sql files are being converted to regular Databricks notebooks on deployment, so right now we are promoting them to production manually, which is not ideal. Any insight/support on this same issue would be appreciated.
12-05-2024 09:13 AM
Did you manage to find the solution? If so, could you share it?
I have the same problem, and if I find the solution, I'll share it here.
03-12-2025 03:23 PM
We had the same problem, but you can deploy the Python files as 'RAW' file types with the Databricks CLI.
Ref on usage from ADO Pipelines: https://learn.microsoft.com/en-us/azure/databricks/dev-tools/ci-cd/auth-with-azure-devops
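In case it helps anyone, here is a rough sketch of what that can look like as an Azure DevOps pipeline step. All the paths, the display name, and the pipeline variable names are placeholders I made up for illustration; only the `databricks workspace import` command and its `--format RAW` flag come from the CLI itself:

```yaml
# Hypothetical ADO pipeline step; adjust paths and variables to your setup.
- script: |
    # --format RAW uploads the file verbatim. The default AUTO format is
    # what detects .py/.sql source and converts it into a notebook.
    databricks workspace import /Workspace/Shared/etl/my_job.py \
      --file src/my_job.py \
      --format RAW \
      --overwrite
  displayName: 'Deploy .py as raw workspace file'
  env:
    DATABRICKS_HOST: $(databricksHost)
    DATABRICKS_TOKEN: $(databricksToken)
```

The same step should work for .sql files, since RAW skips the notebook conversion entirely regardless of extension.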
03-12-2025 03:25 PM
Another option is Databricks Asset Bundles.
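For anyone trying bundles, a minimal databricks.yml sketch along these lines should get plain source files synced as workspace files rather than notebooks. The bundle name, paths, and host below are placeholders, not from any official sample:

```yaml
# Hypothetical minimal bundle config; names and host are placeholders.
bundle:
  name: etl_pipeline

sync:
  include:
    # Files synced by a bundle are uploaded as workspace files, not
    # notebooks. Note: a .py file that begins with the
    # "# Databricks notebook source" header is still treated as a
    # notebook, so plain source files should not carry that header.
    - src/**

targets:
  prod:
    workspace:
      host: https://adb-0000000000000000.0.azuredatabricks.net
```

Then `databricks bundle deploy -t prod` pushes the files to the target workspace.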
06-07-2025 02:56 PM
I'm using Databricks Asset Bundles and am still experiencing the same problem as above. Does anyone know how to work around this using VS Code and Asset Bundles?