In Databricks deployment, .py files are getting converted to notebooks
04-16-2024 12:19 AM
A critical issue is impacting our deployment planning for a client. We have hit a problem with our Azure CI/CD pipeline integration, specifically when deploying Python files (.py). Despite our best efforts, the pipeline converts these files into regular notebooks on deployment. As a result, we are forced to manually recreate the .py files and copy-paste the code into them.
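One possible cause (an assumption, not confirmed in this thread) is that the pipeline calls the Databricks Workspace Import API (`POST /api/2.0/workspace/import`) with `format=SOURCE`, which always creates a notebook. With `format=AUTO`, the workspace decides based on the file extension and header content, so a .py file without the `# Databricks notebook source` header should be imported as a plain workspace file. A minimal sketch of building such a request body (`build_import_request` is a hypothetical helper, not part of any SDK):

```python
import base64
import json


def build_import_request(path, source, fmt="AUTO", overwrite=True):
    """Build a request body for the Databricks Workspace Import API
    (POST /api/2.0/workspace/import).

    The API expects the file content base64-encoded; format=AUTO lets
    the workspace import the item as a file or a notebook based on its
    extension and header content.
    """
    return {
        "path": path,                       # target workspace path, e.g. /Workspace/Shared/etl/job.py
        "format": fmt,                      # AUTO avoids forcing notebook conversion
        "overwrite": overwrite,
        "content": base64.b64encode(source.encode("utf-8")).decode("ascii"),
    }


body = build_import_request("/Workspace/Shared/etl/job.py", "print('hello')")
print(json.dumps(body, indent=2))
```

The request body would then be POSTed to the workspace with a bearer token; whether this resolves the conversion depends on how your pipeline performs the import, so check which format your deployment step actually sends.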
We have searched extensively for a solution across various resources, but unfortunately we have not found clear, comprehensive documentation with a step-by-step guide for this specific situation.
If someone can provide guidance or a solution it would be greatly appreciated.
04-17-2024 01:30 AM
What does your pipeline look like? We propagate notebooks through Azure DevOps Repos with PRs and merges; that way, files do not get converted.
04-25-2024 08:36 AM
We are experiencing a similar issue, except the files are .sql. We have a process with one orchestration notebook calling multiple .sql files. These .sql files are being converted to regular Databricks notebooks on deployment, so for now we have to promote them to production manually, which is not ideal. Any insight or support on this issue would be appreciated.
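If the pipeline uses the Databricks CLI, the same `format` lever may apply to .sql files. A hedged sketch of a deployment step (the paths are hypothetical, and this assumes the newer Databricks CLI with `DATABRICKS_HOST`/`DATABRICKS_TOKEN` supplied as pipeline secrets); it is a pipeline fragment, not a guaranteed fix:

```shell
# Hypothetical Azure DevOps deployment step.
# --format AUTO lets the workspace decide: a .sql file without a
# Databricks notebook header should land as a plain workspace file
# rather than being converted to a notebook.
databricks workspace import /Workspace/Shared/etl/query.sql \
  --file queries/query.sql --format AUTO --overwrite
```

Verify in your workspace whether the imported item shows up as a file or a notebook; if your pipeline forces a source/notebook format, that is the setting to change.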
3 weeks ago
Did you manage to find a solution? If so, could you share it?
I have the same problem, and if I find a solution, I'll share it here.