Impossibility of having multiple versions of the same Python package installed
03-07-2025 01:21 PM
Hello,
We package our Spark jobs and utilities in a custom package to be used in wheel tasks in Databricks. In my opinion, having several versions of this job (say, "production" and "dev") run on the same cluster against different versions of this custom package is a completely valid requirement that enables a reasonably resource-friendly CI/CD workflow.
Alas, Databricks does not allow this: wheel libraries end up being installed cluster-wide, and only one version of the same library is allowed at a time. To make matters more inconvenient, the cluster needs to be restarted to uninstall a library.
Since we cannot be the only team facing this issue, my question is: how can we work around this shortcoming? Rolling everything into one script is ugly, and notebooks are not an option either.
Thank you,
David
03-10-2025 09:17 AM
If someone comes across this post: as per the documentation, library/package installation can be notebook-scoped. To overcome the limitation described in the initial post, we are therefore experimenting with notebook tasks whose only responsibility is to install the custom library via %pip install, followed by a call to main() of the module that contains the actual processing logic.
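Roughly, such a notebook task looks like the sketch below. The wheel path, package, and module names are placeholders for our actual artifacts, so treat this as an outline rather than working code:

```python
# Cell 1: notebook-scoped install of one specific version of the custom package.
# The wheel path and version are placeholders for the real artifact location.
%pip install /Volumes/shared/wheels/our_spark_utils-1.2.0-py3-none-any.whl

# Cell 2: restart the Python interpreter so the freshly installed version is picked up.
dbutils.library.restartPython()

# Cell 3: import the job's entry point and run it.
# Module and function names are placeholders for our actual package layout.
from our_spark_utils.jobs.daily_aggregation import main

main()
```

Because the install is notebook-scoped, a "production" and a "dev" run can each pin their own wheel version on the same cluster without interfering with each other.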
I am surprised that running PySpark jobs packaged as .whl in isolation is not something Databricks provides out of the box. Ways to do so, for instance via packed virtual environments, are described in PySpark's documentation, and I would have expected Databricks to handle .whl tasks in such a way without the user having to worry about one job interfering with another. The sketch below shows the documented PySpark approach for reference.
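For reference, the packed-virtual-environment approach from PySpark's "Python Package Management" documentation looks roughly like this. It assumes a pyspark_venv.tar.gz archive was created beforehand with venv-pack and contains the desired version of the custom package; I have not validated this on Databricks wheel tasks, it only illustrates the kind of isolation I would expect:

```python
import os
from pyspark.sql import SparkSession

# Point the executors' Python at the interpreter inside the unpacked archive.
os.environ["PYSPARK_PYTHON"] = "./environment/bin/python"

# Ship the packed virtual environment alongside the job; Spark unpacks it
# under the alias "environment" on each node.
spark = (
    SparkSession.builder
    .config("spark.archives", "pyspark_venv.tar.gz#environment")
    .getOrCreate()
)
```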
Regards,
David

