I've run into an issue with no clear path to resolution.
Due to various integrations we have in Unity Catalog, some of our jobs have to run on a shared cluster in order to authenticate properly to the underlying data source.
When setting up a workflow task, we pointed the run configuration at a Python wheel deployed in a Unity Catalog Volume. On a single-user cluster, everything works smoothly and completes as expected.
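For reference, the task is defined roughly as follows. This is just a sketch using the Python databricks-sdk to show the shape of the config; the package name, entry point, Volume path, and cluster ID are all placeholders:

```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.service import compute, jobs

w = WorkspaceClient()

# Placeholder path -- the wheel actually lives in one of our UC Volumes.
wheel_path = "/Volumes/main/default/artifacts/my_package-0.1.0-py3-none-any.whl"

w.jobs.create(
    name="wheel-task-example",
    tasks=[
        jobs.Task(
            task_key="run_wheel",
            # Entry point invoked once the wheel is installed on the cluster.
            python_wheel_task=jobs.PythonWheelTask(
                package_name="my_package",
                entry_point="main",
            ),
            # The wheel is installed from the UC Volume as a task library.
            libraries=[compute.Library(whl=wheel_path)],
            existing_cluster_id="<single-user-cluster-id>",
        )
    ],
)
```

A task shaped like this runs fine on the single-user cluster; the failure below appears when the equivalent task is pointed at a shared cluster.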
When setting up an additional task that requires a shared cluster to access credentials, we get the following error:
Run failed with error message:

```
Library installation failed for library due to user error. Error messages:
The workspace admin setting `Compute` > `Enable libraries and init scripts on shared Unity Catalog clusters` is retired.
Please turn off the setting on Shared Unity Catalog clusters with Databricks Runtime 16.0 and above.
```
Fair enough; settings understandably become deprecated over time.
However, this same workspace has clusters running older (and fully supported) LTS runtimes, and (I assume) disabling this feature at the workspace level would prevent those clusters from operating correctly.

The error message implies that the change can be scoped at the cluster level, yet there is no such option in the cluster configuration, no documentation of this deprecation, and no instructions on how to disable it for just this cluster. To make things more frustrating, I haven't found any documentation explaining what this option does in the first place: there are no links in the UI, and searching for the name of the option turns up nothing.
I've scoured as much documentation as I could find and searched multiple resources, and I haven't found anyone else with a similar problem. Perhaps there is a key/value option that controls this feature at the cluster level (see the sketch below), but there doesn't seem to be a canonical source listing all the available cluster config options beyond the base Spark settings.
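If such a cluster-level switch exists, I'd expect it to be passed through the cluster's `spark_conf` map, something like the sketch below. To be clear: the conf key here is pure guesswork on my part, not a documented setting; only the surrounding fields are real cluster-spec parameters.

```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.service import compute

w = WorkspaceClient()

w.clusters.edit(
    cluster_id="<shared-cluster-id>",
    cluster_name="shared-uc-cluster",
    spark_version="16.0.x-scala2.12",
    node_type_id="<node-type>",
    num_workers=2,
    # Shared access mode, which is what our UC integrations require.
    data_security_mode=compute.DataSecurityMode.USER_ISOLATION,
    spark_conf={
        # HYPOTHETICAL key -- I have not found any documented conf that
        # controls this feature at the cluster level; this only shows where
        # such a key/value override would normally go.
        "spark.databricks.libraries.enableSharedClusterLibraries": "false",
    },
)
```

If anyone knows whether a real key like this exists (or where a full list of supported cluster conf options lives), that's exactly what I'm after.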
Has anyone else run into this issue? Thanks.