I've read the documentation on setting up Git credentials for service principals: you have to use a PAT from your Git provider, which is inevitably tied to a user account and has a lifecycle of its own.
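For context, the only way I've found to do this is to call the Git Credentials API (`POST /api/2.0/git-credentials`) while authenticated as the service principal, passing in a *user's* PAT. A minimal sketch of the request body (the endpoint and field names are from the REST API reference; the host, username, and token values are placeholders):

```python
import json

# Build the request body for Databricks' Git Credentials API
# (POST /api/2.0/git-credentials). The PAT must come from a user
# account at the Git provider -- there is no service-principal
# equivalent, which is exactly the coupling I'm describing.
def git_credential_payload(git_provider: str, git_username: str, pat: str) -> str:
    return json.dumps({
        "git_provider": git_provider,
        "git_username": git_username,
        "personal_access_token": pat,
    })

# Placeholder values for illustration only.
payload = git_credential_payload("gitHub", "some-user", "ghp_placeholder")

# The actual call would then be something like (not executed here):
# requests.post(f"{host}/api/2.0/git-credentials",
#               headers={"Authorization": f"Bearer {sp_oauth_token}"},
#               data=payload)
```

Note that even though the bearer token belongs to the service principal, the PAT inside the body still belongs to a human user.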
Doesn't this kind of defeat the purpose of running a job as a service principal, or is it just me?
In principle, running a job as a service principal decouples it from any specific user and centralizes the permissions the service principal needs to run its jobs.
But when a job fetches code from Git (which is the best practice), there's inevitably still a coupling to a user at the Git provider level. Not only do you face the risk of that user leaving the org and all workflows failing, you also have to manage the PAT's lifecycle and rotation yourself.
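To illustrate the operational burden: every time the underlying user's PAT is rotated, something has to update the stored credential for every service principal that uses it. A hedged sketch, assuming the update endpoint `PATCH /api/2.0/git-credentials/{credential_id}` from the REST API reference (the credential IDs and token values are hypothetical):

```python
import json

# Sketch of one rotation step: given a freshly issued PAT, build the
# PATCH /api/2.0/git-credentials/{credential_id} body for each service
# principal whose stored credential wraps the old PAT.
def rotation_requests(credential_ids, git_username, new_pat):
    return {
        cid: json.dumps({
            "git_provider": "gitHub",
            "git_username": git_username,
            "personal_access_token": new_pat,
        })
        for cid in credential_ids
    }

# e.g. three service principals all pinned to the same user's PAT
reqs = rotation_requests([101, 102, 103], "some-user", "ghp_rotated")
```

With OIDC/workload identity federation, this whole bookkeeping step would simply not exist.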
This becomes a nightmare if you want even moderately granular assignment of service principals to workflows (e.g. a data mesh where each data product has its own service principal to run that product's workflows).
Is there anything on Databricks' roadmap about adding support for OIDC authentication to Git providers? Several of them (e.g. GitHub, Azure DevOps) already support workload identity federation, so I wonder why Databricks hasn't caught up and implemented OIDC to leverage this.
It would greatly improve the platform experience of Databricks.