Use Unity Catalog access connector for Auto Loader file notification events
12-03-2024 12:26 PM
We have a Databricks access connector, and we have granted it access to file events. But how do we now use that access connector in cloudFiles/Auto Loader with file notifications? If I provide its ID in the "cloudFiles.clientId" option, I am asked to also provide a secret or a certificate, which we naturally don't have.
The only alternative I have found is to keep a separate clientId that I grant access, and use that for the file notification setup. Is that the way we are supposed to do it? Does this other clientId only need the Storage Queue Data Contributor permission, or does it also need read access to the actual files?
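For reference, here is roughly what my current workaround looks like, with the separate service principal's credentials passed in the notification options. All IDs, paths, and the secret scope below are placeholders:

```python
# Workaround sketch: Auto Loader file notification mode authenticated
# with a separate service principal (run inside a Databricks notebook,
# where `spark` and `dbutils` are provided by the runtime).
# All IDs, paths, and the secret scope are placeholders.
df = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.useNotifications", "true")
    # Separate service principal used for the notification setup:
    .option("cloudFiles.clientId", "<sp-application-id>")
    .option("cloudFiles.clientSecret",
            dbutils.secrets.get(scope="my-scope", key="sp-secret"))
    .option("cloudFiles.tenantId", "<tenant-id>")
    .option("cloudFiles.subscriptionId", "<subscription-id>")
    .option("cloudFiles.resourceGroup", "<storage-resource-group>")
    .option("cloudFiles.schemaLocation",
            "abfss://checkpoints@<account>.dfs.core.windows.net/autoloader/schema")
    .load("abfss://landing@<account>.dfs.core.windows.net/events/")
)
```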
12-06-2024 08:33 AM
Thanks for your question!
If Auto Loader still prompts for a secret or certificate when you pass the access connector's client ID in cloudFiles.clientId, this typically indicates that the authentication method is not being recognized. Here's what to check:
- Access Connector Role: Ensure the access connector has the correct role assigned in Azure, such as "Storage Queue Data Contributor" for file notifications. A missing role can cause authentication to fall back to expecting a client secret.
- Databricks Runtime Compatibility: Verify that your Databricks Runtime version supports using the access connector for file notifications in Auto Loader. Older runtime versions may not fully support this feature.
- Auto Loader Options: Double-check that no conflicting options, such as cloudFiles.clientSecret or cloudFiles.clientCertificate, are inadvertently set in the configuration, as these can override the access connector's authentication.
- Connector Permissions: Confirm that the access connector has explicit permissions on the storage queue and on the container where file events are stored (see the sketch after this list for one way to audit this).
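One way to check the role and permission points is to list the role assignments on the storage account scope with the Azure SDK. This is a minimal sketch, assuming the azure-identity and azure-mgmt-authorization packages are installed; the subscription, resource group, and account names are placeholders:

```python
# Sketch: list role assignments on the storage account to confirm the
# access connector's managed identity holds the expected roles.
# Requires: pip install azure-identity azure-mgmt-authorization
from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient

subscription_id = "<subscription-id>"
scope = (
    f"/subscriptions/{subscription_id}"
    "/resourceGroups/<storage-resource-group>"
    "/providers/Microsoft.Storage/storageAccounts/<storage-account>"
)

client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)
for assignment in client.role_assignments.list_for_scope(scope):
    # principal_id should include the connector's managed identity, and
    # role_definition_id should map to roles such as
    # "Storage Queue Data Contributor".
    print(assignment.principal_id, assignment.role_definition_id)
```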
If these checks are all in place and it still fails, it may be worth additional debugging with Databricks support to confirm compatibility with your current setup.

