Monday
Hi there,
I am trying to migrate my auto loader job to use file events, but it's failing with this error:
com.databricks.sql.util.UnexpectedHttpStatus: Failed to list objects. There are problems on the location that need to be resolved. Details: Failed to provision file events resources during queue.create operation.
Here are the roles I have assigned to the Databricks Access Connector in Azure:
I have tried recreating the external location from scratch.
Here's my code:
df = (spark.readStream
    .format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.useManagedFileEvents", True)
    .option("cloudFiles.schemaLocation", SCHEMA_PATH)
    .option("cloudFiles.inferColumnTypes", True)
    .option("cloudFiles.schemaEvolutionMode", "addNewColumns")
    .load(INPUT_FILES_PATH)
)
Thank you for taking the time to read my question!
Monday
Hi @mjtd,
The com.databricks.sql.util.UnexpectedHttpStatus: Failed to list objects error during migration to File Events mode typically indicates one of two things: either the External Location hasn't had File Events enabled yet in Unity Catalog, or the Databricks Access Connector is missing one or more required Azure RBAC roles.
Step 1 — Enable File Events on the External Location first
Before setting cloudFiles.useManagedFileEvents=True in your stream, you must explicitly enable File Events on the External Location in Unity Catalog. Without this, the useManagedFileEvents flag will fail at the infrastructure setup phase. You can enable it with SQL or via the Unity Catalog UI: Catalog Explorer → External Locations → (your location) → Edit → Enable File Events. You can verify the setup is correct by clicking Test Connection — look for a green checkmark on the File Events Read item.
Step 2 — Assign the correct RBAC roles to the Access Connector
The Access Connector for Azure Databricks needs the following Azure roles (in addition to Storage Blob Data Contributor, which you may already have):
Step 3 — Register the EventGrid resource provider
If this is your first time using File Events in the subscription, make sure Microsoft.EventGrid is registered.
Step 4 — Stop the old stream, tear down legacy notification resources, then restart
Per the official migration guide, you should:
df = (spark.readStream
    .format("cloudFiles")
    .option("cloudFiles.format", "parquet")           # or json/csv/etc.
    .option("cloudFiles.useManagedFileEvents", "true")  # the new flag
    .load("<path>")
)
Let me know if this helps. And if it did, please give it a 👍 Kudo — it helps others find the answer too!
Monday
Thanks for the quick response!
1. File events are enabled on the external location, but the connection test failed with the same queue.create permission error. I have no clue which permission is still missing.
2. Microsoft.EventGrid is registered.
3. I did not use file notifications before. I've stopped the old stream, and I restart it on every try.
Monday
Hi @mjtd,
A few things to double‑check... Make sure you’re running in a Unity Catalog enabled workspace and that your source path is under a UC external location or volume, not just a raw storage URL. Auto Loader file notifications with managed file events are only supported on ADLS Gen2 (abfss://) and UC volumes over it. They are not supported on Azure Blob Storage (blob.core.windows.net). If your external location points at Blob, you’ll need to either move to ADLS Gen2 or fall back to directory listing/classic notifications.
Also, make sure your cluster is on Databricks Runtime 14.3 LTS or above.
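As a quick sanity check on the storage-endpoint requirement above, here's a minimal sketch (the helper name is my own, not a Databricks API) that flags source paths which can't use managed file events:

```python
from urllib.parse import urlparse

def supports_managed_file_events(path: str) -> bool:
    """Managed file events require ADLS Gen2 (abfss://); Azure Blob Storage
    endpoints (wasbs:// / blob.core.windows.net) are not supported."""
    parsed = urlparse(path)
    return (
        parsed.scheme == "abfss"
        and not parsed.netloc.endswith("blob.core.windows.net")
    )
```

Running this against your INPUT_FILES_PATH before the stream starts can save a failed test-connection round trip.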
If all of the above are true, and the test connection still fails with a queue.create error, checking the storage account’s Activity Log for failed queueServices/queues/write operations (and the caller identity) is the next best step.
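If you export the Activity Log to JSON, a small sketch for pulling out exactly those failed queue writes — the entry field names here are an assumption based on the standard Activity Log export shape, so adjust to your export:

```python
def failed_queue_writes(entries):
    """Return caller/status/time for failed queue create-or-update operations,
    so you can see which identity was actually denied."""
    op = "Microsoft.Storage/storageAccounts/queueServices/queues/write"
    return [
        {
            "caller": e.get("caller"),
            "status": e.get("status"),
            "time": e.get("eventTimestamp"),
        }
        for e in entries
        if e.get("operationName") == op and e.get("status") == "Failed"
    ]
```

If the caller in those entries isn't the identity you've been granting roles to, that mismatch is your answer.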
If this answer resolves your question, could you mark it as “Accept as Solution”? That helps other users quickly find the correct fix.
Tuesday
I'm so sorry for this. Turns out I've been assigning roles to the wrong service account. I recently got access to the Storage Credential in Databricks and noticed it points at a different service account.
These roles were enough:
Thanks for being so helpful!
Tuesday
Hi @mjtd,
No problem. Glad you're making progress.
If this answer resolves your question, could you mark it as “Accept as Solution”? That helps other users quickly find the correct fix.