Unable to access external table created by DLT
Saturday
I originally set the Storage location in my DLT pipeline to abfss://{container}@{storageaccount}.dfs.core.windows.net/...
But when running the DLT pipeline I got the following error:
So I decided to leave the above Storage location blank and define the path parameter in @dlt.table instead:
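Roughly like this (a minimal sketch; the table name, source path, and abfss path are placeholders, not my actual code):

```python
import dlt

# Sketch only: names and paths are placeholders.
# "path" tells DLT where to write the table's underlying data files in ADLS Gen2.
@dlt.table(
    name="my_table",
    path="abfss://mycontainer@mystorageaccount.dfs.core.windows.net/dlt/my_table",
)
def my_table():
    return spark.read.format("json").load(
        "abfss://mycontainer@mystorageaccount.dfs.core.windows.net/raw/"
    )
```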
With that change the DLT pipeline runs fine, and I can see the files at the path above, which I can also read from a notebook:
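For example, something along these lines works in a notebook (placeholder path matching the sketch above):

```python
# Assumes the notebook's cluster is already configured with the
# fs.azure.account.* settings needed to reach the storage account.
df = spark.read.format("delta").load(
    "abfss://mycontainer@mystorageaccount.dfs.core.windows.net/dlt/my_table"
)
display(df)
```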
But when I go over to the SQL Editor and use the Serverless Starter Warehouse, I can't access the tables:
I know it's probably something to do with not running spark.conf.set("fs.azure.account..."), but how do I get around that? It'd also be nice not to have to run those lines in all my notebooks; I'm guessing there's a way to add them to the cluster configuration or something?
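For context, these are the kind of lines I mean, i.e. the usual service-principal OAuth settings for ADLS Gen2 (everything below is a placeholder sketch, not my actual values):

```python
# Standard ADLS Gen2 OAuth config via a service principal; all values are placeholders.
storage_account = "mystorageaccount"
spark.conf.set(f"fs.azure.account.auth.type.{storage_account}.dfs.core.windows.net", "OAuth")
spark.conf.set(
    f"fs.azure.account.oauth.provider.type.{storage_account}.dfs.core.windows.net",
    "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
)
spark.conf.set(
    f"fs.azure.account.oauth2.client.id.{storage_account}.dfs.core.windows.net",
    "<application-id>",
)
spark.conf.set(
    f"fs.azure.account.oauth2.client.secret.{storage_account}.dfs.core.windows.net",
    dbutils.secrets.get(scope="<scope>", key="<service-credential-key>"),
)
spark.conf.set(
    f"fs.azure.account.oauth2.client.endpoint.{storage_account}.dfs.core.windows.net",
    "https://login.microsoftonline.com/<tenant-id>/oauth2/token",
)
```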
Before anyone suggests upgrading to Unity Catalog: that is indeed my plan, but I want to at least prove this works with the Hive metastore first.
Tuesday - last edited Tuesday
Hi @Tommy,
Thanks for your question.
I would encourage you to verify temporarily with a Pro SQL Warehouse instead of the Serverless SQL Warehouse, given the compute differences between the two: Pro compute resides in your data plane, while Serverless compute is Databricks-managed. If it works as expected on a Pro Warehouse, that is a good indication the issue lies in the network path to the Databricks-managed Serverless compute. If that turns out to be the case, docs such as this can guide you further on the Serverless setup: https://learn.microsoft.com/en-us/azure/databricks/admin/sql/serverless.
Additionally, workspace-level SQL Warehouse configurations can be managed by a workspace admin via:
- Settings > Workspace admin > Compute, then clicking "Manage" next to "SQL warehouses and serverless compute".
- This is where you would manage configs such as the `spark.conf.set("fs.azure.account...")` settings, as applicable, for all Warehouses in the workspace (see the sketch below).
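As a rough sketch of what goes into that data access configuration box (placeholders throughout; the keys take a `spark.hadoop.`-prefixed key/value form rather than `spark.conf.set(...)` calls, and secrets can be referenced with the `{{secrets/...}}` syntax):

```
spark.hadoop.fs.azure.account.auth.type.mystorageaccount.dfs.core.windows.net OAuth
spark.hadoop.fs.azure.account.oauth.provider.type.mystorageaccount.dfs.core.windows.net org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider
spark.hadoop.fs.azure.account.oauth2.client.id.mystorageaccount.dfs.core.windows.net <application-id>
spark.hadoop.fs.azure.account.oauth2.client.secret.mystorageaccount.dfs.core.windows.net {{secrets/<scope>/<secret-name>}}
spark.hadoop.fs.azure.account.oauth2.client.endpoint.mystorageaccount.dfs.core.windows.net https://login.microsoftonline.com/<tenant-id>/oauth2/token
```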
Hope this helps

