We are connecting to the storage account through mount points backed by a service principal, and we use those mount points as the locations for external tables in the Hive metastore. We are now moving to a direct service principal setup, so we need to change the external table locations from the dbfs:/mnt/... paths to the abfss:// protocol (the physical location stays the same; only the access path changes to the abfss URL). I can ALTER the location of the existing tables and that works fine, but the tables then no longer open in Catalog Explorer.
I also tried dropping the tables and recreating them with the new abfss location, but the tables are still inaccessible in Catalog Explorer.
The service principal setup itself is running fine: I can load data directly from the storage account as well as from the tables in the Hive metastore.
*The cluster is not Unity Catalog enabled, and we do not need it to be.
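For reference, the working service principal access is configured at the session level roughly like this (the storage account name, secret scope/key, and Azure AD IDs below are placeholders, not our actual values):

# Session-level OAuth configuration for direct abfss access via a service principal.
# <application-id>, <secret-scope>, <secret-key>, and <directory-id> are placeholders.
spark.conf.set("fs.azure.account.auth.type.storageaccount.dfs.core.windows.net", "OAuth")
spark.conf.set("fs.azure.account.oauth.provider.type.storageaccount.dfs.core.windows.net",
               "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider")
spark.conf.set("fs.azure.account.oauth2.client.id.storageaccount.dfs.core.windows.net",
               "<application-id>")
spark.conf.set("fs.azure.account.oauth2.client.secret.storageaccount.dfs.core.windows.net",
               dbutils.secrets.get(scope="<secret-scope>", key="<secret-key>"))
spark.conf.set("fs.azure.account.oauth2.client.endpoint.storageaccount.dfs.core.windows.net",
               "https://login.microsoftonline.com/<directory-id>/oauth2/token")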
code:
1. Altering the existing table location:
spark.sql("""
ALTER TABLE schema.table
SET LOCATION 'abfss://container@storageaccount.dfs.core.windows.net/foldername/filename.delta'
""")
2. Creating the external table:
CREATE EXTERNAL TABLE hive_metastore.schema.table
USING DELTA
LOCATION 'abfss://container@storageaccount.dfs.core.windows.net/foldername/filename.delta'
Error:
Failure to initialize configuration for storage account stalyceprdevbdls001.dfs.core.windows.net: Invalid configuration value detected for fs.azure.account.key
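For completeness, a direct load over abfss in the same session works fine (same placeholder path as in the code above):

# Sanity check: reading the Delta folder directly over abfss succeeds in the notebook session
df = spark.read.format("delta").load(
    "abfss://container@storageaccount.dfs.core.windows.net/foldername/filename.delta")
df.show(5)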