I have created a cluster with the advanced options set to use an external Hive metastore. I then created a mount for my data lake using a service principal, and created a table on top of that mount. As far as I know, the table's metadata will be stored in the SQL DB, but I would like to know where the mount info is stored. Will it be in the SQL DB or in Databricks?
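For reference, this is roughly what I did (a sketch only; the secret scope, storage account, mount point, and database/table names below are placeholders, not my real values):

```python
# Mount ADLS Gen2 with a service principal, then create a table on the mount.
# Scope/key names, storage account, and paths are placeholders.
configs = {
    "fs.azure.account.auth.type": "OAuth",
    "fs.azure.account.oauth.provider.type":
        "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
    "fs.azure.account.oauth2.client.id": dbutils.secrets.get("my-scope", "sp-client-id"),
    "fs.azure.account.oauth2.client.secret": dbutils.secrets.get("my-scope", "sp-client-secret"),
    "fs.azure.account.oauth2.client.endpoint":
        "https://login.microsoftonline.com/<tenant-id>/oauth2/token",
}

dbutils.fs.mount(
    source="abfss://mycontainer@mystorageacct.dfs.core.windows.net/",
    mount_point="/mnt/datalake",
    extra_configs=configs,
)

# The table is defined against the mount path, so the external metastore
# (the SQL DB) records its LOCATION as a /mnt/... path.
spark.sql("""
    CREATE TABLE my_db.my_table
    USING DELTA
    LOCATION '/mnt/datalake/tables/my_table'
""")
```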
Further to this, I created another workspace for another team and created a cluster with the same advanced options, linking it to the same external metastore. I have not created the mount in the new workspace. If I access the table there, will it work? If yes, that would mean the mount info lives in the SQL DB. If not, do I need to create a mount with the same name in the new workspace? And what if the original mount points to prod while the new workspace's mount points to dev? Is that a possible scenario?
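This is how I was planning to verify it from the new workspace (again a sketch, reusing the placeholder names from above):

```python
# In the new workspace: list the mounts that exist here, then try the table.
for m in dbutils.fs.mounts():
    print(m.mountPoint, "->", m.source)

# If the metastore stores the table location as /mnt/datalake/... and that
# mount does not exist in this workspace, I expect this read to fail with
# a path-not-found style error.
spark.table("my_db.my_table").show(5)
```

My assumption for the prod/dev part of the question: if I ran `dbutils.fs.mount` in the new workspace with the same mount point `/mnt/datalake` but a `source` pointing at the dev storage account, the same table name would then resolve to the dev data, since the metastore only holds the `/mnt/...` path. Is that correct?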
Since the new workspace belongs to a separate team, we don't want them to inherit our mount info from the SQL DB.