Hi team,
After implementing Unity Catalog and starting to migrate external tables from the legacy Hive metastore, I am seeing in articles that we need to change our workloads to use the three-level namespace (catalog.schema.table).
For example: I have 50 notebooks that use the legacy Hive metastore. Today they don't use a three-level namespace; we run them with standard Spark queries (schema.table). After migrating to Unity Catalog, if I am not relying on the default hive_metastore catalog at the Spark level, do I need to manually change the Spark/Python code in those notebooks to use the catalog followed by schema.tablename for everything to work without issues?
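For context, here is my current understanding of the options, as a minimal sketch. It assumes a Databricks notebook (where `spark` is predefined) and that the migrated tables live in a Unity Catalog catalog named main with a schema sales and table orders (all illustrative names):

```python
# Option 1: set the default catalog once per notebook/session, so the
# existing two-level references (schema.table) resolve against Unity
# Catalog instead of hive_metastore:
spark.sql("USE CATALOG main")

# This existing two-level query would now resolve to main.sales.orders:
spark.sql("SELECT * FROM sales.orders LIMIT 10").show()

# Option 2: avoid touching notebook code entirely by setting the default
# catalog in the cluster's Spark config (as I understand the docs):
#   spark.databricks.sql.initial.catalog.name main
```

Is a single `USE CATALOG` statement (or the cluster-level config) enough, or are there cases where the full three-level name is still required in the code?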
Note: I have seen an article saying we can use both the legacy Hive metastore and Unity Catalog side by side until we are confident the functionality is intact for our requirements, and only then drop the legacy tables. If we drop the legacy tables, do we need to change all notebooks to use the catalog and three-level namespaces?
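From what I have read, the legacy tables stay addressable under the hive_metastore catalog in a UC-enabled workspace, so the two can be compared side by side during the transition. A minimal sketch of what I mean, with illustrative names (main, sales.orders):

```python
# Read the same logical table from both stores during the transition:
legacy_df = spark.table("hive_metastore.sales.orders")  # legacy Hive metastore
uc_df = spark.table("main.sales.orders")                # migrated Unity Catalog table

# Simple sanity check before we would drop the legacy table:
print("legacy rows:", legacy_df.count(), "| uc rows:", uc_df.count())
```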
Also, do we have a backup mechanism in case we want to revert to the legacy metastore? Is that an INFORMATION_SCHEMA backup, or some other mechanism?
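For example, would snapshotting the metadata ourselves be the expected approach? A rough sketch of what I have in mind, assuming INFORMATION_SCHEMA is read-only metadata rather than a real backup (the catalog and target table names are illustrative):

```python
# Snapshot the catalog's table inventory before dropping anything, so we
# at least keep a record of what existed and where:
snapshot = spark.sql("""
    SELECT table_catalog, table_schema, table_name, table_type
    FROM main.information_schema.tables
""")
snapshot.write.mode("overwrite").saveAsTable("main.audit.table_inventory")
```

Or is there a supported restore/rollback mechanism I am missing?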