Hi @Chiran-Gajula,
Thanks for the additional context. Unfortunately, there is no way to rename a catalog without breaking existing references, so some change to pipelines/notebooks is unavoidable. Given that constraint, the best you can do is structure things so you don't have to touch everything at once: create a new catalog plus proxy views, which gives you the new name without touching any old code. The idea is to keep training as the real catalog and layer development_training on top of it for new consumers.
For example, create a catalog and then for each schema in training, create a matching schema and views that proxy the original tables.
CREATE CATALOG development_training;
-- example for one schema
CREATE SCHEMA development_training.sales;
-- for each table in training.sales
CREATE VIEW development_training.sales.orders AS
SELECT * FROM training.sales.orders;
Your existing code using training.sales.* keeps working unchanged, and new code can start using development_training.sales.* immediately. Reads through the views see the same data, but writes still need to target the base tables (views are not generally updatable). You can script the view creation from training.information_schema.tables so you don't have to do it by hand. This doesn't rename the catalog, but it gives you the new name everywhere going forward without coordinating a big-bang change.
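To give a sense of what that scripting could look like, here is a minimal sketch for a Databricks notebook. It assumes the catalog names from this thread (training, development_training) and that the notebook's spark session is available; the helper only builds the DDL strings, so you can review them before executing anything.

```python
# Sketch: generate proxy views for every table in the source catalog.
# Catalog names are from this thread; adjust to your environment.

def build_proxy_view_ddl(schema: str, table: str,
                         source_catalog: str = "training",
                         target_catalog: str = "development_training") -> list[str]:
    """Return the DDL needed to mirror one table as a view in the new catalog."""
    return [
        # Ensure the matching schema exists in the new catalog.
        f"CREATE SCHEMA IF NOT EXISTS {target_catalog}.{schema}",
        # Proxy view that simply selects from the original table.
        f"CREATE OR REPLACE VIEW {target_catalog}.{schema}.{table} "
        f"AS SELECT * FROM {source_catalog}.{schema}.{table}",
    ]

# In a notebook you would drive this from information_schema, roughly:
# tables = spark.sql(
#     "SELECT table_schema, table_name "
#     "FROM training.information_schema.tables "
#     "WHERE table_schema <> 'information_schema'"
# ).collect()
# for row in tables:
#     for stmt in build_proxy_view_ddl(row.table_schema, row.table_name):
#         spark.sql(stmt)
```

Keeping the DDL generation separate from execution also lets you dump the statements to a file first and have them reviewed before anything runs against the metastore.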
Over time, you can move write pipelines to target real tables under development_training (or keep training as the physical home and accept that development_training is a logical alias via views). Once you're confident nothing critical uses training.* anymore, you can deprecate or drop it.
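If you do eventually want development_training to become the physical home, the per-table promotion can follow the same pattern: drop the proxy view and replace it with a real table. The sketch below is an assumption on my part, using Delta's DEEP CLONE to copy the data; verify the clone syntax and permissions for your workspace before relying on it.

```python
# Sketch: promote one proxied table from a view to a real table in the
# new catalog. Catalog names follow this thread; DEEP CLONE is assumed
# to be available (Delta tables on Unity Catalog).

def build_promotion_ddl(schema: str, table: str,
                        source_catalog: str = "training",
                        target_catalog: str = "development_training") -> list[str]:
    """Return DDL that swaps the proxy view for a physical copy of the table."""
    return [
        # Remove the proxy view so the table name is free.
        f"DROP VIEW IF EXISTS {target_catalog}.{schema}.{table}",
        # Materialise a full copy of the source table under the new catalog.
        f"CREATE TABLE {target_catalog}.{schema}.{table} "
        f"DEEP CLONE {source_catalog}.{schema}.{table}",
    ]

# As with the view script, you would run each statement via spark.sql(...)
# in a notebook, one table at a time, after repointing the write pipeline.
```

Doing this table by table means each pipeline can be cut over on its own schedule rather than in one coordinated change.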
There's no way to avoid some coordination if the physical catalog name itself must change, but the new catalog + proxy views approach avoids a disruptive all-at-once refactor and gives you practical alias-like behaviour for the read path. I appreciate this may sound convoluted, and you may instead decide to do the actual migration by working with the various teams, but I thought I'd share it anyway because coordinating multiple teams can be a challenge in larger organisations.
If this answer resolves your question, could you mark it as "Accept as Solution"? That helps other users quickly find the correct fix.
Regards,
Ashwin | Delivery Solution Architect @ Databricks
Helping you build and scale the Data Intelligence Platform.
***Opinions are my own***