Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

How to update alias for catalogs

Chiran-Gajula
New Contributor III

Greetings,

Is there a way to create an alias for a Databricks catalog?

 

Current catalog name: training
Desired alias: development_training

 

The goal is that users connecting to either name should see the same schemas, tables, and data.

G.Chiranjeevi
3 REPLIES

Ashwin_DSA
Databricks Employee

Hi @Chiran-Gajula,

No. Unity Catalog doesn't support aliases for catalogs as of today, so you can't have both training and development_training resolve to the same catalog object. Catalog names must be unique identifiers. Aliasing is only available for some other objects (for example, model aliases), not for catalogs.

I'm curious to understand what you intend to achieve and whether there are ways to do it. Can you elaborate?

If this answer resolves your question, could you mark it as "Accept as Solution"? That helps other users quickly find the correct fix.

Regards,
Ashwin | Delivery Solution Architect @ Databricks
Helping you build and scale the Data Intelligence Platform.
***Opinions are my own***

Chiran-Gajula
New Contributor III

I have a use case where I need to rename a catalog without impacting existing pipelines and notebooks, as the current catalog name is referenced across multiple applications. Instead of coordinating with multiple teams to update it everywhere, I was exploring whether there's an alternative solution, specifically whether creating an alias for the catalog is possible.

 

 

G.Chiranjeevi

Hi @Chiran-Gajula,

Thanks for the additional context. Unfortunately, there is no way to rename a catalog without breaking existing references, so some change to pipelines and notebooks is unavoidable. Given that constraint, the best you can do is structure things so you don't have to touch everything at once: keep training as the real catalog, and layer a new development_training catalog of proxy views on top for new consumers. That gives you the new name without touching old code.

For example, create the new catalog, and then for each schema in training, create a matching schema and views that proxy the original tables.

CREATE CATALOG development_training;
-- example for one schema
CREATE SCHEMA development_training.sales;

-- for each table in training.sales
CREATE VIEW development_training.sales.orders AS
SELECT * FROM training.sales.orders;

Your existing code using training.sales.* keeps working unchanged, and new code can start using development_training.sales.* immediately. Reads through either name see the same data, but writes still need to go to the base tables (views are not generally updatable). You can script the view creation using information_schema.tables so you don't have to do this by hand. This doesn't rename the catalog, but it gets you the new name everywhere going forward without coordinating a big-bang change.
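To illustrate the scripting idea, here is a rough Python sketch. build_view_ddl is an illustrative helper (not a Databricks API), and the (schema, table) pairs are assumed to come from querying training.information_schema.tables:

```python
def build_view_ddl(src_catalog, dst_catalog, tables):
    """Build DDL statements that proxy every table in src_catalog
    under dst_catalog via views.

    tables: list of (schema, table) pairs, e.g. collected from
    <src_catalog>.information_schema.tables.
    """
    statements = []
    seen_schemas = set()
    for schema, table in tables:
        # Create each target schema once, before its first view.
        if schema not in seen_schemas:
            statements.append(
                f"CREATE SCHEMA IF NOT EXISTS {dst_catalog}.{schema};")
            seen_schemas.add(schema)
        statements.append(
            f"CREATE OR REPLACE VIEW {dst_catalog}.{schema}.{table} "
            f"AS SELECT * FROM {src_catalog}.{schema}.{table};")
    return statements

# Example with illustrative schema/table names
for stmt in build_view_ddl(
        "training", "development_training",
        [("sales", "orders"), ("sales", "customers"), ("hr", "employees")]):
    print(stmt)
```

In a notebook you would collect the pairs from the information_schema (for example, selecting table_schema and table_name from training.information_schema.tables) and execute each generated statement with spark.sql.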

Over time, you can move write pipelines to target real tables under development_training (or keep training as the physical home and accept that development_training is a logical alias via views). Once youโ€™re confident nothing critical uses training.* anymore, you can deprecate or drop it.

There's no way to avoid some coordination if the physical catalog name itself must change, but the new catalog plus proxy views approach lets you avoid a disruptive all-at-once refactor and gives you practical alias-like behaviour for the read path. I appreciate this may sound convoluted, and you may instead prefer to do the actual migration by working with the various teams, but I thought I'd share it anyway because coordinating across multiple teams can be a challenge in larger organisations.

If this answer resolves your question, could you mark it as "Accept as Solution"? That helps other users quickly find the correct fix.

Regards,
Ashwin | Delivery Solution Architect @ Databricks
Helping you build and scale the Data Intelligence Platform.
***Opinions are my own***