Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Writing to Foreign catalog

Fatimah-Tariq
New Contributor III

I have a notebook job where I do some processing and write tables into a foreign catalog. It has been running successfully for about a year. The job is scheduled and runs on a job cluster with DBR 16.2.

Recently, I had to add a new notebook to the job that does almost the same operations. But while testing this notebook standalone on an interactive cluster created with DBR 16.4 (because 16.2 is deprecated and no longer available for interactive clusters), it throws an error on the code block where I write to the foreign catalog: "SparkConnectGrpcException: (java.lang.SecurityException) PERMISSION_DENIED: Only READ credentials can be retrieved for foreign tables."

Can anyone please help me understand the scenario and possible solutions to test my new notebook in an interactive env?

Please note that the new notebook does not throw an error on this write statement when I add it to my scheduled job (I believe that's because of the older DBR version on the job cluster, which also still runs all my other notebooks successfully). But I need to test my code in an interactive environment before adding it to the prod job. Any help is appreciated.

ACCEPTED SOLUTION

Louis_Frolio
Databricks Employee

Greetings @Fatimah-Tariq , thanks for calling me out to help with your issue.  I did some digging and here is what I found:

Let’s break down what’s really happening here and a few safe, predictable ways to test your new notebook interactively.

What’s happening

Foreign catalog tables created through Lakehouse Federation are, by design, read-only in Databricks. Unity Catalog will happily vend read credentials for them, but it will not issue write credentials under any circumstance. So when you test a write against a foreign table, you’ll see exactly the error you pasted: “Only READ credentials can be retrieved for foreign tables.”

That message is UC doing its job. Any attempt to write to a federated table or a path governed by a foreign catalog’s authorization boundary will trigger UC to ask for write credentials it refuses to provide. Hence the failure.

Now, why you’re still seeing success in your scheduled job on DBR 16.2: older runtimes and certain job-cluster paths can still fall back to legacy auth. If that happens, UC’s stricter foreign-table checks never fire, which explains why the job runs cleanly while your 16.4 interactive cluster shuts it down immediately.
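One way to make that enforcement visible while testing interactively is to fail fast in the notebook before Spark ever attempts the write. This is a hypothetical helper, not anything Databricks provides; the catalog names in the set are assumptions for illustration:

```python
# Hypothetical pre-flight check: fail fast with a clear message before
# attempting a write that Unity Catalog will reject anyway.
FOREIGN_CATALOGS = {"sqlserver_fed", "glue_fed"}  # placeholder names

def assert_writable(table_fqn: str, foreign_catalogs=FOREIGN_CATALOGS) -> str:
    """Raise early if the three-part table name targets a federated catalog."""
    catalog = table_fqn.split(".", 1)[0]
    if catalog in foreign_catalogs:
        raise PermissionError(
            f"{table_fqn!r} is in foreign catalog {catalog!r}; Unity Catalog "
            "only vends READ credentials for foreign tables."
        )
    return table_fqn

# assert_writable("dev_catalog.staging.orders")  # passes
# assert_writable("sqlserver_fed.dbo.orders")    # raises PermissionError
```

Calling this at the top of the notebook turns the opaque gRPC error into a message that names the offending catalog.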

How to test safely in an interactive environment

Depending on what outcome you need, here are the cleanest, least risky ways to validate:

• Test on a job cluster.

Spin up a small dev job using a job cluster and manually trigger it with your new notebook. This mirrors prod behavior and avoids the interactive cluster differences entirely. It’s the simplest way to keep things safe and predictable.
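A one-time run against a job cluster can be described with a small spec like the one below. Every name here (notebook path, node type, runtime string) is a placeholder, and the SDK call is sketched in a comment since exact submit types vary by SDK version:

```python
# Sketch of a one-off dev run on a job cluster, mirroring the prod setup.
# All identifiers are placeholders, not values from the original thread.
dev_run_spec = {
    "run_name": "interactive-test-foreign-catalog-write",
    "tasks": [
        {
            "task_key": "test_new_notebook",
            "notebook_task": {"notebook_path": "/Repos/dev/new_notebook"},
            "new_cluster": {
                # Pin the same runtime the prod job uses so behavior matches.
                "spark_version": "16.2.x-scala2.12",
                "node_type_id": "Standard_DS3_v2",
                "num_workers": 1,
            },
        }
    ],
}

# Roughly, with the Databricks SDK this could be submitted as a one-time run:
# from databricks.sdk import WorkspaceClient
# WorkspaceClient().jobs.submit(run_name=dev_run_spec["run_name"], ...)
```

The point of the spec is that the dev run pins the same DBR version as prod, so any runtime-dependent enforcement difference shows up before the notebook reaches the scheduled job.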

• Point your writes to a UC-native target when testing.

On your 16.4 interactive cluster, redirect writes to a catalog/schema you actually own—something UC-managed or a UC external location that isn’t federated. Validate the transformations there, then switch back to the foreign-catalog target only in the job where you already know it works today.
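One low-friction way to do that redirect is to resolve the write target from an environment flag, so only the scheduled job ever points at the foreign catalog. A minimal sketch, with all catalog and schema names assumed for illustration:

```python
# Hypothetical target resolver: write to a UC-native dev schema while
# testing interactively, and only use the foreign catalog in the prod job.
TARGETS = {
    "dev":  "dev_catalog.staging",  # UC-managed, writable interactively
    "prod": "sqlserver_fed.dbo",    # foreign catalog, read-only outside prod
}

def target_table(table: str, env: str = "dev") -> str:
    """Build the fully qualified table name for the current environment."""
    return f"{TARGETS[env]}.{table}"

# In the notebook, the write then stays identical in both environments:
# df.write.mode("overwrite").saveAsTable(target_table("orders", env="dev"))
```

With this shape, promoting the notebook to the job is a one-line flag change rather than an edit to every write statement.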

• If you must write to the same path as production, use fallback mode sparingly.

If the target path sits under a federated connection’s authorized location, you can temporarily enable fallback mode on the UC External Location. That lets the write go through using legacy auth instead of UC’s foreign-table checks. It works, but treat it like a temporary testing valve, not an everyday setting.

• If this is Glue/HMS federation, write through hive_metastore.

Federated HMS objects are read-only, but hive_metastore (the legacy metastore) can still be writable depending on how the environment is set up. For interactive work, write to hive_metastore instead of the federated Glue catalog object.

• Double-check that your target location isn’t explicitly marked read-only.

That’s a different enforcement path, but it’s worth ruling out if permissions were recently tightened.

Why this matters long-term

Lakehouse Federation’s contract is clear: foreign tables are read-only from UC’s perspective. If your production job is slipping through on legacy or fallback behavior, treat that as a flag to revisit the pattern. A safer long-term approach is to stage data into UC-native storage and then load it into the remote system using that system’s writer or connector. It keeps you out of the line of fire as runtimes continue to tighten enforcement.
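The staged pattern above can be sketched in two steps: land the result in a UC-managed table, then push it to the remote database with Spark's JDBC writer rather than through the foreign catalog. Host, table, and credential names are placeholders, and the Spark calls are shown as comments since they need a live cluster:

```python
# Sketch of the staged write pattern. All names are illustrative.
def jdbc_options(host: str, database: str, user: str, password: str) -> dict:
    """Assemble JDBC writer options for a SQL Server target (illustrative)."""
    return {
        "url": f"jdbc:sqlserver://{host}:1433;databaseName={database}",
        "user": user,
        "password": password,
        "dbtable": "dbo.orders",  # placeholder remote table
    }

# Step 1: stage into a UC-managed table (writable, governed by UC).
# df.write.mode("overwrite").saveAsTable("dev_catalog.staging.orders")
#
# Step 2: load the staged data into the remote system with its own writer.
# (spark.table("dev_catalog.staging.orders")
#      .write.format("jdbc")
#      .options(**jdbc_options(host, db, user, pw))
#      .mode("append")
#      .save())
```

This keeps Unity Catalog governing the staged copy while the remote system's native write path handles the final load, so nothing depends on legacy or fallback behavior.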

Hope this helps, Louis.


7 REPLIES

Fatimah-Tariq
New Contributor III

Hi @Louis_Frolio, I've seen you guiding a lot of fellow community members. It would be really great if you could help with an informed suggestion on my scenario too. Thanks!

saurabh18cs
Honored Contributor II

Hi @Fatimah-Tariq, could you ask the admin who looks after Unity Catalog for one of the following permissions? Granting at a higher level passes through to the lower levels. (Note: Unity Catalog's write privilege is MODIFY; writing also requires USE CATALOG and USE SCHEMA on the parents.)

-- Grant MODIFY at the catalog level (inherited by schemas and tables)
GRANT MODIFY ON CATALOG <catalog_name> TO `<user_or_group>`;

-- Or grant on a specific schema
GRANT MODIFY ON SCHEMA <catalog_name>.<schema_name> TO `<user_or_group>`;

-- Or grant on a specific table
GRANT MODIFY ON TABLE <catalog_name>.<schema_name>.<table_name> TO `<user_or_group>`;


Dear @Louis_Frolio, thank you for the detailed answer. It certainly cleared things up for me. I totally agree with the suggestion you provided for the long-term stability of the production job, and I will surely be implementing that.

About the solutions you provided for a quick test in an interactive environment, I guess testing on a job cluster is the best one to try first. I even tried doing it myself earlier, but I could not figure out how to run a notebook interactively with a job cluster; job clusters do not appear in the cluster selection dropdown in the notebook. It would be really great if you could point me to any resource that explains how to run notebooks with a job cluster.
Thank you!

Louis_Frolio
Databricks Employee

@Fatimah-Tariq, that is right: you cannot use job clusters interactively. You would need to run a small workflow job. Cheers, Lou.

iyashk-DB
Databricks Employee

This error is expected when writing through a foreign catalog (Lakehouse Federation): foreign tables are read‑only, and Unity Catalog will only vend READ credentials for them, which surfaces as “Only READ credentials can be retrieved for foreign tables.”
On DBR 16.4 your interactive cluster is enforcing that policy; the older job on DBR 16.2 likely avoids the check (for example by writing to a non‑authorized path or via legacy/non‑UC credentials), so it appears to succeed.

To test interactively, either write to a managed UC or non‑foreign external schema (recommended), or materialize from the foreign table into UC using a materialized view and operate on the copy; if you must write to the same storage path that’s covered by a foreign catalog, remove the path from the foreign catalog’s authorized paths or enable the external location’s “fallback mode” to allow instance‑profile writes (with care, since it bypasses UC write governance).

Fatimah-Tariq
New Contributor III

Thank you @Louis_Frolio! Your suggestions really helped me understand the scenario.