Wednesday
I have a scheduled notebook job that does some processing and writes tables to a foreign catalog. It has been running successfully for about a year on a job cluster with DBR 16.2.
Recently, I had to add a new notebook to the job that performs almost the same operations. While testing this notebook standalone on an interactive cluster created with DBR 16.4 (because 16.2 is deprecated and no longer available for interactive clusters), it throws an error on the code block where I write to the foreign catalog: "SparkConnectGrpcException: (java.lang.SecurityException) PERMISSION_DENIED: Only READ credentials can be retrieved for foreign tables."
Can anyone please help me understand the scenario and possible solutions for testing my new notebook in an interactive environment?
Please note that the new notebook does not throw an error on this write statement when I add it to my scheduled job (I believe that's because of the older DBR version on the job cluster, which still runs all my other notebooks successfully too). But I need to test my code in an interactive environment first, before adding it to the prod job. Any help is appreciated.
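For context, here is a minimal sketch of the shape of the failing write; the catalog, schema, and table names are just placeholders, not my real ones:
# Minimal shape of the failing write (all names are placeholders).
# foreign_cat is a Lakehouse Federation foreign catalog.
df = spark.table("main.staging.source_table")
processed = df.filter("event_date >= '2024-01-01'")

# This line raises PERMISSION_DENIED on the DBR 16.4 interactive cluster:
processed.write.mode("overwrite").saveAsTable("foreign_cat.target_schema.target_table")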
Thursday
Hi @Louis_Frolio, I've seen you guiding a lot of fellow community members. It would be really great if you could help me with an informed suggestion on my scenario too. Thanks!
Thursday
Hi @Fatimah-Tariq, is it possible for you to ask for one of the following permissions from the admin looking after Unity Catalog? Granting permissions at a higher level will pass through to the lower levels.
-- Grant MODIFY on the catalog (MODIFY is UC's write privilege; there is no WRITE privilege)
GRANT MODIFY ON CATALOG <catalog_name> TO `<user_or_group>`;
-- Or grant on a specific schema
GRANT MODIFY ON SCHEMA <catalog_name>.<schema_name> TO `<user_or_group>`;
-- Or grant on a specific table
GRANT MODIFY ON TABLE <catalog_name>.<schema_name>.<table_name> TO `<user_or_group>`;
-- Note: the grantee also needs USE CATALOG and USE SCHEMA on the parent objects.
Thursday
Greetings @Fatimah-Tariq, thanks for calling me out to help with your issue. I did some digging and here is what I found:
Let's break down what's really happening here and a few safe, predictable ways to test your new notebook interactively.
What's happening
Foreign catalog tables created through Lakehouse Federation are, by design, read-only in Databricks. Unity Catalog will happily vend read credentials for them, but it will not issue write credentials under any circumstance. So when you test a write against a foreign table, you'll see exactly the error you pasted: "Only READ credentials can be retrieved for foreign tables."
That message is UC doing its job. Any attempt to write to a federated table or a path governed by a foreign catalog's authorization boundary will trigger UC to ask for write credentials it refuses to provide. Hence the failure.
Now, why you're still seeing success in your scheduled job on DBR 16.2: older runtimes and certain job-cluster paths can still fall back to legacy auth. If that happens, UC's stricter foreign-table checks never fire, which explains why the job runs cleanly while your 16.4 interactive cluster shuts it down immediately.
How to test safely in an interactive environment
Depending on what outcome you need, here are the cleanest, least risky ways to validate:
• Test on a job cluster.
Spin up a small dev job using a job cluster and manually trigger it with your new notebook. This mirrors prod behavior and avoids the interactive cluster differences entirely. It's the simplest way to keep things safe and predictable.
• Point your writes to a UC-native target when testing.
On your 16.4 interactive cluster, redirect writes to a catalog/schema you actually own: something UC-managed or a UC external location that isn't federated (see the sketch after this list). Validate the transformations there, then switch back to the foreign-catalog target only in the job where you already know it works today.
• If you must write to the same path as production, use fallback mode sparingly.
If the target path sits under a federated connection's authorized location, you can temporarily enable fallback mode on the UC External Location. That lets the write go through using legacy auth instead of UC's foreign-table checks. It works, but treat it like a temporary testing valve, not an everyday setting.
• If this is Glue/HMS federation, write through hive_metastore.
Federated HMS objects are read-only, but hive_metastore (the legacy metastore) can still be writable depending on how the environment is set up. For interactive work, write to hive_metastore instead of the federated Glue catalog object.
• Double-check that your target location isn't explicitly marked read-only.
That's a different enforcement path, but it's worth ruling out if permissions were recently tightened.
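To make the UC-native-target option concrete, here is a minimal sketch of parameterizing the write target so interactive runs land in a UC-managed sandbox while the prod job keeps its current target. The widget, catalog, and schema names are illustrative assumptions, not anything from your setup:
# Parameterize the target so interactive tests never touch the foreign catalog.
# Widget, catalog, and schema names are illustrative placeholders.
dbutils.widgets.text("target_catalog", "dev_uc_catalog")  # UC-managed catalog you own
dbutils.widgets.text("target_schema", "sandbox")

target_catalog = dbutils.widgets.get("target_catalog")
target_schema = dbutils.widgets.get("target_schema")

df = spark.table("main.staging.source_table")  # same transformations as prod
result = df.filter("event_date >= '2024-01-01'")

# Interactive run: lands in the UC-managed sandbox.
# Prod job: pass target_catalog=<foreign_catalog> as a job parameter instead.
result.write.mode("overwrite").saveAsTable(f"{target_catalog}.{target_schema}.target_table")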
Why this matters long-term
Lakehouse Federation's contract is clear: foreign tables are read-only from UC's perspective. If your production job is slipping through on legacy or fallback behavior, treat that as a flag to revisit the pattern. A safer long-term approach is to stage data into UC-native storage and then load it into the remote system using that system's writer or connector. It keeps you out of the line of fire as runtimes continue to tighten enforcement.
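As a rough sketch of that staged pattern, assuming the remote system is a JDBC-reachable database; the URL, table, and secret scope/keys are placeholders:
# 1) Persist the result in a UC-managed table (fully writable, governed by UC).
staged = spark.table("main.staging.processed_output")  # placeholder source
staged.write.mode("overwrite").saveAsTable("main.gold.processed_output")

# 2) Push it to the remote system with its native writer (JDBC here).
#    Connection details and secret scope/keys below are placeholders.
(spark.table("main.gold.processed_output")
    .write
    .format("jdbc")
    .option("url", "jdbc:postgresql://<host>:5432/<db>")
    .option("dbtable", "public.processed_output")
    .option("user", dbutils.secrets.get("my_scope", "db_user"))
    .option("password", dbutils.secrets.get("my_scope", "db_password"))
    .mode("append")
    .save())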
Hope this helps, Louis.
Thursday
Dear @Louis_Frolio, thank you for the detailed answer. It certainly cleared things up for me. I totally agree with the suggestion you provided for the long-term stability of the production job, and I will surely be implementing that.
About the solutions you provided for a quick test in an interactive environment, I guess testing on a job cluster is the best one to try first. I tried doing it myself earlier, but I could not figure out how to run a notebook interactively with a job cluster; job clusters do not appear in the notebook's cluster selection dropdown. It would be really great if you could point me to any resource that explains how we can run notebooks on a job cluster.
Thank you!
Thursday
@Fatimah-Tariq, that is right, you cannot use job clusters interactively. You would need to run a small workflow job. Cheers, Lou.
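If you want to script that instead of clicking through the Jobs UI, here is a minimal sketch using the Databricks Python SDK to submit a one-time run on a fresh job cluster pinned to the prod DBR. It assumes databricks-sdk is installed and authenticated; the notebook path and node type are placeholders:
from databricks.sdk import WorkspaceClient
from databricks.sdk.service import jobs, compute

w = WorkspaceClient()  # picks up auth from the environment or a CLI profile

run = w.jobs.submit(
    run_name="test-new-notebook-on-job-cluster",
    tasks=[
        jobs.SubmitTask(
            task_key="test_new_notebook",
            notebook_task=jobs.NotebookTask(
                notebook_path="/Workspace/Users/me/new_notebook"  # placeholder path
            ),
            new_cluster=compute.ClusterSpec(
                spark_version="16.2.x-scala2.12",  # same DBR as the prod job
                node_type_id="i3.xlarge",          # placeholder node type
                num_workers=1,
            ),
        )
    ],
).result()  # blocks until the one-time run finishes

print(run.state.result_state)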
Thursday
This error is expected when writing through a foreign catalog (Lakehouse Federation): foreign tables are read-only, and Unity Catalog will only vend READ credentials for them, which surfaces as "Only READ credentials can be retrieved for foreign tables."
On DBR 16.4 your interactive cluster is enforcing that policy; the older job on DBR 16.2 likely avoids the check (for example, by writing to a non-authorized path or via legacy/non-UC credentials), so it appears to succeed.
To test interactively, either write to a managed UC or non-foreign external schema (recommended), or materialize from the foreign table into UC (for example with a materialized view) and operate on the copy. If you must write to the same storage path that's covered by a foreign catalog, remove the path from the foreign catalog's authorized paths or enable the external location's "fallback mode" to allow instance-profile writes (with care, since it bypasses UC write governance).
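To illustrate the materialize-and-copy route, here is a sketch using a plain snapshot table as a simple stand-in for a materialized view; all names are placeholders:
# Snapshot the foreign table into a UC-managed copy and test against the copy.
src = spark.table("foreign_cat.src_schema.src_table")  # read-only via federation

# Reads from foreign tables are allowed, so materializing a copy works fine.
src.write.mode("overwrite").saveAsTable("main.sandbox.src_table_copy")

# Point the new notebook's write logic at the writable copy for testing.
copy_df = spark.table("main.sandbox.src_table_copy")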
yesterday
Thank you @Louis_Frolio! Your suggestions really helped me understand the scenario.