Databricks Unity Catalog (UC) is the industry’s only unified and open governance solution for data and AI, built into the Databricks Data Intelligence Platform. Unity Catalog provides a single source of truth for your organization’s data and AI assets, offering open connectivity to any data source in any format, unified governance with detailed lineage tracking, comprehensive monitoring, and support for open sharing and collaboration.
With open APIs and credential vending, Unity Catalog enables external engines such as Trino, DuckDB, Apache Spark™, Daft, and other Iceberg REST catalog-integrated engines such as Dremio to access its governed data. This interoperability minimizes data duplication, allowing organizations to use a single data copy across different analytics and AI workloads with unified governance. In particular, a common pattern for our customers is to use Databricks’ best-in-class ETL price/performance for upstream data, and then access it from a distributed SQL engine, such as Trino.
In this blog post, we will cover why the Iceberg REST Catalog is useful and walk through an example of how to read Unity Catalog Iceberg tables from OSS Trino.
Iceberg REST API Catalog Integration
Apache Iceberg™ maintains atomicity and consistency by creating new metadata files for each table change. The Iceberg catalog tracks the new metadata per write and ensures that incomplete writes do not corrupt an existing metadata file.
The Iceberg REST catalog API is a standardized, open API specification that provides a unified interface for Iceberg catalogs. It decouples catalog implementations from clients and solves interoperability across engines and catalogs.
Unity Catalog (UC) implements the Iceberg REST catalog interface, enabling interoperability with any engine integrated with the Iceberg REST Catalog, such as Apache Spark™, Trino, Dremio, and Snowflake. Unity Catalog’s Iceberg REST Catalog endpoints allow external systems to access tables via open APIs and benefit from performance enhancements like Liquid Clustering and Predictive Optimization, while Databricks workloads continue to benefit from advanced Unity Catalog features like Change Data Feed.
Securing Access via Credential Vending
Unity Catalog’s credential vending dynamically issues temporary credentials for secure access to cloud storage. When an external engine, such as Trino, requests data from an Iceberg table registered in a UC metastore, Unity Catalog generates short-lived credentials and storage URLs based on the user's IAM roles or managed identities. This eliminates manual credential management while maintaining security and compliance. The detailed steps are captured in the diagram below.

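To make this flow concrete, the requests an external engine issues against Unity Catalog’s Iceberg REST endpoint look roughly like the sketch below. The base URI matches the one used later in this post, the /v1/config and load-table paths follow the Iceberg REST catalog specification, and the catalog/schema/table placeholders are illustrative (the prefix segment in the table path is returned by the config call, typically the UC catalog name).
# Discover the catalog configuration for a UC catalog:
% curl -s -H "Authorization: Bearer <<Your PAT>>" \
  "https://<workspace_url>/api/2.1/unity-catalog/iceberg/v1/config?warehouse=<<Your UC Catalog>>"
# Load a table and request short-lived storage credentials (credential vending):
% curl -s -H "Authorization: Bearer <<Your PAT>>" \
  -H "X-Iceberg-Access-Delegation: vended-credentials" \
  "https://<workspace_url>/api/2.1/unity-catalog/iceberg/v1/<<Your UC Catalog>>/namespaces/<<Your Schema>>/tables/<<Your Table>>"
# The load-table response includes the Iceberg metadata location plus temporary storage credentials.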
Experiencing Trino in Action with Unity Catalog’s Open APIs
In this section, we’ll look at how you can access the Iceberg tables registered in Databricks Unity Catalog using Trino. We’ll walk through the following steps:
- Setting up the Unity Catalog Iceberg tables from the Databricks workspace
- Setting up OSS Trino on the local workstation
- Configuring OSS Trino to read Databricks Unity Catalog Iceberg tables
- Reading Databricks Unity Catalog Iceberg tables from the local Trino terminal
- Performing a UC access control test
Step 1: Setting up the Unity Catalog Iceberg tables from the Databricks workspace
This blog assumes a Unity Catalog-enabled workspace and account principals configured with the proper authentication and authorization. To get started with Unity Catalog, follow the Unity Catalog Setup Guide.
Personal Access Tokens (PATs) are essential for authenticating API requests when integrating external tools or automating workflows in Databricks. To create a PAT, follow the Databricks PAT Setup Guide. Log in to your Databricks workspace, navigate to "User Settings," and generate a token with a specific lifespan and permissions. Save the token securely, as it cannot be retrieved later.
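If you prefer to script this step, a PAT can also be created through the Databricks Token REST API, as sketched below (the comment and lifetime values are illustrative, and the request itself must be authenticated with an existing token or other credential):
% curl -s -X POST "https://<workspace_url>/api/2.0/token/create" \
  -H "Authorization: Bearer <<Existing Databricks Token>>" \
  -H "Content-Type: application/json" \
  -d '{"comment": "trino-demo", "lifetime_seconds": 604800}'
# The response contains token_value; store it securely, as it cannot be retrieved later.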
Databricks enables access to Unity Catalog tables through the Unity REST API and the Iceberg REST catalog, offering seamless integration with external systems. For more details, refer to Access Databricks data using external systems. To facilitate external data access, a metastore administrator can enable the capability for each metastore that requires external connectivity. Additionally, the user or service principal configuring the connection must possess the EXTERNAL USE SCHEMA privilege for every schema containing tables intended for external reads.
We will use the following Unity Catalog SQL commands from our Databricks workspace to create a catalog, schema, and a managed Iceberg table with records and to grant permissions to the principal associated with the PAT token.
Note: We used the TPC-H sample datasets available in the Databricks samples catalog for this example.
The Databricks principal used in this example was granted all the necessary UC permissions (such as USE CATALOG, USE SCHEMA, EXTERNAL USE SCHEMA, and CREATE TABLE) to perform these activities.
CREATE CATALOG databricks_demo;
CREATE SCHEMA databricks_demo.trino_demo;
GRANT EXTERNAL USE SCHEMA ON SCHEMA databricks_demo.trino_demo TO `<<Your Databricks Principal>>`;
USE CATALOG databricks_demo;
USE SCHEMA trino_demo;
CREATE TABLE customer DEEP CLONE samples.tpch.customer;
CREATE TABLE lineitem DEEP CLONE samples.tpch.lineitem;
CREATE TABLE nation DEEP CLONE samples.tpch.nation;
CREATE TABLE orders DEEP CLONE samples.tpch.orders;
CREATE TABLE part DEEP CLONE samples.tpch.part;
CREATE TABLE partsupp DEEP CLONE samples.tpch.partsupp;
CREATE TABLE region DEEP CLONE samples.tpch.region;
CREATE TABLE supplier DEEP CLONE samples.tpch.supplier;
ALTER TABLE customer SET TBLPROPERTIES (
'delta.minReaderVersion' = '2',
'delta.minWriterVersion' = '5',
'delta.columnMapping.mode' = 'name',
'delta.enableIcebergCompatV2' = 'true',
'delta.universalFormat.enabledFormats' = 'iceberg'
);
-- Rest of the tables were also altered to make them compatible with Iceberg reads.
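For completeness, the sketch below shows how the remaining tables could be altered with the same properties as the customer table above, along with the grants mentioned in the earlier note (the principal placeholder is illustrative; run the grants as a catalog owner or metastore admin).
-- Illustrative: apply the same UniForm/Iceberg-compatibility properties to each remaining cloned table
-- (repeat for nation, orders, part, partsupp, region, and supplier).
ALTER TABLE lineitem SET TBLPROPERTIES (
  'delta.minReaderVersion' = '2',
  'delta.minWriterVersion' = '5',
  'delta.columnMapping.mode' = 'name',
  'delta.enableIcebergCompatV2' = 'true',
  'delta.universalFormat.enabledFormats' = 'iceberg'
);
-- Illustrative: the grants referenced in the note above.
GRANT USE CATALOG ON CATALOG databricks_demo TO `<<Your Databricks Principal>>`;
GRANT USE SCHEMA ON SCHEMA databricks_demo.trino_demo TO `<<Your Databricks Principal>>`;
GRANT CREATE TABLE ON SCHEMA databricks_demo.trino_demo TO `<<Your Databricks Principal>>`;
GRANT SELECT ON SCHEMA databricks_demo.trino_demo TO `<<Your Databricks Principal>>`;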
The screenshot below shows the details of the UC Iceberg table.

Step 2: Setting up OSS Trino on the local workstation
To integrate Unity Catalog with open-source Trino (OSS Trino), begin by setting up a Trino Docker container. Follow the Trino Docker container setup guidelines and execute the following Docker command to create a container from the trinodb/trino image, naming it "trino" for later reference. Run the container in detached mode so it operates in the background, and map Trino’s default port 8080 inside the container to port 8080 on your local machine, ensuring seamless connectivity.
% docker run --name trino -d -p 8080:8080 trinodb/trino
Execute the following command to verify the status of the container running in the background.
% docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
68fcc2160df3 trinodb/trino "/usr/lib/trino/bin/..." 4 days ago Up 21 seconds (healthy) 0.0.0.0:8080->8080/tcp trino
The screenshot below shows the Trino server logs in Docker Desktop.

The Trino image comes with the Trino CLI for executing SQL commands. Use the following command to establish an interactive connection with the Trino container:
% docker exec -it trino trino
trino>
Step 3: Configuring OSS Trino to read Databricks Unity Catalog Iceberg tables
OSS Trino integrates seamlessly with external catalogs supporting Iceberg REST APIs, such as Databricks Unity Catalog (UC). It provides two primary methods for managing catalogs:
- dynamic catalog management
- file-based catalog configuration
Each approach has distinct advantages depending on flexibility, automation, and configuration management needs. The following sections provide an overview of each method.
Trino’s dynamic catalog management allows users to create, update, and remove catalogs at runtime via SQL commands, eliminating the need for static file modifications or server restarts. This approach enables seamless catalog registration by specifying configurations at runtime. Let’s create a new container named “trino_dynamic” and enable dynamic catalog management by setting the CATALOG_MANAGEMENT environment variable to dynamic. Then connect to the Trino terminal interactively and execute the CREATE CATALOG command using the Iceberg connector.
% docker run --name trino_dynamic -d -p 8080:8080 -e CATALOG_MANAGEMENT=dynamic trinodb/trino
% docker exec -it trino_dynamic trino
trino> CREATE CATALOG DATABRICKS_DEMO USING iceberg
-> with (
-> "iceberg.catalog.type" = 'rest',
-> "iceberg.rest-catalog.uri" = 'https://<workspace_url>/api/2.1/unity-catalog/iceberg',
-> "iceberg.rest-catalog.warehouse" = '<<Your UC Catalog>>',
-> "iceberg.rest-catalog.security" = 'OAUTH2',
-> "iceberg.rest-catalog.oauth2.token" = '<<PAT token of your Databricks Identities>>',
-> "iceberg.rest-catalog.vended-credentials-enabled" = 'true',
-> "fs.native-s3.enabled" = 'true',
-> "s3.region" = 'us-west-2'
-> );
Note the following items in this command:
- Trino Iceberg connector
- Iceberg REST catalog
  - iceberg.rest-catalog.uri points to the Databricks UC Iceberg REST API endpoint for the workspace.
  - iceberg.rest-catalog.warehouse is the name of the UC catalog you want to access from Trino.
  - iceberg.rest-catalog.oauth2.token is your Databricks workspace personal access token (PAT), which authenticates you as a legitimate user to the Databricks platform. Access to UC objects is controlled via the UC permissions model.
- S3 file system support
  - fs.native-s3.enabled must be set to ‘true’ to access your cloud object storage (in this example, AWS S3).
  - s3.region is the required region name for S3.
The following command lists the catalogs, including the newly registered UC Iceberg catalog.
trino> show catalogs;
Catalog
-----------------
databricks_demo
jmx
memory
system
tpcds
tpch
(6 rows)
We can verify the creation of catalog property files by accessing the Docker container through an interactive Bash terminal.
% docker exec -it trino_dynamic /bin/bash
[trino@8e32cf4b9e75 /]$ ls -lrta /etc/trino/catalog/
total 32
-rw-r--r-- 1 trino trino 43 Dec 17 22:11 tpch.properties
-rw-r--r-- 1 trino trino 45 Dec 17 22:11 tpcds.properties
-rw-r--r-- 1 trino trino 22 Dec 17 22:11 memory.properties
-rw-r--r-- 1 trino trino 19 Dec 17 22:11 jmx.properties
drwxr-xr-x 1 trino trino 4096 Dec 17 22:11 ..
-rw-r--r-- 1 trino trino 436 Jan 26 23:14 databricks_demo.properties
drwxr-xr-x 1 trino trino 4096 Jan 26 23:14 .
[trino@8e32cf4b9e75 /]$ cat /etc/trino/catalog/databricks_demo.properties
#Sun Jan 26 23:14:25 GMT 2025
connector.name=iceberg
fs.native-s3.enabled=true
iceberg.catalog.type=rest
iceberg.rest-catalog.oauth2.token=<<PAT token of your Databricks Principal>>
iceberg.rest-catalog.security=OAUTH2
iceberg.rest-catalog.uri=https://<<Your Workspace Url>>/api/2.1/unity-catalog/iceberg
iceberg.rest-catalog.vended-credentials-enabled=true
iceberg.rest-catalog.warehouse=<<Your UC Catalog>>
s3.region=<<Your AWS S3 Region>>
Alternatively, with File-Based Catalog Configuration, catalogs can be set up when starting the container by mounting a local directory containing Iceberg REST catalog property files to /etc/trino in the container. Each property file represents a catalog and appears alongside Trino’s default catalogs. Since these configurations are statically defined, any updates require a server restart.
To test the mount-based configuration, use the "databricks_demo.properties" catalog properties file and start a new Docker container named "trino_mounted_configs" using the command below.
# /etc/trino/catalog/databricks_demo.properties (path inside the container)
connector.name=iceberg
iceberg.catalog.type=rest
iceberg.rest-catalog.uri=https://<databricks_workspace_url>/api/2.1/unity-catalog/iceberg
iceberg.rest-catalog.warehouse=databricks_demo
iceberg.rest-catalog.security=OAUTH2
iceberg.rest-catalog.oauth2.token=<databricks_principle_pat_token>
iceberg.rest-catalog.vended-credentials-enabled=true
fs.native-s3.enabled=true
s3.region=us-west-2
% docker run --name trino_mounted_configs -d -p 8080:8080 --volume $PWD/etc:/etc/trino trinodb/trino
% docker exec -it trino_mounted_configs /bin/bash
[trino@8e32cf4b9e75 /]$ ls -lrta /etc/trino/catalog/
total 28
-rw-r--r-- 1 trino trino 43 Dec 17 22:11 tpch.properties
-rw-r--r-- 1 trino trino 45 Dec 17 22:11 tpcds.properties
-rw-r--r-- 1 trino trino 22 Dec 17 22:11 memory.properties
-rw-r--r-- 1 trino trino 19 Dec 17 22:11 jmx.properties
-rw-r--r-- 1 trino trino 19 Dec 17 22:11 databricks_demo.properties
drwxr-xr-x 1 trino trino 4096 Dec 17 22:11 ..
drwxr-xr-x 1 trino trino 4096 Jan 26 23:07 .
Next, open an interactive Trino terminal and run Trino SQL commands to query the list of catalogs and schemas.
% docker exec -it trino_mounted_configs trino
trino> show catalogs;
Catalog
-----------------
databricks_demo
jmx
memory
system
tpcds
tpch
(6 rows)
trino> show schemas in databricks_demo;
Schema
--------------------
default
information_schema
trino_demo
(4 rows)
Query 20250129_161926_00003_qbxv6, FINISHED, 1 node
Splits: 19 total, 19 done (100.00%)
1.57 [4 rows, 69B] [2 rows/s, 44B/s]
Step 4: Reading Databricks Unity Catalog Iceberg tables from the local Trino terminal
Now that Trino is configured to read tables from the Databricks UC catalog, you can use the Trino terminal to run SQL queries. We will use standard ANSI SQL queries to retrieve data, perform aggregations, and join tables.
trino> use databricks_demo.trino_demo;
USE
trino:trino_demo> show tables in databricks_demo.trino_demo;
Table
-------------
customer
db_uc_table
lineitem
nation
orders
part
partsupp
region
supplier
(9 rows)
Query 20250129_162049_00007_qbxv6, FINISHED, 1 node
Splits: 19 total, 19 done (100.00%)
0.72 [9 rows, 245B] [12 rows/s, 342B/s]
--Query 1
trino:trino_demo> SELECT
-> l_returnflag,
-> l_linestatus,
-> SUM(l_quantity) AS sum_qty,
-> SUM(l_extendedprice) AS sum_base_price,
-> SUM(l_extendedprice * (1 - l_discount)) AS sum_disc_price,
-> SUM(l_extendedprice * (1 - l_discount) * (1 + l_tax)) AS sum_charge,
-> AVG(l_quantity) AS avg_qty,
-> AVG(l_extendedprice) AS avg_price,
-> AVG(l_discount) AS avg_disc,
-> COUNT(*) AS count_order
-> FROM
-> lineitem
-> WHERE
-> l_shipdate <= DATE '1998-12-01' - INTERVAL '90' DAY
-> GROUP BY
-> l_returnflag,
-> l_linestatus
-> ORDER BY
-> l_returnflag,
-> l_linestatus;
->
l_returnflag | l_linestatus | sum_qty | sum_base_price | sum_disc_price | sum_charge | avg_qty | avg_price | avg_disc | count_order
--------------+--------------+--------------+-----------------+-------------------+---------------------+---------+-----------+----------+-------------
A | F | 188818373.00 | 283107483036.12 | 268952035589.0630 | 279714361804.228122 | 25.50 | 38237.67 | 0.05 | 7403889
N | F | 4913382.00 | 7364213967.95 | 6995782725.6633 | 7275821143.989585 | 25.53 | 38267.78 | 0.05 | 192439
N | O | 371626663.00 | 557251817321.64 | 529391298998.6177 | 550573649707.749900 | 25.50 | 38233.71 | 0.05 | 14574883
R | F | 188960009.00 | 283310887148.20 | 269147687267.2029 | 279912972474.864338 | 25.51 | 38252.41 | 0.05 | 7406353
(4 rows)
Query 20250129_162926_00012_qbxv6, FINISHED, 1 node
Splits: 60 total, 60 done (100.00%)
36.83 [30M rows, 253MiB] [815K rows/s, 6.87MiB/s]
--Query 2
trino:trino_demo> SELECT
-> l_orderkey,
-> SUM(l_extendedprice * (1 - l_discount)) AS revenue,
-> o_orderdate,
-> o_shippriority
-> FROM
-> customer c
-> JOIN orders o ON c.c_custkey = o.o_custkey
-> JOIN lineitem l ON l.l_orderkey = o.o_orderkey
-> WHERE
-> c.c_mktsegment = 'BUILDING'
-> AND o.o_orderdate < DATE '1995-03-15'
-> AND l.l_shipdate > DATE '1995-03-15'
-> GROUP BY
-> l_orderkey,
-> o_orderdate,
-> o_shippriority
-> ORDER BY
-> revenue DESC,
-> o_orderdate
-> LIMIT 20;
l_orderkey | revenue | o_orderdate | o_shippriority
------------+-------------+-------------+----------------
18869634 | 541426.1669 | 1995-01-10 | 0
2845094 | 450279.8097 | 1995-03-06 | 0
16716836 | 432402.2306 | 1995-01-28 | 0
25345699 | 431791.2769 | 1995-02-15 | 0
568514 | 423439.7864 | 1995-02-18 | 0
9844418 | 413413.2048 | 1995-02-22 | 0
12783202 | 412259.8661 | 1995-03-07 | 0
25342753 | 411832.8838 | 1995-01-07 | 0
4011108 | 398799.3850 | 1995-03-10 | 0
28708865 | 398227.3287 | 1995-02-15 | 0
1000004 | 397918.5426 | 1995-03-02 | 0
10725381 | 397623.7508 | 1995-03-12 | 0
4860004 | 394207.2591 | 1995-02-22 | 0
16002339 | 393607.8484 | 1995-02-06 | 0
1083941 | 392686.9967 | 1995-02-21 | 0
24062117 | 391641.4971 | 1995-02-08 | 0
2529826 | 389595.2070 | 1995-02-17 | 0
24392391 | 388391.8549 | 1995-02-23 | 0
14444676 | 387978.3972 | 1995-03-11 | 0
12935522 | 386509.0814 | 1995-02-16 | 0
(20 rows)
Query 20250129_163353_00022_qbxv6, FINISHED, 1 node
Splits: 129 total, 129 done (100.00%)
57.95 [38.4M rows, 334MiB] [663K rows/s, 5.76MiB/s]
--Query 3
trino:trino_demo> SELECT
-> n_name,
-> SUM(l_extendedprice * (1 - l_discount)) AS revenue
-> FROM
-> customer c
-> JOIN orders o ON c.c_custkey = o.o_custkey
-> JOIN lineitem l ON l.l_orderkey = o.o_orderkey
-> JOIN supplier s ON l.l_suppkey = s.s_suppkey
-> JOIN nation n ON s.s_nationkey = n.n_nationkey
-> JOIN region r ON n.n_regionkey = r.r_regionkey
-> WHERE
-> r.r_name = 'ASIA'
-> AND o.o_orderdate >= DATE '1994-01-01'
-> AND o.o_orderdate < DATE '1994-01-01' + INTERVAL '1' YEAR
-> GROUP BY
-> n_name
-> ORDER BY
-> revenue DESC;
n_name | revenue
-----------+-----------------
INDIA | 6827422589.6332
INDONESIA | 6732790280.2529
VIETNAM | 6586005258.2752
CHINA | 6563302520.0291
JAPAN | 6538607393.2003
(5 rows)
Query 20250129_163650_00003_retxd, FINISHED, 1 node
Splits: 183 total, 183 done (100.00%)
58.73 [39M rows, 355MiB] [665K rows/s, 6.04MiB/s]
Currently, the UC Iceberg APIs do not support write operations. As a result, Data Manipulation Language (DML) and Data Definition Language (DDL) commands cannot be executed through the Trino terminal on tables configured within the UC catalog. For more information, please consult the limitations outlined for Universal Format (UniForm).
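For example, a write issued from the Trino terminal, such as the sketch below, is expected to be rejected because the UC Iceberg REST endpoints are read-only for external engines (illustrative only; the exact error message depends on your Trino and Unity Catalog versions).
trino:trino_demo> DELETE FROM region WHERE r_regionkey = 0;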
The following steps demonstrate the changes in column values queried via the Trino terminal, both before and after performing DML and DDL operations on the tables using the Databricks workspace.
Before a DML Change is performed from the Databricks workspace -
trino:trino_demo> select c_address from databricks_demo.trino_demo.customer
-> where c_custkey = 412446;
c_address
-------------------------------------
5u8MSbyiC7J,7PuY4Ivaq1JRbTCMKeNVqg
(1 row)
Query 20250129_185853_00019_fazpx, FINISHED, 1 node
Splits: 1 total, 1 done (100.00%)
3.70 [750K rows, 8.81MiB] [203K rows/s, 2.38MiB/s]
A DML statement is executed successfully from the Databricks workspace to update the customer address -
UPDATE customer
SET c_address = 'New customer secondary address for customer # 412446'
WHERE c_custkey = 412446;
Change is reflected when queried from the Trino terminal -
trino:trino_demo> select c_custkey, c_address from databricks_demo.trino_demo.customer
-> where c_custkey = 412446;
c_custkey | c_address
-----------+------------------------------------------------------
412446 | New customer secondary address for customer # 412446
(1 row)
Query 20250129_190042_00022_fazpx, FINISHED, 1 node
Splits: 1 total, 1 done (100.00%)
3.26 [750K rows, 8.81MiB] [230K rows/s, 2.7MiB/s]
Before a table definition (DDL) change is performed from the Databricks workspace -
trino> describe databricks_demo.trino_demo.region;
Column | Type | Extra | Comment
-------------+---------+-------+---------
r_regionkey | bigint | |
r_name | varchar | |
r_comment | varchar | |
(3 rows)
Query 20250129_190448_00001_yhmmw, FINISHED, 1 node
Splits: 19 total, 19 done (100.00%)
1.01 [3 rows, 191B] [2 rows/s, 189B/s]
A DDL statement is executed successfully from the Databricks workspace to add a new column to the region table -
ALTER TABLE databricks_demo.trino_demo.region
ADD COLUMNS ( free_text STRING);
Table definition change is reflected when the region table is described from the Trino terminal -
trino> describe databricks_demo.trino_demo.region;
Column | Type | Extra | Comment
-------------+---------+-------+---------
r_regionkey | bigint | |
r_name | varchar | |
r_comment | varchar | |
free_text | varchar | |
(4 rows)
Query 20250129_190336_00000_yhmmw, FINISHED, 1 node
Splits: 19 total, 19 done (100.00%)
2.76 [4 rows, 252B] [1 rows/s, 91B/s]
We also created an external Iceberg table from the Databricks workspace and read it from the Trino terminal -
Databricks workspace -
DROP TABLE IF EXISTS iceberg_external;
CREATE TABLE IF NOT EXISTS iceberg_external (c1 INT) LOCATION 's3://databricks-dkushari/iceberg-external/';
ALTER TABLE iceberg_external SET TBLPROPERTIES (
'delta.enableDeletionVectors' = 'false'
);
REORG TABLE iceberg_external APPLY (PURGE);
ALTER TABLE iceberg_external SET TBLPROPERTIES (
'delta.minReaderVersion' = '2',
'delta.minWriterVersion' = '5',
'delta.columnMapping.mode' = 'name',
'delta.enableIcebergCompatV2' = 'true',
'delta.universalFormat.enabledFormats' = 'iceberg'
);
INSERT INTO iceberg_external VALUES (10), (20), (30);
SELECT * FROM iceberg_external;
Trino Terminal -
trino:trino_demo> select * from iceberg_external;
c1
----
10
20
30
(3 rows)
Query 20250204_150705_00011_kah8v, FINISHED, 1 node
Splits: 1 total, 1 done (100.00%)
1.02 [3 rows, 900B] [2 rows/s, 887B/s]
Step 5: Performing a UC access control test
Permissions in Unity Catalog (UC) play a critical role in controlling access to the data assets governed by UC. If permissions are revoked from the principal or user associated with the PAT token, Trino’s ability to query the affected tables is immediately impacted. For example, revoking the SELECT privilege on a table will cause queries against it to fail from Trino. This highlights the importance of carefully managing permissions in Unity Catalog to balance security and operational efficiency.
The following illustrates the difference between having SELECT permission on the UC table and lacking the necessary permissions for the principal.

trino:trino_demo> select l_orderkey, l_linenumber, l_quantity from lineitem limit 1;
l_orderkey | l_linenumber | l_quantity
------------+--------------+------------
15997987 | 4 | 50.00
(1 row)
Query 20250129_175237_00016_fazpx, FINISHED, 1 node
Splits: 18 total, 18 done (100.00%)
5.83 [1 rows, 9.39MiB] [0 rows/s, 1.61MiB/s]
Let’s revoke the SELECT permission on the lineitem table from the principal -

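The equivalent SQL, run from the Databricks workspace, would look like the following sketch (the principal placeholder is illustrative):
-- Revoke SELECT on the lineitem table from the principal used by Trino.
REVOKE SELECT ON TABLE databricks_demo.trino_demo.lineitem FROM `<<Your Databricks Principal>>`;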
Now that the SELECT grant is missing on the table, we received the following error when projecting all the columns.
trino> select * from databricks_demo.trino_demo.lineitem limit 1;
Query 20250129_174628_00005_fazpx, FAILED, 1 node
Splits: 17 total, 0 done (0.00%)
2.03 [0 rows, 0B] [0 rows/s, 0B/s]
Query 20250129_174628_00005_fazpx failed: io.trino.spi.TrinoException: Error processing metadata for table trino_demo.lineitem
We tried projecting a few columns and received the following error message.
trino:trino_demo> select l_orderkey, l_linenumber, l_quantity from lineitem limit 1;
Query 20250129_175041_00013_fazpx, FAILED, 1 node
Splits: 18 total, 0 done (0.00%)
2.55 [0 rows, 0B] [0 rows/s, 0B/s]
Query 20250129_175041_00013_fazpx failed: Error opening Iceberg split s3://databricks-e2demofieldengwest/b169b504-4c54-49f2-bc3a-adf4b128f36d/tables/ee09b87c-2d02-412f-97a8-333c253ad1bf/part-00000-446a2a8d-3dfe-4ebb-ac3c-f5dc8f73b895.c000.snappy.parquet (offset=0, length=124267150): Failed to open S3 file: s3://databricks-e2demofieldengwest/b169b504-4c54-49f2-bc3a-adf4b128f36d/tables/ee09b87c-2d02-412f-97a8-333c253ad1bf/part-00000-446a2a8d-3dfe-4ebb-ac3c-f5dc8f73b895.c000.snappy.parquet
This is expected as we have removed the SELECT grant from the principal – demonstrating how UC credential vending simplifies governance and access control across Databricks and external engines accessing assets via Unity Catalog’s open APIs and Iceberg REST catalog interface.
Conclusion
This blog provided a step-by-step guide on how to leverage Docker-based OSS Trino to securely read from UniForm Iceberg tables registered in Databricks Unity Catalog, using the Iceberg REST API. By following this approach, you can seamlessly integrate Trino with Unity Catalog, enabling interoperability across data platforms while maintaining strong governance and security controls. This ensures that Iceberg tables remain accessible to external engines without compromising data consistency or compliance.
Try out Unity Catalog’s Iceberg REST APIs today to access your data securely from any external Iceberg REST API-compatible client. Additionally, you can try performing CRUD operations on your Databricks Unity Catalog data from several external systems. For more details, please refer here.
Check out the Databricks Community blog to learn how you can integrate Apache Spark™ with your Databricks Unity Catalog assets via open APIs.