<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: how to access snapshots in iceberg tables? in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/how-to-access-snapshots-in-iceberg-tables/m-p/151706#M53689</link>
<description>Reply from Louis_Frolio: pointers for querying Iceberg snapshot metadata on Databricks (DESCRIBE HISTORY, time travel, and Unity Catalog's Iceberg REST Catalog). Full text in the corresponding item below.</description>
    <pubDate>Mon, 23 Mar 2026 11:26:47 GMT</pubDate>
    <dc:creator>Louis_Frolio</dc:creator>
    <dc:date>2026-03-23T11:26:47Z</dc:date>
    <item>
      <title>how to access snapshots in iceberg tables?</title>
      <link>https://community.databricks.com/t5/data-engineering/how-to-access-snapshots-in-iceberg-tables/m-p/150999#M53557</link>
      <description>&lt;P&gt;I have created an Iceberg table in Databricks and inserted a bunch of values into it.&lt;/P&gt;&lt;P&gt;How do I list the snapshots and other metadata of the table?&lt;/P&gt;&lt;LI-CODE lang="sql"&gt;create table raw.landing.emp_ice(id int, name string ) using iceberg&lt;/LI-CODE&gt;&lt;P&gt;The following, from &lt;A href="https://iceberg.apache.org/docs/latest/spark-queries/#snapshots" target="_blank"&gt;https://iceberg.apache.org/docs/latest/spark-queries/#snapshots&lt;/A&gt;, doesn't work:&lt;/P&gt;&lt;LI-CODE lang="sql"&gt;select * from raw.landing.emp_ice.snapshots;&lt;/LI-CODE&gt;&lt;P&gt;&lt;STRONG&gt;Error&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;`raw`.`landing`.`emp_ice`.`snapshots` is not a valid identifier as it has more than 2 name parts. SQLSTATE: 42601&lt;/P&gt;</description>
      <pubDate>Sun, 15 Mar 2026 23:43:49 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/how-to-access-snapshots-in-iceberg-tables/m-p/150999#M53557</guid>
      <dc:creator>gaurang033</dc:creator>
      <dc:date>2026-03-15T23:43:49Z</dc:date>
    </item>
    <item>
      <title>Re: how to access snapshots in iceberg tables?</title>
      <link>https://community.databricks.com/t5/data-engineering/how-to-access-snapshots-in-iceberg-tables/m-p/151004#M53559</link>
      <description>&lt;P class=""&gt;&lt;SPAN class=""&gt;Hi &lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/220160"&gt;@gaurang033&lt;/a&gt;&amp;nbsp;, &lt;SPAN class=""&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN class=""&gt;I'm not an Iceberg expert, but I did some research and tests and I think I can point you in the right direction. 
&lt;SPAN class=""&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P class=""&gt;&lt;SPAN class=""&gt;The error you're seeing (&lt;/SPAN&gt;&lt;SPAN class=""&gt;more&lt;/SPAN&gt; &lt;SPAN class=""&gt;than&lt;/SPAN&gt; &lt;SPAN class=""&gt;2&lt;/SPAN&gt; &lt;SPAN class=""&gt;name&lt;/SPAN&gt; &lt;SPAN class=""&gt;parts.&lt;/SPAN&gt; &lt;SPAN class=""&gt;SQLSTATE:&lt;/SPAN&gt; &lt;SPAN class=""&gt;42601&lt;/SPAN&gt;&lt;SPAN class=""&gt;) happens because, I suppose,&amp;nbsp;&lt;STRONG&gt;Databricks&lt;/STRONG&gt; &lt;STRONG&gt;SQL&lt;/STRONG&gt; &lt;STRONG&gt;does&lt;/STRONG&gt; &lt;STRONG&gt;not&lt;/STRONG&gt; &lt;STRONG&gt;support&lt;/STRONG&gt; &lt;STRONG&gt;4-part&lt;/STRONG&gt; &lt;STRONG&gt;identifiers&lt;/STRONG&gt; in the &lt;/SPAN&gt;&lt;SPAN class=""&gt;FROM&lt;/SPAN&gt;&lt;SPAN class=""&gt; clause — so &lt;/SPAN&gt;&lt;SPAN class=""&gt;catalog.schema.table.snapshots&lt;/SPAN&gt;&lt;SPAN class=""&gt; gets rejected&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN class=""&gt;by the SQL parser.&lt;/SPAN&gt;&lt;/P&gt;&lt;P class=""&gt;&lt;SPAN class=""&gt;But there's also a deeper reason: in Databricks, &lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN class=""&gt;CREATE&lt;/SPAN&gt; &lt;SPAN class=""&gt;TABLE&lt;/SPAN&gt; &lt;SPAN class=""&gt;...&lt;/SPAN&gt; &lt;SPAN class=""&gt;USING&lt;/SPAN&gt;&lt;/STRONG&gt; &lt;SPAN class=""&gt;iceberg&lt;/SPAN&gt;&lt;SPAN class=""&gt; doesn't always create a true native Iceberg table. 
The behavior depends on whether &lt;STRONG&gt;Predictive&lt;/STRONG&gt; &lt;STRONG&gt;Optimization&lt;/STRONG&gt; is enabled on&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN class=""&gt;your workspace:&lt;/SPAN&gt;&lt;/P&gt;&lt;P class=""&gt;&lt;SPAN class=""&gt;- &lt;STRONG&gt;Without&lt;/STRONG&gt; &lt;STRONG&gt;Predictive&lt;/STRONG&gt; &lt;STRONG&gt;Optimization&lt;/STRONG&gt; → Databricks creates a &lt;STRONG&gt;Delta&lt;/STRONG&gt; &lt;STRONG&gt;table&lt;/STRONG&gt; &lt;STRONG&gt;with&lt;/STRONG&gt; &lt;STRONG&gt;Iceberg&lt;/STRONG&gt; &lt;STRONG&gt;UniForm&lt;/STRONG&gt; (Iceberg metadata is generated asynchronously on top of Delta). In this case, use &lt;/SPAN&gt;&lt;SPAN class=""&gt;DESCRIBE&lt;/SPAN&gt; &lt;SPAN class=""&gt;HISTORY&lt;/SPAN&gt;&lt;SPAN class=""&gt; to inspect&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN class=""&gt;snapshots:&lt;/SPAN&gt;&lt;/P&gt;&lt;PRE&gt;&lt;SPAN class=""&gt;DESCRIBE&lt;/SPAN&gt;&lt;SPAN class=""&gt; HISTORY raw.landing.emp_ice;&lt;/SPAN&gt;&lt;/PRE&gt;&lt;P class=""&gt;&lt;SPAN class=""&gt;- &lt;STRONG&gt;With&lt;/STRONG&gt; &lt;STRONG&gt;Predictive&lt;/STRONG&gt; &lt;STRONG&gt;Optimization&lt;/STRONG&gt; → Databricks creates a &lt;STRONG&gt;native&lt;/STRONG&gt; &lt;STRONG&gt;managed&lt;/STRONG&gt; &lt;STRONG&gt;Iceberg&lt;/STRONG&gt; &lt;STRONG&gt;table&lt;/STRONG&gt;. 
In this case, the &lt;/SPAN&gt;&lt;SPAN class=""&gt;.snapshots&lt;/SPAN&gt;&lt;SPAN class=""&gt; syntax should work, but only via &lt;STRONG&gt;PySpark&lt;/STRONG&gt; (not Databricks SQL):&lt;/SPAN&gt;&lt;/P&gt;&lt;PRE&gt;&lt;SPAN class=""&gt;spark.sql(&lt;/SPAN&gt;&lt;SPAN class=""&gt;"SELECT&lt;/SPAN&gt; &lt;SPAN class=""&gt;*&lt;/SPAN&gt; &lt;SPAN class=""&gt;FROM&lt;/SPAN&gt; &lt;SPAN class=""&gt;raw.landing.emp_ice.snapshots"&lt;/SPAN&gt;&lt;SPAN class=""&gt;).display()&lt;/SPAN&gt;&lt;/PRE&gt;&lt;P class=""&gt;&lt;SPAN class=""&gt;To check which type of table you actually have, run:&lt;/SPAN&gt;&lt;/P&gt;&lt;PRE&gt;&lt;SPAN class=""&gt;DESCRIBE&lt;/SPAN&gt;&lt;SPAN class=""&gt; EXTENDED raw.landing.emp_ice;&lt;/SPAN&gt;&lt;/PRE&gt;&lt;P class=""&gt;&lt;SPAN class=""&gt;- If you see a &lt;/SPAN&gt;&lt;SPAN class=""&gt;Delta&lt;/SPAN&gt; &lt;SPAN class=""&gt;Uniform&lt;/SPAN&gt; &lt;SPAN class=""&gt;Iceberg&lt;/SPAN&gt;&lt;SPAN class=""&gt; section → it's a Delta+UniForm table → use &lt;/SPAN&gt;&lt;SPAN class=""&gt;DESCRIBE&lt;/SPAN&gt; &lt;SPAN class=""&gt;HISTORY&lt;/SPAN&gt;&lt;/P&gt;&lt;P class=""&gt;&lt;SPAN class=""&gt;- If you see &lt;/SPAN&gt;&lt;SPAN class=""&gt;Provider:&lt;/SPAN&gt; &lt;SPAN class=""&gt;iceberg&lt;/SPAN&gt;&lt;SPAN class=""&gt; without Delta references → it's a native Iceberg table → use PySpark&lt;/SPAN&gt;&lt;/P&gt;&lt;P class=""&gt;&lt;SPAN class=""&gt;You can also check the Iceberg metadata generation status with:&lt;/SPAN&gt;&lt;/P&gt;&lt;PRE&gt;&lt;SPAN class=""&gt;SHOW&lt;/SPAN&gt;&lt;SPAN class=""&gt; TBLPROPERTIES raw.landing.emp_ice;&lt;/SPAN&gt;&lt;/PRE&gt;&lt;P class=""&gt;&lt;SPAN class=""&gt;and look for &lt;/SPAN&gt;&lt;SPAN class=""&gt;converted_delta_version&lt;/SPAN&gt;&lt;SPAN class=""&gt; and &lt;/SPAN&gt;&lt;SPAN class=""&gt;converted_delta_timestamp&lt;/SPAN&gt;&lt;SPAN class=""&gt;.&lt;/SPAN&gt;&lt;/P&gt;&lt;P class=""&gt;&lt;SPAN class=""&gt;Hope this helps! 
If you found my answer useful, feel free to give me a &lt;STRONG&gt;Kudo&lt;/STRONG&gt; &lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt;&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 16 Mar 2026 03:10:04 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/how-to-access-snapshots-in-iceberg-tables/m-p/151004#M53559</guid>
      <dc:creator>Ale_Armillotta</dc:creator>
      <dc:date>2026-03-16T03:10:04Z</dc:date>
    </item>
    <item>
      <title>Re: how to access snapshots in iceberg tables?</title>
      <link>https://community.databricks.com/t5/data-engineering/how-to-access-snapshots-in-iceberg-tables/m-p/151706#M53689</link>
      <description>&lt;P&gt;Greetings&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/220160"&gt;@gaurang033&lt;/a&gt;&amp;nbsp;,&amp;nbsp;&lt;/P&gt;
&lt;P&gt;You're reading the Iceberg docs correctly. In a vanilla Iceberg-on-Spark setup, metadata tables like snapshots, history, and files are queryable like this:&lt;/P&gt;
&lt;PRE&gt;&lt;CODE class="language-sql"&gt;SELECT * FROM prod.db.table.snapshots;
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;P&gt;Your query follows that pattern exactly. The error you're getting:&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;raw.landing.emp_ice.snapshots is not a valid identifier as it has more than 2 name parts. SQLSTATE: 42601&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;is the SQL parser treating raw.landing.emp_ice.snapshots as a single four-part table identifier -- not as a reference to Iceberg's snapshots metadata table. That is more name parts than a table reference allows here, so the parser rejects the query.&lt;/P&gt;
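&lt;P&gt;For contrast -- this is just the pattern above restated against the table from your original post -- the plain three-part reference parses fine; only the extra metadata suffix trips the parser:&lt;/P&gt;
&lt;PRE&gt;&lt;CODE class="language-sql"&gt;-- parses: catalog.schema.table
SELECT * FROM raw.landing.emp_ice;

-- rejected: the parser sees extra name parts, not a metadata table
SELECT * FROM raw.landing.emp_ice.snapshots;
&lt;/CODE&gt;&lt;/PRE&gt;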
&lt;P&gt;Worth flagging: I haven't found documentation that confirms querying Iceberg metadata tables this way is supported in Databricks SQL today. The error behavior and the gap in the docs point in the same direction -- this likely isn't supported in this environment right now.&lt;/P&gt;
&lt;HR /&gt;
&lt;P&gt;Here's what you can do today.&lt;/P&gt;
&lt;P&gt;Option 1: Try DESCRIBE HISTORY first&lt;/P&gt;
&lt;P&gt;Run:&lt;/P&gt;
&lt;PRE&gt;&lt;CODE class="language-sql"&gt;DESCRIBE HISTORY raw.landing.emp_ice;
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;P&gt;If it works on your Iceberg table, you'll get per-operation metadata -- timestamp, operation type, user, etc. -- similar to Delta table history. From there, time-travel works like this:&lt;/P&gt;
&lt;PRE&gt;&lt;CODE class="language-sql"&gt;SELECT *
FROM raw.landing.emp_ice VERSION AS OF &amp;lt;version_number&amp;gt;;
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;P&gt;where version_number comes from the version column in that history output.&lt;/P&gt;
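&lt;P&gt;If a point in time is easier to pin down than a version number, the same time-travel syntax also accepts a timestamp. This is a sketch -- the literal below is a placeholder, and the same caveat about Iceberg support in your runtime applies:&lt;/P&gt;
&lt;PRE&gt;&lt;CODE class="language-sql"&gt;SELECT *
FROM raw.landing.emp_ice TIMESTAMP AS OF '2026-03-16 00:00:00';
&lt;/CODE&gt;&lt;/PRE&gt;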
&lt;P&gt;One caveat: I haven't found docs that explicitly guarantee DESCRIBE HISTORY works for Iceberg tables across all Databricks runtimes, so test it in your workspace first. If you get a "not supported for Iceberg" error, this path isn't available for that table type in your environment.&lt;/P&gt;
&lt;P&gt;Option 2: Use an external Iceberg client via Unity Catalog's Iceberg REST Catalog&lt;/P&gt;
&lt;P&gt;If you need the full Iceberg metadata tables -- snapshots, files, manifests, all of it -- you can reach them through an external Iceberg client configured against Unity Catalog's Iceberg REST Catalog endpoint.&lt;/P&gt;
&lt;P&gt;Once that's set up, standard Iceberg metadata queries work from the external Spark session:&lt;/P&gt;
&lt;PRE&gt;&lt;CODE class="language-sql"&gt;SELECT * FROM &amp;lt;uc_catalog&amp;gt;.&amp;lt;schema&amp;gt;.emp_ice.snapshots;
SELECT * FROM &amp;lt;uc_catalog&amp;gt;.&amp;lt;schema&amp;gt;.emp_ice.history;
SELECT * FROM &amp;lt;uc_catalog&amp;gt;.&amp;lt;schema&amp;gt;.emp_ice.files;
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;P&gt;You'll need Unity Catalog enabled and an Iceberg-capable client configured with spark.sql.catalog.&amp;lt;name&amp;gt;.type=rest pointing at the REST endpoint. This is more of a platform-level integration than a quick SQL change in the Databricks UI, but it gets you full access to the Iceberg metadata table experience.&lt;/P&gt;
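&lt;P&gt;As an illustration of what that client setup might look like: the catalog alias uc, the endpoint path, the token, and the warehouse value below are all placeholders I'm assuming, not verified settings -- check your workspace's Unity Catalog docs for the exact REST URI and credential properties. A PySpark session outside Databricks could be wired up roughly like this:&lt;/P&gt;
&lt;PRE&gt;&lt;CODE class="language-python"&gt;from pyspark.sql import SparkSession

# Sketch only. Requires the iceberg-spark-runtime package on the classpath.
# "uc" is an arbitrary catalog alias; host, token, and warehouse values
# are placeholders for your workspace's actual settings.
spark = (
    SparkSession.builder
    .config("spark.sql.catalog.uc", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.uc.type", "rest")
    .config("spark.sql.catalog.uc.uri", "https://&amp;lt;workspace-host&amp;gt;/api/2.1/unity-catalog/iceberg-rest")
    .config("spark.sql.catalog.uc.token", "&amp;lt;personal-access-token&amp;gt;")
    .config("spark.sql.catalog.uc.warehouse", "&amp;lt;uc_catalog&amp;gt;")
    .getOrCreate()
)

# With the catalog registered, the Iceberg metadata tables resolve normally.
spark.sql("SELECT * FROM uc.&amp;lt;schema&amp;gt;.emp_ice.snapshots").show()
&lt;/CODE&gt;&lt;/PRE&gt;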
&lt;HR /&gt;
&lt;P&gt;Short version:&lt;/P&gt;
&lt;P&gt;Your query follows the upstream Iceberg pattern, but the Databricks SQL engine isn't interpreting the trailing .snapshots as a metadata-table reference -- hence the error.&lt;/P&gt;
&lt;P&gt;Start with DESCRIBE HISTORY and see if that covers your use case. If you need the full Iceberg metadata tables, the current path is an external Iceberg client pointed at the Unity Catalog REST Catalog endpoint.&lt;/P&gt;
&lt;P&gt;If DESCRIBE HISTORY fails or behaves unexpectedly, grab the runtime version and full error message and take that to Databricks Support. Based on what I'm seeing in the docs and the error you're hitting, I don't have enough to claim there's another route beyond what's laid out here.&lt;/P&gt;
&lt;P&gt;Hope this helps. -- Louis&lt;/P&gt;</description>
      <pubDate>Mon, 23 Mar 2026 11:26:47 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/how-to-access-snapshots-in-iceberg-tables/m-p/151706#M53689</guid>
      <dc:creator>Louis_Frolio</dc:creator>
      <dc:date>2026-03-23T11:26:47Z</dc:date>
    </item>
  </channel>
</rss>

