<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Reading external Iceberg table in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/reading-external-iceberg-table/m-p/114524#M44855</link>
    <description>&lt;P&gt;Hi, I'm facing the same problem.&lt;/P&gt;&lt;P&gt;However, when I set the access mode to "No isolation shared" I lose access to the external location where the Iceberg table resides. Is there a way to force Spark to NOT use the catalog even in the "Standard (formerly Shared)" access mode? I've tried setting the following option in the compute configuration:&lt;/P&gt;&lt;PRE&gt;&lt;SPAN&gt;spark.databricks.unityCatalog.enabled false&lt;/SPAN&gt;&lt;/PRE&gt;&lt;P&gt;but that doesn't seem to make any difference; I'm still getting the familiar error:&lt;/P&gt;&lt;PRE&gt;NoSuchTableException: [TABLE_OR_VIEW_NOT_FOUND] The table or view ___ cannot be found. Verify the spelling and correctness of the schema and catalog.&lt;BR /&gt;If you did not qualify the name with a schema, verify the current_schema() output, or qualify the name with the correct schema and catalog.&lt;/PRE&gt;&lt;P&gt;which is of course correct, as the Iceberg table isn't known to the catalog. But why does it have to be in the catalog at all? Can I not just read the Iceberg table data without registering it in the catalog?&lt;/P&gt;</description>
    <pubDate>Fri, 04 Apr 2025 13:42:01 GMT</pubDate>
    <dc:creator>Sash</dc:creator>
    <dc:date>2025-04-04T13:42:01Z</dc:date>
    <item>
      <title>Reading external Iceberg table</title>
      <link>https://community.databricks.com/t5/data-engineering/reading-external-iceberg-table/m-p/56452#M30559</link>
      <description>&lt;P&gt;&lt;SPAN&gt;Hi all, I am trying to read an external Iceberg table. A separate Spark SQL script creates my Iceberg table, and now I need to read those Iceberg tables (created outside of Databricks) from my Databricks notebook. Could someone tell me the approach for that? I tried using spark.read.format("iceberg").load("s3://path to my Iceberg data folder") but I am getting an error. Any help would be appreciated.&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 04 Jan 2024 22:42:38 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/reading-external-iceberg-table/m-p/56452#M30559</guid>
      <dc:creator>Ambesh</dc:creator>
      <dc:date>2024-01-04T22:42:38Z</dc:date>
    </item>
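The question above uses a path-based read. A hedged note on what that typically requires (the jar name, bucket, and table path below are illustrative, not from the thread): `spark.read.format("iceberg").load(path)` resolves the path with Iceberg's HadoopTables support, so the path must be the table's root directory, the one containing the `metadata/` folder, rather than the data folder, and a matching `iceberg-spark-runtime` jar must be installed on the cluster.

```
# Cluster library (match the cluster's Spark/Scala versions):
#   iceberg-spark-runtime-3.2_2.12

# Spark config (compute configuration):
spark.sql.extensions org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions

# Notebook read -- point at the table root, not the data/ subfolder:
#   df = spark.read.format("iceberg").load("s3://my-bucket/warehouse/db/my_table")
```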
    <item>
      <title>Re: Reading external Iceberg table</title>
      <link>https://community.databricks.com/t5/data-engineering/reading-external-iceberg-table/m-p/56462#M30563</link>
      <description>&lt;P&gt;Have you installed the jar to be able to read Iceberg?&lt;BR /&gt;&lt;A href="https://www.dremio.com/blog/getting-started-with-apache-iceberg-in-databricks/" target="_blank"&gt;https://www.dremio.com/blog/getting-started-with-apache-iceberg-in-databricks/&lt;/A&gt;&lt;/P&gt;&lt;P&gt;You can also try to use the UniForm format, if that is possible, of course.&lt;BR /&gt;&lt;A href="https://docs.databricks.com/en/delta/uniform.html" target="_blank"&gt;https://docs.databricks.com/en/delta/uniform.html&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 05 Jan 2024 08:21:39 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/reading-external-iceberg-table/m-p/56462#M30563</guid>
      <dc:creator>-werners-</dc:creator>
      <dc:date>2024-01-05T08:21:39Z</dc:date>
    </item>
    <item>
      <title>Re: Reading external Iceberg table</title>
      <link>https://community.databricks.com/t5/data-engineering/reading-external-iceberg-table/m-p/56538#M30591</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/14792"&gt;@-werners-&lt;/a&gt;&lt;BR /&gt;&lt;SPAN&gt;I am using Databricks Runtime 10.4 (Spark 3.2), so I have downloaded "iceberg-spark-runtime-3.2_2.12".&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;Also, the table exists in the S3 bucket.&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;The error message is: java.util.NoSuchElementException: None.get&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;I am also attaching a screenshot for reference.&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 05 Jan 2024 18:27:11 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/reading-external-iceberg-table/m-p/56538#M30591</guid>
      <dc:creator>Ambesh</dc:creator>
      <dc:date>2024-01-05T18:27:11Z</dc:date>
    </item>
    <item>
      <title>Re: Reading external Iceberg table</title>
      <link>https://community.databricks.com/t5/data-engineering/reading-external-iceberg-table/m-p/56790#M30648</link>
      <description>&lt;P&gt;You also need to configure the cluster, according to the blog.&lt;BR /&gt;If that still does not work, can you try with a recent LTS release, e.g. 13.3?&lt;/P&gt;</description>
      <pubDate>Tue, 09 Jan 2024 16:01:16 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/reading-external-iceberg-table/m-p/56790#M30648</guid>
      <dc:creator>-werners-</dc:creator>
      <dc:date>2024-01-09T16:01:16Z</dc:date>
    </item>
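For readers following the cluster-configuration advice above, here is a minimal sketch of the kind of compute Spark config involved when the Iceberg tables live behind a Glue catalog, assuming the `iceberg-spark-runtime` and `iceberg-aws-bundle` jars are installed on the cluster; the catalog name `glue_cat` and the warehouse path are illustrative, not from the thread:

```
spark.sql.extensions org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions
spark.sql.catalog.glue_cat org.apache.iceberg.spark.SparkCatalog
spark.sql.catalog.glue_cat.catalog-impl org.apache.iceberg.aws.glue.GlueCatalog
spark.sql.catalog.glue_cat.io-impl org.apache.iceberg.aws.s3.S3FileIO
spark.sql.catalog.glue_cat.warehouse s3://my-bucket/warehouse
```

With such a config the table is addressed through the named catalog, e.g. `spark.table("glue_cat.db.my_table")`, rather than through the default catalog, which is why unqualified names keep raising TABLE_OR_VIEW_NOT_FOUND.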
    <item>
      <title>Re: Reading external Iceberg table</title>
      <link>https://community.databricks.com/t5/data-engineering/reading-external-iceberg-table/m-p/56791#M30649</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/14792"&gt;@-werners-&lt;/a&gt;&amp;nbsp;the cluster was provisioned with all the requirements as stated in the doc. I also tried with runtime 13.2 and the corresponding Iceberg jar; this time only the error message changed (it is more informative now), but Databricks is still not able to read the Iceberg tables in S3 with Glue as the catalog. The error says: AnalysisException: [&lt;A class="" href="https://docs.databricks.com/error-messages/error-classes.html#table_or_view_not_found" target="_blank" rel="noopener noreferrer"&gt;TABLE_OR_VIEW_NOT_FOUND&lt;/A&gt;], as it is not able to read from the Glue catalog. I also provisioned the instance profile for access to Glue and the S3 bucket.&lt;/P&gt;</description>
      <pubDate>Tue, 09 Jan 2024 16:07:53 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/reading-external-iceberg-table/m-p/56791#M30649</guid>
      <dc:creator>Ambesh</dc:creator>
      <dc:date>2024-01-09T16:07:53Z</dc:date>
    </item>
    <item>
      <title>Re: Reading external Iceberg table</title>
      <link>https://community.databricks.com/t5/data-engineering/reading-external-iceberg-table/m-p/57842#M30932</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/9"&gt;@Retired_mod&lt;/a&gt;&amp;nbsp;yes, the Iceberg table does not exist in the default catalog because it's created externally (outside of Databricks) by a separate Spark SQL script. The catalog it uses is the Glue catalog. The question is: how can I access that external Iceberg table from within my Databricks notebook?&lt;/P&gt;</description>
      <pubDate>Thu, 18 Jan 2024 21:06:49 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/reading-external-iceberg-table/m-p/57842#M30932</guid>
      <dc:creator>Ambesh</dc:creator>
      <dc:date>2024-01-18T21:06:49Z</dc:date>
    </item>
    <item>
      <title>Re: Reading external Iceberg table</title>
      <link>https://community.databricks.com/t5/data-engineering/reading-external-iceberg-table/m-p/83690#M36986</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/97529"&gt;@Ambesh&lt;/a&gt;&amp;nbsp;did you solve this eventually? I am getting the same error: AnalysisException: [&lt;A class="" href="https://docs.databricks.com/error-messages/error-classes.html#table_or_view_not_found" target="_blank" rel="noopener noreferrer nofollow"&gt;TABLE_OR_VIEW_NOT_FOUND&lt;/A&gt;]&lt;/P&gt;</description>
      <pubDate>Tue, 20 Aug 2024 23:39:09 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/reading-external-iceberg-table/m-p/83690#M36986</guid>
      <dc:creator>MichaelH2024</dc:creator>
      <dc:date>2024-08-20T23:39:09Z</dc:date>
    </item>
    <item>
      <title>Re: Reading external Iceberg table</title>
      <link>https://community.databricks.com/t5/data-engineering/reading-external-iceberg-table/m-p/111096#M43789</link>
      <description>&lt;P&gt;To use Apache Iceberg via the Hadoop catalog on Databricks, it was found to work with the following settings:&lt;/P&gt;&lt;P&gt;- Use a Databricks Runtime version of 12.2 LTS or earlier.&lt;BR /&gt;- Set the access mode to "No isolation shared" (the mode where Unity Catalog cannot be used).&lt;BR /&gt;- Use a library compatible with Java 8 (i.e., an Iceberg library earlier than version 1.6.1).&lt;BR /&gt;- Apply the necessary Iceberg-related settings in the Spark configuration.&lt;/P&gt;&lt;P&gt;There is also an article (in Japanese) that explains how to resolve the errors:&lt;/P&gt;&lt;P&gt;- &lt;A href="https://qiita.com/manabian/items/4c2c78c7db77f704e5ab" target="_blank"&gt;https://qiita.com/manabian/items/4c2c78c7db77f704e5ab&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="iceberg.png" style="width: 400px;"&gt;&lt;img src="https://community.databricks.com/t5/image/serverpage/image-id/15049iC82CEE6613C3C056/image-size/medium?v=v2&amp;amp;px=400" role="button" title="iceberg.png" alt="iceberg.png" /&gt;&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 25 Feb 2025 06:27:05 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/reading-external-iceberg-table/m-p/111096#M43789</guid>
      <dc:creator>Manabian</dc:creator>
      <dc:date>2025-02-25T06:27:05Z</dc:date>
    </item>
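The settings listed in the reply above can be sketched as cluster Spark config along these lines (a sketch under that reply's stated assumptions: DBR 12.2 LTS or earlier, "No isolation shared" access mode, an Iceberg library older than 1.6.1 installed; the catalog name `hadoop_cat` and the warehouse path are illustrative, not from the thread):

```
spark.sql.extensions org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions
spark.sql.catalog.hadoop_cat org.apache.iceberg.spark.SparkCatalog
spark.sql.catalog.hadoop_cat.type hadoop
spark.sql.catalog.hadoop_cat.warehouse s3://my-bucket/warehouse
```

A Hadoop catalog stores table metadata directly under the warehouse path, so no external metastore is needed; tables are then read as `hadoop_cat.db.my_table`.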
    <item>
      <title>Re: Reading external Iceberg table</title>
      <link>https://community.databricks.com/t5/data-engineering/reading-external-iceberg-table/m-p/114524#M44855</link>
      <description>&lt;P&gt;Hi, I'm facing the same problem.&lt;/P&gt;&lt;P&gt;However, when I set the access mode to "No isolation shared" I lose access to the external location where the Iceberg table resides. Is there a way to force Spark to NOT use the catalog even in the "Standard (formerly Shared)" access mode? I've tried setting the following option in the compute configuration:&lt;/P&gt;&lt;PRE&gt;&lt;SPAN&gt;spark.databricks.unityCatalog.enabled false&lt;/SPAN&gt;&lt;/PRE&gt;&lt;P&gt;but that doesn't seem to make any difference; I'm still getting the familiar error:&lt;/P&gt;&lt;PRE&gt;NoSuchTableException: [TABLE_OR_VIEW_NOT_FOUND] The table or view ___ cannot be found. Verify the spelling and correctness of the schema and catalog.&lt;BR /&gt;If you did not qualify the name with a schema, verify the current_schema() output, or qualify the name with the correct schema and catalog.&lt;/PRE&gt;&lt;P&gt;which is of course correct, as the Iceberg table isn't known to the catalog. But why does it have to be in the catalog at all? Can I not just read the Iceberg table data without registering it in the catalog?&lt;/P&gt;</description>
      <pubDate>Fri, 04 Apr 2025 13:42:01 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/reading-external-iceberg-table/m-p/114524#M44855</guid>
      <dc:creator>Sash</dc:creator>
      <dc:date>2025-04-04T13:42:01Z</dc:date>
    </item>
  </channel>
</rss>

