<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic JDBC Oracle Connection change Container Statement in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/jdbc-oracle-connection-change-container-statement/m-p/120426#M46166</link>
    <description>&lt;P&gt;Hey,&lt;/P&gt;&lt;P&gt;I'm running into a weird issue with the following code:&lt;/P&gt;&lt;LI-CODE lang="python"&gt;def getDf(query, preamble_sql=None):
        jdbc_url = f"jdbc:oracle:thin:@//{host}:{port}/{service_name}"
        request = spark.read \
            .format("jdbc") \
            .option("driver", "oracle.jdbc.driver.OracleDriver") \
            .option("url", jdbc_url) \
            .option("query", query) \
            .option("user", username) \
            .option("password", password)
        if preamble_sql is not None:
            request = request.option("sessionInitStatement", preamble_sql)
        df = request.load()
        return df&lt;/LI-CODE&gt;&lt;LI-CODE lang="python"&gt;display(getDf(
    "SELECT sys_context('USERENV','CON_NAME') AS container_name FROM dual",
    "ALTER SESSION SET CONTAINER = CDB$ROOT"
))&lt;/LI-CODE&gt;&lt;P&gt;The query returns "CDB$ROOT" as expected.&lt;/P&gt;&lt;P&gt;I would assume I can now run statements on the root container. But when I switch to the required LogMiner query:&lt;/P&gt;&lt;LI-CODE lang="python"&gt;display(getDf(
    "SELECT * FROM SYS.V_$ARCHIVED_LOG",
    "ALTER SESSION SET CONTAINER = CDB$ROOT"
))&lt;/LI-CODE&gt;&lt;P&gt;I'm getting a&amp;nbsp;&lt;SPAN&gt;&lt;EM&gt;java.sql.SQLSyntaxErrorException: ORA-00942: table or view does not exist&lt;/EM&gt; error.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;The same query works perfectly fine in DBeaver, though.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;My best guess is that Spark performs some pre-check, for example validating the table schema, which would explain why the first query works.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;So is there any way to make this work?&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;Best regards&lt;/P&gt;&lt;P&gt;Samuel&lt;/P&gt;</description>
    <pubDate>Wed, 28 May 2025 13:08:18 GMT</pubDate>
    <dc:creator>Splush</dc:creator>
    <dc:date>2025-05-28T13:08:18Z</dc:date>
    <item>
      <title>JDBC Oracle Connection change Container Statement</title>
      <link>https://community.databricks.com/t5/data-engineering/jdbc-oracle-connection-change-container-statement/m-p/120426#M46166</link>
      <description>&lt;P&gt;Hey,&lt;/P&gt;&lt;P&gt;I'm running into a weird issue with the following code:&lt;/P&gt;&lt;LI-CODE lang="python"&gt;def getDf(query, preamble_sql=None):
        jdbc_url = f"jdbc:oracle:thin:@//{host}:{port}/{service_name}"
        request = spark.read \
            .format("jdbc") \
            .option("driver", "oracle.jdbc.driver.OracleDriver") \
            .option("url", jdbc_url) \
            .option("query", query) \
            .option("user", username) \
            .option("password", password)
        if preamble_sql is not None:
            request = request.option("sessionInitStatement", preamble_sql)
        df = request.load()
        return df&lt;/LI-CODE&gt;&lt;LI-CODE lang="python"&gt;display(getDf(
    "SELECT sys_context('USERENV','CON_NAME') AS container_name FROM dual",
    "ALTER SESSION SET CONTAINER = CDB$ROOT"
))&lt;/LI-CODE&gt;&lt;P&gt;The query returns "CDB$ROOT" as expected.&lt;/P&gt;&lt;P&gt;I would assume I can now run statements on the root container. But when I switch to the required LogMiner query:&lt;/P&gt;&lt;LI-CODE lang="python"&gt;display(getDf(
    "SELECT * FROM SYS.V_$ARCHIVED_LOG",
    "ALTER SESSION SET CONTAINER = CDB$ROOT"
))&lt;/LI-CODE&gt;&lt;P&gt;I'm getting a&amp;nbsp;&lt;SPAN&gt;&lt;EM&gt;java.sql.SQLSyntaxErrorException: ORA-00942: table or view does not exist&lt;/EM&gt; error.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;The same query works perfectly fine in DBeaver, though.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;My best guess is that Spark performs some pre-check, for example validating the table schema, which would explain why the first query works.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;So is there any way to make this work?&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;Best regards&lt;/P&gt;&lt;P&gt;Samuel&lt;/P&gt;</description>
      <pubDate>Wed, 28 May 2025 13:08:18 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/jdbc-oracle-connection-change-container-statement/m-p/120426#M46166</guid>
      <dc:creator>Splush</dc:creator>
      <dc:date>2025-05-28T13:08:18Z</dc:date>
    </item>
    <item>
      <title>Re: JDBC Oracle Connection change Container Statement</title>
      <link>https://community.databricks.com/t5/data-engineering/jdbc-oracle-connection-change-container-statement/m-p/120464#M46173</link>
      <description>&lt;P&gt;Here is something to consider:&lt;/P&gt;
&lt;P class="_1t7bu9h1 paragraph"&gt;The issue you're experiencing likely stems from differences in behavior when accessing Oracle database objects via Spark JDBC versus other database clients like DBeaver. Specifically, Spark's JDBC interface may perform pre-checks or validation processes on the table schema before executing queries, which could trigger errors such as &lt;CODE&gt;java.sql.SQLSyntaxErrorException: ORA-00942: table or view does not exist&lt;/CODE&gt;. These checks can occur irrespective of the context or session settings that you apply, such as switching to the root container with &lt;CODE&gt;ALTER SESSION SET CONTAINER = CDB$ROOT&lt;/CODE&gt;.&lt;/P&gt;
&lt;P class="_1t7bu9h1 paragraph"&gt;The behavior you're observing, where a straightforward query (&lt;CODE&gt;SELECT sys_context('USERENV','CON_NAME') AS container_name FROM dual&lt;/CODE&gt;) works successfully, but other queries like &lt;CODE&gt;SELECT * FROM SYS.V_$ARCHIVED_LOG&lt;/CODE&gt; fail, aligns with Spark's JDBC interface potentially validating table existence in a manner distinct from DBeaver. The table &lt;CODE&gt;SYS.V_$ARCHIVED_LOG&lt;/CODE&gt; may not be directly accessible due to permissions or session scoping issues when accessing schema objects through Spark&lt;/P&gt;
&lt;P class="_1t7bu9h1 paragraph"&gt;Given this, to address your issue:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Ensure that the user credentials provided in the JDBC connection have sufficient permissions to access &lt;CODE&gt;SYS.V_$ARCHIVED_LOG&lt;/CODE&gt; within the specified container.&lt;/LI&gt;
&lt;/UL&gt;
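&lt;P&gt;As a possible alternative (a hedged sketch with placeholder values, not a verified fix): rather than switching containers per session, you could point the JDBC URL at a service that is registered on the CDB root, so that every connection Spark opens, including any schema probe, already runs in &lt;CODE&gt;CDB$ROOT&lt;/CODE&gt;. The host, port, and service name below are placeholders for your environment:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Placeholder connection details; substitute your environment's values.
host = "dbhost.example.com"
port = 1521
root_service_name = "ORCLCDB"  # a service registered on the CDB root

# Build a URL that lands directly in the root container, so no
# sessionInitStatement container switch is needed.
root_jdbc_url = f"jdbc:oracle:thin:@//{host}:{port}/{root_service_name}"
# Pass root_jdbc_url to the getDf() helper from the original post in
# place of the PDB URL.&lt;/LI-CODE&gt;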
&lt;P&gt;Hope this helps, Lou.&lt;/P&gt;</description>
      <pubDate>Wed, 28 May 2025 18:29:50 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/jdbc-oracle-connection-change-container-statement/m-p/120464#M46173</guid>
      <dc:creator>Louis_Frolio</dc:creator>
      <dc:date>2025-05-28T18:29:50Z</dc:date>
    </item>
  </channel>
</rss>

