<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Databricks JDBC Error while connecting from Datastage JDBC connector in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/databricks-jdbc-error-while-connecting-from-datastage-jdbc/m-p/117454#M45502</link>
    <description>&lt;P&gt;Thank you so much for the quick help. Setting auto-commit to true resolved the issue. I have one more follow-up question: the update to Databricks over JDBC is taking a very long time and appears to be processing row by row. I tried adjusting the connector settings, but that did not help. From the DataStage log I can see "The driver does not support batch updates. The connector will enforce the batch size value of 1." Is there any possible workaround for this issue?&lt;/P&gt;</description>
    <pubDate>Thu, 01 May 2025 23:29:35 GMT</pubDate>
    <dc:creator>Fuzail</dc:creator>
    <dc:date>2025-05-01T23:29:35Z</dc:date>
    <item>
      <title>Databricks JDBC Error while connecting from Datastage JDBC connector</title>
      <link>https://community.databricks.com/t5/data-engineering/databricks-jdbc-error-while-connecting-from-datastage-jdbc/m-p/117437#M45498</link>
      <description>&lt;P&gt;I am reading data from Databricks in DataStage 11.7 on-prem using the DataStage JDBC connector and getting the error below. When I limited the select query to one row, it was able to read data from the source.&lt;/P&gt;&lt;P&gt;JDBC_Connector_0: The connector encountered a Java exception:&amp;nbsp;&lt;BR /&gt;java.sql.SQLException: [Databricks][JDBCDriver](500051) ERROR processing query/statement. Error Code: 0, SQL state: null, Query: select cola,colb&amp;nbsp; from table1, Error message from Server: Configuration AutoCommit is not available..&lt;/P&gt;&lt;P&gt;I am using the latest JDBC driver available from Databricks and have done the required configuration. Any assistance on this would be of great help.&lt;/P&gt;</description>
      <pubDate>Thu, 01 May 2025 18:15:53 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/databricks-jdbc-error-while-connecting-from-datastage-jdbc/m-p/117437#M45498</guid>
      <dc:creator>Fuzail</dc:creator>
      <dc:date>2025-05-01T18:15:53Z</dc:date>
    </item>
    <item>
      <title>Re: Databricks JDBC Error while connecting from Datastage JDBC connector</title>
      <link>https://community.databricks.com/t5/data-engineering/databricks-jdbc-error-while-connecting-from-datastage-jdbc/m-p/117442#M45500</link>
      <description>&lt;P&gt;Greetings Fuzail, here are some suggestions you might want to consider:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="paragraph"&gt;The error you're encountering, "Configuration AutoCommit is not available," when using the Databricks JDBC connector in DataStage 11.7 suggests a misalignment with the auto-commit settings.&lt;/DIV&gt;
&lt;H3&gt;Analysis and Recommendations:&lt;/H3&gt;
&lt;OL start="1"&gt;
&lt;LI&gt;
&lt;DIV class="paragraph"&gt;&lt;STRONG&gt;Auto-Commit Behavior in Databricks JDBC Driver&lt;/STRONG&gt;:
&lt;UL&gt;
&lt;LI&gt;The Databricks JDBC driver operates with auto-commit mode enabled by default. This means that manual commit operations are generally not supported, as indicated by related error messages like "Cannot use commit while connection is in auto-commit mode" from similar issues.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/DIV&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;DIV class="paragraph"&gt;&lt;STRONG&gt;Resolution Steps&lt;/STRONG&gt;:
&lt;UL&gt;
&lt;LI&gt;Ensure that the DataStage JDBC properties explicitly set the &lt;CODE&gt;AutoCommit&lt;/CODE&gt; parameter to &lt;CODE&gt;true&lt;/CODE&gt; in its connection settings to align with the Databricks JDBC driver's behavior. This adjustment should prevent the connector from attempting manual commits, which are not supported.&lt;/LI&gt;
&lt;LI&gt;Refer to the DataStage and Databricks configuration documentation to locate where these connection properties can be explicitly defined, such as in the configuration wizard or JDBC connection string.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/DIV&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;DIV class="paragraph"&gt;&lt;STRONG&gt;Additional Debugging&lt;/STRONG&gt;:
&lt;UL&gt;
&lt;LI&gt;If the error persists, verify that the latest Databricks JDBC driver version is being used, as updates might include fixes for such compatibility issues. The latest versions are recommended for addressing known problems.&lt;/LI&gt;
&lt;LI&gt;Cross-check the logs to identify whether any test queries fired by DataStage during connection initiation might conflict with the driver's auto-commit behavior.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/DIV&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;DIV class="paragraph"&gt;&lt;STRONG&gt;Documentation&lt;/STRONG&gt;:
&lt;UL&gt;
&lt;LI&gt;See the Databricks JDBC Driver Installation and Configuration Guide for further insights on how the driver handles transaction-related operations and auto-commit settings.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/DIV&gt;
&lt;/LI&gt;
&lt;/OL&gt;
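&lt;DIV class="paragraph"&gt;As a concrete illustration, the stage-level setting looks roughly like the following. Treat this as a sketch: the exact property path and names vary by DataStage version and fix pack, so check your connector's property tree rather than copying this verbatim.&lt;/DIV&gt;
&lt;PRE&gt;JDBC Connector stage properties (exact path varies by version):
  Auto-commit mode = Yes&lt;/PRE&gt;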
&lt;DIV class="paragraph"&gt;Making these adjustments should help resolve the auto-commit configuration error.&amp;nbsp;&lt;/DIV&gt;
&lt;DIV class="paragraph"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV class="paragraph"&gt;Cheers, Big Roux.&lt;/DIV&gt;</description>
      <pubDate>Thu, 01 May 2025 18:49:55 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/databricks-jdbc-error-while-connecting-from-datastage-jdbc/m-p/117442#M45500</guid>
      <dc:creator>Louis_Frolio</dc:creator>
      <dc:date>2025-05-01T18:49:55Z</dc:date>
    </item>
    <item>
      <title>Re: Databricks JDBC Error while connecting from Datastage JDBC connector</title>
      <link>https://community.databricks.com/t5/data-engineering/databricks-jdbc-error-while-connecting-from-datastage-jdbc/m-p/117454#M45502</link>
      <description>&lt;P&gt;Thank you so much for the quick help. Setting auto-commit to true resolved the issue. I have one more follow-up question: the update to Databricks over JDBC is taking a very long time and appears to be processing row by row. I tried adjusting the connector settings, but that did not help. From the DataStage log I can see "The driver does not support batch updates. The connector will enforce the batch size value of 1." Is there any possible workaround for this issue?&lt;/P&gt;</description>
      <pubDate>Thu, 01 May 2025 23:29:35 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/databricks-jdbc-error-while-connecting-from-datastage-jdbc/m-p/117454#M45502</guid>
      <dc:creator>Fuzail</dc:creator>
      <dc:date>2025-05-01T23:29:35Z</dc:date>
    </item>
    <item>
      <title>Re: Databricks JDBC Error while connecting from Datastage JDBC connector</title>
      <link>https://community.databricks.com/t5/data-engineering/databricks-jdbc-error-while-connecting-from-datastage-jdbc/m-p/117538#M45520</link>
      <description>&lt;P&gt;&lt;SPAN&gt;I have one more follow-up question: the update to Databricks over JDBC is taking a very long time and appears to be processing row by row. I tried adjusting the connector settings, but that did not help. From the DataStage log I can see "The driver does not support batch updates. The connector will enforce the batch size value of 1." Is there any possible workaround for this issue?&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/34815"&gt;@Louis_Frolio&lt;/a&gt;&amp;nbsp;, can you provide your suggestion for this?&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 02 May 2025 17:12:24 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/databricks-jdbc-error-while-connecting-from-datastage-jdbc/m-p/117538#M45520</guid>
      <dc:creator>Fuzail</dc:creator>
      <dc:date>2025-05-02T17:12:24Z</dc:date>
    </item>
    <item>
      <title>Re: Databricks JDBC Error while connecting from Datastage JDBC connector</title>
      <link>https://community.databricks.com/t5/data-engineering/databricks-jdbc-error-while-connecting-from-datastage-jdbc/m-p/117541#M45522</link>
      <description>&lt;P&gt;Here are some suggestions; I am not sure they fit exactly what you are doing, but they are worth mentioning.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="paragraph"&gt;The Databricks JDBC driver currently does not support batch updates, which is why your updates appear to process row by row with a batch size of 1.&amp;nbsp;&lt;/DIV&gt;
&lt;DIV class="paragraph"&gt;Here are the details and possible workarounds:&lt;/DIV&gt;
&lt;OL start="1"&gt;
&lt;LI&gt;
&lt;DIV class="paragraph"&gt;&lt;STRONG&gt;Driver Limitation&lt;/STRONG&gt;:
&lt;UL&gt;
&lt;LI&gt;The Databricks JDBC driver enforces a batch size of 1 for updates because it does not currently support batch operations in auto-commit mode.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/DIV&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;DIV class="paragraph"&gt;&lt;STRONG&gt;Workarounds&lt;/STRONG&gt;:
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Use &lt;CODE&gt;COPY INTO&lt;/CODE&gt;&lt;/STRONG&gt;: Databricks supports the &lt;CODE&gt;COPY INTO&lt;/CODE&gt; command, which can handle bulk data ingestion efficiently. This approach sidesteps the limitations of JDBC for batch updates.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Batch Inserts Using Spark SQL&lt;/STRONG&gt;: You can implement a workaround by inserting multiple rows in a single SQL statement via Spark SQL's &lt;CODE&gt;VALUES&lt;/CODE&gt; clause. For instance, you can construct an &lt;CODE&gt;INSERT INTO&lt;/CODE&gt; statement that batches hundreds of rows within a single operation. Note that you may need additional logic to handle splitting large jobs into manageable chunks.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Programmatic Ingestion&lt;/STRONG&gt;: If &lt;CODE&gt;COPY INTO&lt;/CODE&gt; or Spark SQL is not feasible, consider using Databricks' supported ingestion methods like DataFrames or Delta Lake APIs for optimized data writes.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/DIV&gt;
&lt;/LI&gt;
&lt;/OL&gt;
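&lt;DIV class="paragraph"&gt;To make the second workaround concrete, here is a minimal sketch that folds many rows into a few multi-row &lt;CODE&gt;INSERT&lt;/CODE&gt; statements. Table and column names are placeholders, and real code should parameterize or escape values properly instead of relying on &lt;CODE&gt;repr&lt;/CODE&gt;:&lt;/DIV&gt;

```python
# Sketch: batch many rows into multi-row INSERT ... VALUES statements so
# each round trip to Databricks carries a chunk of rows instead of one row.
# Table/column names are placeholders; production code must escape or
# parameterize values safely rather than use repr().

def chunked_inserts(table, columns, rows, batch_size=500):
    """Return INSERT statements, each covering up to batch_size rows."""
    stmts = []
    for i in range(0, len(rows), batch_size):
        chunk = rows[i:i + batch_size]
        values = ", ".join(
            "(" + ", ".join(repr(v) for v in row) + ")" for row in chunk
        )
        cols = ", ".join(columns)
        stmts.append(f"INSERT INTO {table} ({cols}) VALUES {values}")
    return stmts
```

&lt;DIV class="paragraph"&gt;Each generated statement then executes as a single operation, which usually helps considerably when the driver forces a batch size of 1.&lt;/DIV&gt;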
&lt;DIV class="paragraph"&gt;&amp;nbsp;&lt;/DIV&gt;</description>
      <pubDate>Fri, 02 May 2025 17:33:20 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/databricks-jdbc-error-while-connecting-from-datastage-jdbc/m-p/117541#M45522</guid>
      <dc:creator>Louis_Frolio</dc:creator>
      <dc:date>2025-05-02T17:33:20Z</dc:date>
    </item>
  </channel>
</rss>

