05-01-2025 11:15 AM
I am reading data from Databricks in DataStage 11.7 on-prem using the DataStage JDBC connector and getting the error below. When I limit the select query to one row, it is able to read data from the source.
JDBC_Connector_0: The connector encountered a Java exception:
java.sql.SQLException: [Databricks][JDBCDriver](500051) ERROR processing query/statement. Error Code: 0, SQL state: null, Query: select cola,colb from table1, Error message from Server: Configuration AutoCommit is not available..
I am using the latest JDBC driver available from Databricks and have done the required configuration. Any assistance on this would be of great help.
05-01-2025 11:49 AM
Greetings Fuzail, here is a suggestion you might want to consider:
Set the AutoCommit parameter to true in the connector's connection settings to align with the Databricks JDBC driver's behavior. This adjustment should prevent the connector from attempting manual commits, which are not supported.
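For readers hitting the same 500051 error outside of DataStage, the equivalent fix in plain JDBC is a single call on the connection. Below is a minimal sketch, assuming a hypothetical connection URL (placeholders in angle brackets) and reusing the table1(cola, colb) query from the original post; in the DataStage connector itself this is a connection property rather than code:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DatabricksAutoCommitExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical URL; substitute your workspace host, HTTP path, and token.
        String url = "jdbc:databricks://<workspace-host>:443/default;"
                + "transportMode=http;ssl=1;httpPath=<http-path>;"
                + "AuthMech=3;UID=token;PWD=<token>";

        try (Connection conn = DriverManager.getConnection(url)) {
            // The Databricks driver rejects manual commits, so keep auto-commit
            // enabled. (true is the JDBC default, but some tools, including the
            // DataStage JDBC connector, switch it off unless told otherwise.)
            conn.setAutoCommit(true);

            try (Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("select cola, colb from table1")) {
                while (rs.next()) {
                    System.out.println(rs.getString("cola") + ", " + rs.getString("colb"));
                }
            }
        }
    }
}
```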
05-01-2025 04:29 PM
Thank you so much for the quick help. Setting AutoCommit to true resolved the issue. I have one follow-up question: the update to Databricks over JDBC is taking a very long time and appears to be processing row by row. I tried adjusting the connector settings, but it did not help. From the DataStage log I can see "The driver does not support batch updates. The connector will enforce the batch size value of 1." Is there any possible workaround for this issue?
05-02-2025 10:12 AM
I have one more follow-up question: the update to Databricks over JDBC is taking a very long time and appears to be processing row by row. I tried adjusting the connector settings, but it did not help. From the DataStage log I can see "The driver does not support batch updates. The connector will enforce the batch size value of 1." Is there any possible workaround for this issue? @BigRoux, can you provide your suggestion for this?
05-02-2025 10:33 AM
Here are some suggestions; I am not sure they fit exactly with what you are doing, but they are worth mentioning.
- COPY INTO: Databricks supports the COPY INTO command, which can handle bulk data ingestion efficiently. This approach sidesteps the limitations of JDBC for batch updates.
- Multi-row INSERT with a VALUES clause: you can construct an INSERT INTO statement that batches hundreds of rows within a single operation (see the sketch after this list). Note that you may need additional logic to split large jobs into manageable chunks.
- If COPY INTO or Spark SQL is not feasible, consider using Databricks' supported ingestion methods such as DataFrames or Delta Lake APIs for optimized data writes.
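To make the multi-row VALUES workaround concrete, here is a minimal plain-JDBC sketch. The connection URL placeholders, the Row record, and the chunking helper are hypothetical illustrations; table1(cola, colb) is taken from the query in the original post:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.List;

public class DatabricksMultiRowInsert {
    // Hypothetical row type for illustration.
    record Row(String cola, String colb) {}

    static void insertChunk(Connection conn, List<Row> rows) throws Exception {
        // Build one INSERT with a multi-row VALUES clause so a single round
        // trip writes the whole chunk, instead of one statement per row.
        StringBuilder sql = new StringBuilder("INSERT INTO table1 (cola, colb) VALUES ");
        for (int i = 0; i < rows.size(); i++) {
            sql.append(i == 0 ? "(?, ?)" : ", (?, ?)");
        }
        try (PreparedStatement ps = conn.prepareStatement(sql.toString())) {
            int p = 1;
            for (Row r : rows) {
                ps.setString(p++, r.cola());
                ps.setString(p++, r.colb());
            }
            ps.executeUpdate();
        }
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical URL; substitute your workspace host, HTTP path, and token.
        String url = "jdbc:databricks://<workspace-host>:443/default;"
                + "transportMode=http;ssl=1;httpPath=<http-path>;"
                + "AuthMech=3;UID=token;PWD=<token>";
        try (Connection conn = DriverManager.getConnection(url)) {
            conn.setAutoCommit(true);
            // In a real job, split the full load into chunks of a few hundred
            // rows each and call insertChunk once per chunk.
            insertChunk(conn, List.of(new Row("a1", "b1"), new Row("a2", "b2")));
        }
    }
}
```

If the data can be staged as files in cloud storage first, COPY INTO can be issued through the same Statement API, for example stmt.executeUpdate("COPY INTO table1 FROM '<staged-path>' FILEFORMAT = PARQUET"), which moves the bulk write off the JDBC row path entirely.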