04-03-2024 09:09 AM
Hi everyone,
I am trying to connect to a Databricks table through a SQL Warehouse, read data from it, and return the result via an Azure API.
However, non-English characters such as 'Ä' appear in the response as: ��.
I am using the latest version of the databricks-jdbc driver.
I have tried to resolve it by setting the system properties:
System.setProperty("file.encoding", "UTF-8");
System.setProperty("sun.jnu.encoding", "UTF-8");
I also tried adding the following to the connection string:
useUnicode=true;characterEncoding=UTF-8
However, this causes the exception:
Internal Server Error: [Databricks][DatabricksJDBCDriver](500051) ERROR processing query/statement. Error Code: 0, SQL state: TStatus(statusCode:ERROR_STATUS, infoMessages:[*org.apache.hive.service.cli.HiveSQLException:Configuration useUnicode is not available
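For reference, here is a minimal sketch of the setup described above. It assumes the Databricks JDBC driver is on the classpath under the class name com.databricks.client.jdbc.Driver, and uses placeholder values for the workspace host, SQL Warehouse HTTP path, personal access token, and a hypothetical table my_table with a string column name:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class WarehouseReadSketch {
    public static void main(String[] args) throws Exception {
        // Assumed driver class name for the Databricks JDBC (Simba) driver.
        Class.forName("com.databricks.client.jdbc.Driver");

        // Hypothetical connection details; replace with your workspace values.
        String url = "jdbc:databricks://<workspace-host>:443;httpPath=<sql-warehouse-http-path>;"
                + "AuthMech=3;UID=token;PWD=<personal-access-token>";

        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT name FROM my_table")) {
            while (rs.next()) {
                // Non-ASCII characters such as 'Ä' only print correctly
                // if the JVM's default charset is UTF-8.
                System.out.println(rs.getString("name"));
            }
        }
    }
}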
Accepted Solutions
04-08-2024 04:21 AM
Hi @Retired_mod
I was able to resolve the issue by moving the system properties out of the application code and into the Azure Function's environment variables via JAVA_OPTS, instead of setting them at the start of execution. This way the JVM is instantiated with the proper configuration from the start.
Thanks a lot
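For context, a minimal sketch of that configuration, assuming the Azure Functions app setting is named JAVA_OPTS and mirrors the system properties shown above:

JAVA_OPTS=-Dfile.encoding=UTF-8 -Dsun.jnu.encoding=UTF-8

Passing the flags at JVM startup means the default charset is already UTF-8 before any JDBC calls run, whereas System.setProperty("file.encoding", ...) has no effect on the default charset once the JVM has already initialized it.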
07-09-2024 04:32 PM
If Databricks support or product managers follow this forum, I suggest you review the SIMBA-provided docs.
They do not discuss the name-value pairs mentioned above regarding UTF and encoding.
https://www.databricks.com/spark/jdbc-drivers-download
There are other gaps in the SIMBA docs regarding name-value pairs, including PreparedMetadataLimitZero.
07-09-2024 04:43 PM
If Databricks support/Product Management are following the forum, note that the SIMBA PDF for driver version 2.6.28 does not discuss the name-value pairs in the above solution.
Other errata include PreparedMetadataLimitZero.

