Hi @Kaitsu ,
The documentation mentions: "If you specify a property that is not supported by the connector, then the connector attempts to apply the property as a Spark server-side property for the client session."
Unlike many other JDBC drivers, which silently ignore unknown properties, the Databricks JDBC driver (based on the open-source Spark JDBC driver and the older Simba-based drivers) attempts to pass them to the Spark server as session configuration properties. The server can then fail the connection if it does not support that configuration.
Unfortunately, there is no direct, universal configuration property, either in the Databricks JDBC driver or on the Databricks server side, that instructs them to silently ignore all unsupported client session properties.
Is it possible for you to use a cluster or SQL warehouse that runs with default or safe values, so that the client's session-level override does not cause a failure? Alternatively, build the java.util.Properties object with only the right set of supported properties, for example:
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;
// ...
String url = "jdbc:databricks://<server-hostname>:443";
Properties p = new Properties();
p.put("httpPath", "<http-path>");
// Only include settings the connector actually supports; anything else is
// forwarded to Spark as a session configuration and may be rejected.
p.put("<setting1>", "<value1>");
p.put("<setting2>", "<value2>");
p.put("<settingN>", "<valueN>");
// ...
Connection conn = DriverManager.getConnection(url, p);
// ...
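If you cannot fully control which properties a client supplies, one option is to keep a whitelist of connector-supported keys and drop everything else before connecting. This is only a minimal sketch; the helper name and the set of supported keys below are assumptions you would need to adapt to your driver version:
import java.util.Properties;
import java.util.Set;

// Hypothetical helper: keep only keys known to be supported by the connector.
static Properties filterSupported(Properties source, Set<String> supportedKeys) {
    Properties filtered = new Properties();
    for (String name : source.stringPropertyNames()) {
        if (supportedKeys.contains(name)) {
            filtered.setProperty(name, source.getProperty(name));
        }
    }
    return filtered;
}

// Example usage (the key names here are illustrative, not an exhaustive list):
Set<String> supportedKeys = Set.of("httpPath", "AuthMech", "PWD");
Properties safe = filterSupported(p, supportedKeys);
// Then pass "safe" to DriverManager.getConnection(url, safe) instead of "p".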
The error thrown is a handled error; it should indicate which parameters were rejected, and the client should check for it.
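For example, a minimal sketch (reusing the url and p from above) of catching and surfacing that error on the client side:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

try (Connection conn = DriverManager.getConnection(url, p)) {
    // ... run queries ...
} catch (SQLException e) {
    // The driver surfaces the server's rejection as a handled SQLException;
    // the message should point to the unsupported or wrong parameter.
    System.err.println("Connection failed: " + e.getMessage());
}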
Thanks!