Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

JDBC error trying a GetSchemas call

JeffSeaman
New Contributor II

Hi Community,

I have a free demo version and can create a JDBC connection and get metadata (schema, table, and column structure info).

Everything works as described in the docs, but when working with someone who has a paid version of Databricks, the same code isn't working. Their admin can see the JDBC connection in the logs, and it shows as a successful connection.
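For reference, the metadata pull is just the standard JDBC metadata calls, roughly like this (a sketch; the host, warehouse path, and token are placeholders, not our real values):

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class SchemaScan {
    public static void main(String[] args) throws Exception {
        // Placeholders -- substitute your workspace host, warehouse HTTP path, and token.
        String url = "jdbc:databricks://<workspace-host>:443/default;"
                + "transportMode=http;ssl=1;"
                + "httpPath=/sql/1.0/warehouses/<warehouse-id>;"
                + "AuthMech=3;UID=token;PWD=<personal-access-token>";

        try (Connection conn = DriverManager.getConnection(url)) {
            DatabaseMetaData md = conn.getMetaData();
            // getSchemas() is the call that produces the GetSchemas error below.
            try (ResultSet rs = md.getSchemas()) {
                while (rs.next()) {
                    System.out.println(rs.getString("TABLE_CATALOG")
                            + "." + rs.getString("TABLE_SCHEM"));
                }
            }
        }
    }
}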

This is the error trace:

[Databricks][JDBCDriver](500594) Error calling GetSchemas API call. Error code from server: 0. Error message from server: TStatus(statusCode:ERROR_STATUS, infoMessages:[*org.apache.hive.service.cli.HiveSQLException:Error
operating GET_SCHEMAS The TCP/IP connection to the host 10.202.34.40, port 1433 has failed. Error: "Connect timed out. Verify the connection properties. Make sure that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port. Make sure that TCP connections to the port are not blocked by a firewall.".:168:167, org.apache.spark.sql.hive.thriftserver.HiveThriftServerErrors$:hiveOperatingError:HiveThriftServerErrors.scala:67, org.apache.spark.sql.hive.thriftserver.HiveThriftServerErrors$:hiveOperatingError:HiveThriftServerErrors.scala:61, org.apache.spark.sql.hive.thriftserver.SparkAsyncOperation$$anonfun$onError$1:applyOrElse:SparkAsyncOperation.scala:210, org.apache.spark.sql.hive.thriftserver.SparkAsyncOperation$$anonfun$onError$1...

To me that looks like Databricks can't access its own Hive metastore. I know this other person's workspace uses Unity Catalog, though. Maybe the Hive message isn't related?

I've tried every JDBC URL parameter I could find, in every combination of options I could think of.

Anyone know what's going on?

Thanks!


1 ACCEPTED SOLUTION


BigRoux
Databricks Employee

Hey @JeffSeaman , I did some digging in our internal docs and found a few things for you to consider/research:

The most likely root cause is a SQL Server connectivity issue.

The central error describes a TCP timeout to 10.202.34.40:1433, the standard SQL Server port:

  • This strongly indicates that the Databricks compute resource (cluster or SQL warehouse) cannot reach the desired SQL Server endpoint—potentially due to network, routing, firewall, or SQL Server configuration issues.
  • This is not an indicator of a Databricks metastore (Hive or Unity Catalog) outage or problem, but rather an inability of Databricks to initiate a connection to the remote SQL Server.

This aligns with standard troubleshooting guidance for JDBC connection timeouts, including:

  • Verifying that the remote SQL Server is running and listening at the specified IP/port (a quick reachability sketch follows this list).
  • Confirming firewall rules (on the Databricks side, the SQL Server VM(s), or any intermediate network device) are not blocking TCP/1433.
  • Ensuring NAT, routing, or private link rules are configured for connectivity if using private endpoints/VNET architectures.
  • Making sure JDBC URL parameters are compatible, especially in environments with custom SSL/certificates or required authentication mechanisms.
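If it helps, here is a minimal reachability probe for the first two points (a sketch; the host/port come from your error trace, and it only tells you something useful if you run it from the same network as your Databricks compute):

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortProbe {
    public static void main(String[] args) {
        String host = "10.202.34.40"; // from the error trace
        int port = 1433;              // default SQL Server port
        try (Socket socket = new Socket()) {
            // Fail fast instead of waiting out the default TCP timeout.
            socket.connect(new InetSocketAddress(host, port), 5_000);
            System.out.println("TCP connect to " + host + ":" + port + " succeeded");
        } catch (IOException e) {
            // Same symptom the JDBC driver reports: the port is unreachable from here.
            System.out.println("TCP connect failed: " + e.getMessage());
        }
    }
}

If that connect times out from the compute's network, the problem is network-level, not metastore-level.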

Are you trying to connect from Free Edition to SQL Server or SQL Server to Free Edition?

Let me know. Cheers, Louis.


8 REPLIES

ManojkMohan
Honored Contributor

Root Cause of Error

Different Metastores

  • Your free demo workspace uses the Hive Metastore by default.
  • The paid workspace uses Unity Catalog.

Corrective Solution

Connect via Databricks SQL Warehouse

Use the JDBC endpoint from a SQL Warehouse (not the legacy Hive thrift server).

Example URL:

jdbc:databricks://<workspace-host>:443/default;transportMode=http;ssl=1;httpPath=/sql/1.0/warehouses/<warehouse-id>;AuthMech=3;UID=token;PWD=<personal-access-token>


Use the Official Databricks JDBC Driver

Download from Databricks docs.

Don’t use generic Hive/Spark JDBC drivers.
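A quick way to confirm the official driver is actually on your classpath (the class name below is the one documented for the Databricks JDBC driver):

public class DriverCheck {
    public static void main(String[] args) throws ClassNotFoundException {
        // Throws ClassNotFoundException if the official Databricks JDBC
        // driver JAR is not on the classpath.
        Class.forName("com.databricks.client.jdbc.Driver");
        System.out.println("Databricks JDBC driver found");
    }
}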

Verify Unity Catalog Permissions

The user running JDBC must have:

GRANT USE CATALOG ON CATALOG <catalog> TO `<user>`;
GRANT USE SCHEMA ON SCHEMA <catalog>.<schema> TO `<user>`;

Without these, getSchemas() will fail even on the right endpoint.

Test with Basic Queries

Run:

SHOW CATALOGS;
SHOW SCHEMAS IN <catalog>;
SHOW TABLES IN <catalog>.<schema>;


If these succeed, JDBC + Unity Catalog is configured correctly.
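The same check from the JDBC side looks roughly like this (a sketch; it assumes a connection opened with the URL above, with a real catalog name passed in):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class SmokeTest {
    // Assumes conn was opened with the SQL Warehouse URL shown above.
    static void listSchemas(Connection conn, String catalog) throws Exception {
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SHOW SCHEMAS IN " + catalog)) {
            while (rs.next()) {
                System.out.println(rs.getString(1)); // first column is the schema name
            }
        }
    }
}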

Let me know if the above works, and if you find this useful, please mark it as an accepted solution.

JeffSeaman
New Contributor II

Thank you for the quick reply, but I've done all the mentioned steps, including the grants. I got the URL base from the SQL Warehouses menu -> Connection details. We are using token auth (AuthMech=3); SSL is on and transport mode is HTTP.

ManojkMohan
Honored Contributor

Next Steps
Confirm and switch your compute (SQL Warehouse) 

Upgrade the Databricks SQL Warehouse to at least Databricks Runtime 13.3 LTS

Restart the SQL Warehouse after any major configuration changes, especially after attaching to Unity Catalog.

Test with basic queries after these changes:

SHOW CATALOGS;
SHOW SCHEMAS IN <catalog>;
SHOW TABLES IN <catalog>.<schema>;

If these steps do not resolve the issue, the next moves are collecting detailed driver logs and checking edge cases with different user accounts.
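For the driver logs, the official driver supports LogLevel and LogPath connection properties (per the Databricks JDBC driver documentation); for example, appended to the warehouse URL shown earlier:

jdbc:databricks://<workspace-host>:443/default;transportMode=http;ssl=1;httpPath=/sql/1.0/warehouses/<warehouse-id>;AuthMech=3;UID=token;PWD=<personal-access-token>;LogLevel=6;LogPath=/tmp/databricks-jdbc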

JeffSeaman
New Contributor II

Tried these as well (the posting rules are axing out the whole parameter strings, so the screenshots didn't survive):

Same errors without specifying the catalog, which I was thinking might have forced a Hive metastore check instead of a Unity Catalog check.


JeffSeaman
New Contributor II

The problem was an offline custom endpoint. It wasn't part of our metadata scan, since we were specifying a specific catalog and schema, but if any part of your instance is down, the JDBC driver will fail the metadata call. The solution was to create a service account that wasn't assigned to that endpoint. I know I'm getting some of the terminology mixed up, but that's the general idea.

BigRoux
Databricks Employee

@JeffSeaman , please let us know if any of my suggestions help get you on the right track. If they do, kindly mark the post as "Accepted Solution" so others can benefit as well.

Cheers, Louis.
