@Retired_mod,
The links you pasted don't seem to work.
Anyway, please find my responses to your suggestions below.
1. Check if the session handle is valid and active. If the session has expired, a new session needs to be created.
I am using Apache Commons connection pooling, which creates a singleton DataSource object with connections created in advance (see the sketch below).
Please note that initially everything works fine and I am able to get data from Databricks.
The issue arises when, after the initial fetch, I leave the connection idle for about 20 minutes (with the server still running) and then try to fetch data from Databricks again via the same pooled connection that was created initially.
I was expecting the Databricks JDBC connector JAR to handle this error and create a new connection if the existing one had gone stale.
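For context, this is roughly how the pooled DataSource is set up. It is a minimal sketch using Apache Commons DBCP 2; the host, httpPath, token, and pool sizes are placeholders, not my exact configuration:

```java
import org.apache.commons.dbcp2.BasicDataSource;

// Minimal sketch of the singleton pooled DataSource described above.
// <workspace-host>, <http-path>, and <personal-access-token> are placeholders;
// the pool sizes are illustrative only.
public final class DatabricksPool {
    private static final BasicDataSource DS = new BasicDataSource();

    static {
        DS.setDriverClassName("com.databricks.client.jdbc.Driver");
        DS.setUrl("jdbc:databricks://<workspace-host>:443;httpPath=<http-path>;"
                + "AuthMech=3;UID=token;PWD=<personal-access-token>");
        DS.setInitialSize(5); // connections created up front, before any request
        DS.setMaxTotal(10);
    }

    private DatabricksPool() {}

    public static javax.sql.DataSource get() {
        return DS;
    }
}
```

Every request borrows a connection from this pool rather than opening its own, which is exactly the connection that goes stale after the idle period.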
2. Check if there are any issues with the session manager. This can be done by checking the logs of the session manager and verifying if it is running correctly.
I see the same error in the Databricks log4j logs (the Driver logs tab in the compute cluster section).
3. Verify if the JDBC driver is compatible with the Databricks version. If not, update the driver to a compatible version.
I am using 12.2 LTS (includes Apache Spark 3.3.2, Scala 2.12), and since I am pulling the latest Databricks JDBC driver from Maven (<version>2.6.33</version>), it should support 12.2 LTS.
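For reference, this is the dependency declaration I am using (assuming the standard Maven Central coordinates for the Databricks JDBC driver):

```xml
<dependency>
    <groupId>com.databricks</groupId>
    <artifactId>databricks-jdbc</artifactId>
    <version>2.6.33</version>
</dependency>
```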
4. Check if there are any network connectivity issues between the client and the Databricks cluster. This can be done by running a ping test or using a network monitoring tool.
There are no network issues. As I mentioned initially, the first fetch works fine; Databricks only throws the invalid session handle error after some idle time.
Also, I would like to mention that if I open a JDBC connection and close it for every request, it works fine irrespective of idle time (see the sketch below), but I want to leverage the benefit of connection pooling (reusing already created connections) instead of opening and closing a connection for every request.
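This is the per-request variant that works regardless of idle time, presumably because each call gets a fresh connection and therefore a fresh session handle. The URL and query are placeholders:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class PerRequestFetch {
    // Placeholder URL; same form as in the pooled setup above.
    private static final String URL =
        "jdbc:databricks://<workspace-host>:443;httpPath=<http-path>;"
            + "AuthMech=3;UID=token;PWD=<personal-access-token>";

    // Opens a fresh connection per call and closes it when done,
    // so no session handle ever sits idle long enough to expire.
    public static void fetch() throws SQLException {
        try (Connection conn = DriverManager.getConnection(URL);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            while (rs.next()) {
                System.out.println(rs.getInt(1));
            }
        }
    }
}
```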
I still think this error needs to be handled inside the Databricks connector JAR itself.