Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Unable to Connect to Oracle from Databricks UC Cluster (DBR 15.4) – ORA-12170 Timeout Error

SatabrataMuduli
New Contributor II

 

Hi all,

I’m trying to connect to an Oracle database from my Databricks UC cluster (DBR 15.4) using the ojdbc8.jar driver, which I’ve installed on the cluster. Here’s the code I’m using:

df = spark.read.format("jdbc")\
    .option("url", jdbc_url)\
    .option("dbtable", dbtable)\
    .option("user", username)\
    .option("password", password)\
    .option("driver", "oracle.jdbc.driver.OracleDriver")\
    .load()
However, I’m getting this error:
SparkConnectGrpcException: (java.sql.SQLTimeoutException) ORA-12170: Cannot connect. TCP connect timeout of 15000ms for host [my_host] port [my_port].
What I’ve tried:
  • Verified the JDBC driver is installed.
  • Double-checked all connection parameters.
  • I can connect to the same Oracle DB from my local machine using the same credentials.

Questions:

  • Has anyone successfully connected to Oracle from a Databricks UC cluster?
  • Are there specific network/firewall settings or configurations needed for this to work?
  • Any tips on troubleshooting this timeout error from the Databricks side?

Any help or suggestions would be greatly appreciated!

2 REPLIES

szymon_dybczak
Esteemed Contributor III

Hi @SatabrataMuduli ,

I'm quite sure this is a networking issue. You didn't provide many details about your environment, so I'll give you general advice. You cannot reach an on-premises Oracle database unless networking is explicitly configured or your database has a public IP (which is almost never the case, for security reasons).

This is more related to cloud provider configuration than to Databricks itself. Let's take Azure as an example. If your Oracle database is on-premises, you need to configure one of the following:

  • Site-to-site VPN
  • Point-to-site VPN
  • ExpressRoute

With the above configuration in place, you would also need to deploy your Databricks workspace with the VNet injection setting:

Deploy Azure Databricks in your Azure virtual network (VNet injection) - Azure Databricks

Only then would you be able to connect to your database from a Databricks cluster.
The connection from your PC probably works because you're using a corporate VPN that has access to the network where your DB resides, or because you're inside a network that has connectivity to it.

Of course, another reason you can't connect (assuming the configuration described above is already in place in your environment) is that traffic is blocked by a firewall. In hub-and-spoke architectures there's a common pattern where all outbound traffic is redirected to a central firewall, so the traffic may simply be blocked there (or by an NSG, which is also worth checking).

SteveOstrowski
Databricks Employee

Hi @SatabrataMuduli,

The ORA-12170 TCP connect timeout error is a networking issue, not a driver or credentials problem. Your Databricks cluster cannot reach the Oracle host and port within the 15-second TCP timeout window. The fact that it works from your local machine confirms your credentials and Oracle configuration are fine; the gap is network routing between the Databricks compute plane and your Oracle server.

DIAGNOSING THE CONNECTIVITY

First, confirm whether the Oracle host is reachable from your cluster at all. Run this in a notebook cell:

%sh nc -vz <your_oracle_host> <your_oracle_port>

If that times out or fails, the problem is confirmed as network-level, and no amount of JDBC tuning will help.
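If `%sh` is unavailable on your cluster (for example, in some restricted or serverless environments), the same reachability check can be done in plain Python. This is a generic sketch; the host and port shown are placeholders for your actual Oracle listener:

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 15.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        # create_connection handles DNS resolution and the TCP handshake
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers timeouts, refused connections, and unreachable routes
        return False

# Placeholders -- substitute your real Oracle host and listener port:
# tcp_reachable("my_oracle_host.example.com", 1521)
```

A `False` result here tells you the same thing as a failed `nc`: the problem is at the network layer, before JDBC is even involved.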

NETWORK CONFIGURATION

The resolution depends on where your Oracle database is hosted:

1. ON-PREMISES ORACLE
Your Databricks cluster runs in a cloud VPC/VNet that has no route to your corporate network by default. You need to establish private connectivity:

- AWS: Set up VPC peering or AWS Transit Gateway between your Databricks VPC and a VPC that has VPN/Direct Connect back to your on-premises network.
- Azure: Deploy your Databricks workspace with VNet injection, then configure ExpressRoute or a site-to-site VPN to your on-premises network.
- GCP: Use VPC peering with a shared VPC that has Cloud Interconnect or VPN tunnels.

2. ORACLE IN A CLOUD VM (SAME CLOUD PROVIDER)

- Set up VPC/VNet peering between the Databricks network and the network where Oracle runs.
- Ensure security groups, Network Security Groups (NSGs), or firewall rules allow inbound traffic on your Oracle listener port from the Databricks subnet CIDR range.

3. ORACLE WITH A PUBLIC IP

- Ensure the Oracle host's firewall or cloud security group allows inbound connections on the listener port from the Databricks NAT gateway IP addresses.
- Check your cloud provider's documentation for the NAT IPs used by your Databricks deployment.

FIREWALL AND SECURITY GROUP CHECKLIST

Even after routing is in place, firewalls can still block traffic. Verify:

- Oracle listener is bound to the correct network interface (not just localhost/127.0.0.1).
- Cloud security groups allow inbound TCP on your Oracle port from the Databricks compute subnet.
- Any on-premises firewall or network ACL permits the traffic.
- Oracle's sqlnet.ora does not have TCP.VALIDNODE_CHECKING restricting allowed client IPs.
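For reference, a restrictive valid-node setup on the Oracle server side can look like the following hypothetical `sqlnet.ora` fragment (your DBA would own this file). If entries like these exist and don't include the Databricks egress IPs, connections are rejected at the listener:

```
# sqlnet.ora (server side) -- hypothetical example
TCP.VALIDNODE_CHECKING = YES
TCP.INVITED_NODES = (10.0.0.0/24, 192.168.1.50)
```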

CONSIDER LAKEHOUSE FEDERATION

If you are on DBR 16.1+ (or a Pro/Serverless SQL warehouse version 2024.50+), Databricks supports Oracle through Lakehouse Federation. This lets you create a Unity Catalog connection and query Oracle tables directly through a foreign catalog, without manually managing JDBC drivers:

CREATE CONNECTION oracle_conn TYPE oracle
OPTIONS (
  host '<hostname>',
  port '<port>',
  user secret('scope', 'oracle_user'),
  password secret('scope', 'oracle_password')
);

CREATE FOREIGN CATALOG oracle_catalog
USING CONNECTION oracle_conn
OPTIONS (database '<oracle_service_or_sid>');

This approach handles driver management for you and integrates with Unity Catalog governance. Note that for non-TLS connections, you need Oracle server-side Native Network Encryption (NNE) enabled at the ACCEPTED level or higher.

Documentation: https://docs.databricks.com/en/query-federation/oracle.html

INCREASING THE TCP TIMEOUT (WORKAROUND, NOT A FIX)

If your network path is valid but slow (e.g., cross-region), you can increase the JDBC connection timeout as a temporary measure. Add this to your JDBC URL:

jdbc:oracle:thin:@(DESCRIPTION=(CONNECT_TIMEOUT=60)(ADDRESS=(PROTOCOL=TCP)(HOST=your_host)(PORT=your_port))(CONNECT_DATA=(SERVICE_NAME=your_service)))

This bumps the TCP timeout to 60 seconds. However, if the route does not exist, increasing the timeout will just delay the same failure.
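If you want to keep the descriptor readable in your notebook, you can assemble it in Python before handing it to your existing `spark.read` call. The helper name below is illustrative, not an Oracle or Databricks API:

```python
def oracle_descriptor_url(host: str, port: int, service: str,
                          connect_timeout_s: int = 60) -> str:
    """Build an Oracle thin-driver JDBC URL with an explicit TCP connect timeout."""
    return (
        f"jdbc:oracle:thin:@(DESCRIPTION=(CONNECT_TIMEOUT={connect_timeout_s})"
        f"(ADDRESS=(PROTOCOL=TCP)(HOST={host})(PORT={port}))"
        f"(CONNECT_DATA=(SERVICE_NAME={service})))"
    )

# Placeholders -- substitute your real host, port, and service name:
jdbc_url = oracle_descriptor_url("your_host", 1521, "your_service")
# then pass jdbc_url to spark.read.format("jdbc").option("url", jdbc_url) as before
```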

SUMMARY

The root cause is almost certainly that no network path exists between your Databricks cluster and the Oracle host. Work with your network/infrastructure team to establish VPC peering, VPN, or equivalent connectivity, then verify with the nc command above before retrying your JDBC read.

* This reply used an agent system I built to research and draft this response based on the wide set of documentation I have available and previous memory. I personally review the draft for any obvious issues and for monitoring system reliability and update it when I detect any drift, but there is still a small chance that something is inaccurate, especially if you are experimenting with brand new features.

If this answer resolves your question, could you mark it as "Accept as Solution"? That helps other users quickly find the correct fix.