Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Not able to connect to Neo4j AuraDB from Databricks

saab123
New Contributor II

I am trying to connect to a Neo4j AuraDB instance (f9374927).

 

  1. Created a free Professional instance of Neo4j AuraDB. I am able to connect to this instance and add nodes and relationships.

  2. Created a Databricks shared cluster on 14.3 LTS (includes Apache Spark 3.5.0, Scala 2.12), added the Maven coordinates org.neo4j:neo4j-connector-apache-spark_2.12:5.3.6_for_spark_3, and ran a write along the lines of the sketch after the init script below.

 

  3. I am getting a can't-connect error. After reading this support thread - https://community.databricks.com/t5/data-engineering/how-to-connect-neo4j-aura-to-a-cluster/td-p/156..., I added the following init script to my cluster configuration, but the cluster won't start after adding it.

#!/bin/bash
echo "Configuring Databricks to connect to Neo4j"
# Ensure Java security settings are correctly configured (if needed for Neo4j Aura).
PROPERTIES_FILE="/databricks/spark/dbconf/java/extra.security"
DISABLED_ALGOS="SSLv3, RC4, DES, MD5withRSA, DH keySize < 1024, EC keySize < 224, 3DES_EDE_CBC, anon, NULL"

if [[ -f "${PROPERTIES_FILE}" ]]; then
  echo "Setting jdk.tls.disabledAlgorithms..."
  echo "jdk.tls.disabledAlgorithms=${DISABLED_ALGOS}" | tee -a "${PROPERTIES_FILE}"
else
  echo "ERROR: Failed to find ${PROPERTIES_FILE}"
  exit 1
fi
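
The write that fails is along these lines (the DataFrame, node label, and credentials are placeholders; the real code follows the same pattern):

# Placeholder DataFrame; the real job writes our actual data.
df = spark.createDataFrame([("Alice",), ("Bob",)], ["name"])

# Append mode creates a :Person node per row; save() is where the error is raised.
(df.write
    .format("org.neo4j.spark.DataSource")
    .mode("Append")
    .option("url", "neo4j+s://f9374927.databases.neo4j.io")
    .option("authentication.basic.username", "<username>")
    .option("authentication.basic.password", "<password>")
    .option("labels", ":Person")
    .save())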

 

 

 

Here are the console and driver logs. Please help resolve this issue.

Console Logs:

 

Py4JJavaError: An error occurred while calling o571.save. : org.neo4j.driver.exceptions.ServiceUnavailableException: Could not perform discovery for database '<default database>'. No routing server available.

 

Driver Logs:

25/04/07 18:22:44 WARN DynamicSparkConfContextImpl: Ignored update because id 1744046542240 < 1744046542240; source: CONFIG_FILE

25/04/07 18:22:44 INFO DatabricksILoop$: Received SAFEr configs with version 1744046542240

25/04/07 18:23:05 WARN RediscoveryImpl: Received a recoverable discovery error with server 'f9374927.databases.neo4j.io(54.146.202.10):7687', will continue discovery with other routing servers if available. Complete failure is reported separately from this entry.

25/04/07 18:23:36 WARN RediscoveryImpl: Received a recoverable discovery error with server 'f9374927.databases.neo4j.io(54.160.21.207):7687', will continue discovery with other routing servers if available. Complete failure is reported separately from this entry.

25/04/07 18:23:44 WARN DynamicSparkConfContextImpl: Ignored update because id 1744046542240 < 1744046542240; source: CONFIG_FILE

25/04/07 18:23:44 INFO DatabricksILoop$: Received SAFEr configs with version 1744046542240

25/04/07 18:23:48 WARN JupyterKernelListener: Received Jupyter debug message with unknown command: null

25/04/07 18:24:06 WARN RediscoveryImpl: Received a recoverable discovery error with server 'f9374927.databases.neo4j.io(23.23.29.38):7687', will continue discovery with other routing servers if available. Complete failure is reported separately from this entry.

25/04/07 18:24:06 INFO ProgressReporter$: Removed result fetcher for 1744047307476_5211541610516211935_0475a19e17354317b2232a91e26be872

25/04/07 18:24:06 INFO PresignedUrlClientUtils$: FS_OP_CREATE FILE[https://databricks-mda-workspace-cloud-storage-bucket.s3.us-west-2.amazonaws.com/oregon-prod/1637039...] Presigned URL: Started uploading stream using AwsPresignedUrl

1 REPLY

mark_ott
Databricks Employee

The connection failure between your Databricks cluster and the Neo4j AuraDB instance (f9374927), surfacing as ServiceUnavailableException: No routing server available, is tied to network-level SSL configuration and connectivity rather than incorrect code or driver usage.

Key Cause

Databricks clusters cannot establish a secure Bolt connection (neo4j+s://) to Neo4j Aura when Java TLS restrictions or firewall egress settings get in the way. AuraDB only accepts encrypted traffic, so any Java TLS misconfiguration in Databricks produces exactly the "No routing server available" discovery errors shown in your logs. The cluster init script you added fails because it attempts to modify a system-managed Java configuration file that Databricks restricts.
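
To confirm this independently of Spark, a quick check with the plain Neo4j Python driver exercises the same TLS handshake and routing discovery. A minimal sketch, assuming the neo4j package is installed in the notebook (%pip install neo4j):

python
# Probes Aura's routing discovery directly, outside the Spark connector.
from neo4j import GraphDatabase

uri = "neo4j+s://f9374927.databases.neo4j.io"
driver = GraphDatabase.driver(uri, auth=("<username>", "<password>"))
# Raises neo4j.exceptions.ServiceUnavailable on the same discovery failure.
driver.verify_connectivity()
print("Bolt discovery and TLS handshake succeeded")
driver.close()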

Recommended Fix

  1. Remove the init script
    Delete the custom init script that modifies /databricks/spark/dbconf/java/extra.security. It prevents your cluster from starting because that path is read-only in shared runtime environments.

  2. Use the encrypted Neo4j URI
    Always use the neo4j+s:// protocol for Aura (secure Bolt with verified certificate). Note that the connector also requires a labels, query, or relationship option to select what to read; Person below is a placeholder:

    python
    (spark.read
        .format("org.neo4j.spark.DataSource")
        .option("url", "neo4j+s://f9374927.databases.neo4j.io")
        .option("authentication.basic.username", "<your_username>")
        .option("authentication.basic.password", "<your_password>")
        .option("labels", "Person")
        .load())

    If your environment lacks the root CA bundle to validate certificates, use:

    text
    neo4j+ssc://f9374927.databases.neo4j.io

    This skips strict certificate validation while keeping TLS active.

  3. Validate egress network access
    AuraDB runs on specific IPs. Ensure your Databricks workspace allows outbound traffic on ports 7687 (Bolt) and 443 (HTTPS) to *.databases.neo4j.io. Check your network egress rules or contact Databricks support to whitelist Aura endpoints if necessary; a quick reachability probe is sketched after this list.

  4. Set Spark configs explicitly
    Before using the connector, add:

    python
    spark.conf.set("spark.neo4j.bolt.url", "neo4j+s://f9374927.databases.neo4j.io")
    spark.conf.set("spark.neo4j.bolt.user", "<username>")
    spark.conf.set("spark.neo4j.bolt.password", "<password>")
    spark.conf.set("spark.neo4j.connection.max.retry.time", "60s")
    spark.conf.set("spark.neo4j.connection.timeout", "30s")
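
As a basic egress test for step 3, a plain TCP probe from the driver node shows whether port 7687 is reachable at all. It uses only the Python standard library and checks reachability, not the TLS handshake itself:

python
import socket

# TCP reachability probe for the Aura Bolt endpoint; confirms egress on
# port 7687 but does not exercise TLS or Bolt routing discovery.
host, port = "f9374927.databases.neo4j.io", 7687
try:
    with socket.create_connection((host, port), timeout=10):
        print(f"TCP connection to {host}:{port} succeeded")
except OSError as exc:
    print(f"TCP connection failed: {exc}")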

Summary

The root problem comes from Databricks SSL environment limitations, not Neo4j itself. Removing the init script, switching to neo4j+s:// (or neo4j+ssc:// if needed), and verifying egress access typically resolves this exact ServiceUnavailable discovery failure. If the problem persists, run the following in a %sh cell within Databricks to confirm that TLS negotiation succeeds:

text
%sh
openssl s_client -connect f9374927.databases.neo4j.io:7687
