Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Cluster Issue

NishantTiwari
New Contributor II

Driver: c5.4xlarge · Workers: c5.4xlarge · 8 workers · On-demand and Spot · fall back to On-demand · DBR: 7.3 LTS (includes Apache Spark 3.0.1, Scala 2.12) · us-east-1c
In my Databricks job there is a step, NDS download, which we use to download files from a 3rd-party portal. The cluster that supports this step is now being deprecated (cluster details above).
I have tried every solution I could find, but I am still at square one. Mainly two types of error come up:
1. SSLError(398, '[SSL: CA_MD_TOO_WEAK] ca md too weak (_ssl.c:3900)')
2. SSLError(SSLError(399, '[SSL: EE_KEY_TOO_SMALL] ee key too small (_ssl.c:3900)'))

Looking for solution, please help!


5 REPLIES

saurabh18cs
Honored Contributor III

Hi @NishantTiwari The SSL handshake is failing when you try to download files from the 3rd-party portal because you are using DBR 7.3, which is deprecated and has weaker TLS defaults; you also cannot upgrade the TLS settings inside DBR 7.3.

Upgrade your cluster runtime to 13.3 LTS or later.

MoJaMa
Databricks Employee

Nishant, please note that towards the end of February, 7.3 will stop working entirely as it reaches EoL, not just EoS; it has been EoS since 2023. We have been emailing customers about the EoL for several months now.

Please review this table carefully. You have to move to at least 10.4 LTS.

https://docs.databricks.com/aws/en/archive/runtime-release-notes/#end-of-support-history

NishantTiwari
New Contributor II

Thanks @saurabh18cs & @MoJaMa! I know it's EoL; now I'm looking for a solution. @MoJaMa, can you please help me add an init script to the cluster (workspace source) that will make this work? Also, I have tried runtime versions 14.3 and later and I'm facing the same issue.

 

saurabh18cs
Honored Contributor III

Hi @NishantTiwari then what I see is that the problem is not on the client side but on the server side. They need to update their TLS certificate; ask them to use modern TLS 1.2 or higher.

SteveOstrowski
Databricks Employee

Hi @NishantTiwari,

I see you have already upgraded to DBR 14.3+ but are still hitting the same SSL errors. That makes sense, and here is why: the two errors you are seeing point to the 3rd party server using weak or outdated SSL certificates, not an issue on your Databricks cluster itself.

CA_MD_TOO_WEAK - The server's CA certificate uses a weak message digest (e.g., MD5 or short SHA-1)
EE_KEY_TOO_SMALL - The server's end-entity certificate uses a key that is too short (e.g., 1024-bit RSA)

Newer Databricks Runtimes ship with updated versions of OpenSSL that enforce stricter security defaults. So upgrading the runtime actually makes OpenSSL MORE strict, which is why you continue to see these errors even on 14.3+.
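If you want to confirm this diagnosis from a notebook before changing any cluster settings, you can attempt the handshake once at OpenSSL's modern default security level and once at the relaxed level. This is a rough sketch, not a Databricks API; `probe_tls` is a hypothetical helper, and the hostname you pass would be the portal's.

```python
import socket
import ssl

def probe_tls(host, port=443, timeout=10.0):
    """Try a TLS handshake at SECLEVEL=2 (modern default), then SECLEVEL=1.

    If the handshake only succeeds at SECLEVEL=1, the server's certificate
    chain is the problem (weak digest or short key), not your cluster.
    """
    last_error = None
    for seclevel in (2, 1):
        ctx = ssl.create_default_context()
        ctx.set_ciphers(f"DEFAULT:@SECLEVEL={seclevel}")
        try:
            with socket.create_connection((host, port), timeout=timeout) as sock:
                with ctx.wrap_socket(sock, server_hostname=host):
                    return f"handshake OK at SECLEVEL={seclevel}"
        except (ssl.SSLError, OSError) as exc:
            last_error = exc
    return f"handshake failed even at SECLEVEL=1: {last_error!r}"

print(probe_tls("your-third-party-url.com"))
```

If this reports success only at SECLEVEL=1, the workarounds below should unblock you; if it fails at both levels, the issue is something else (firewall, proxy, expired certificate).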

THE RECOMMENDED LONG-TERM FIX

The best solution is to ask the 3rd party portal to update their TLS certificates to use modern standards (at minimum TLS 1.2 with 2048-bit RSA keys and SHA-256 or stronger). This is the correct fix because their certificates do not meet current security standards.

A WORKAROUND USING AN INIT SCRIPT

If you cannot get the 3rd party to update their certificates right away, you can temporarily lower the OpenSSL security level on your cluster using an init script. This allows the connection to succeed while the 3rd party works on upgrading their certificates.

Step 1: Create the init script file. In your Databricks workspace, create a new file (for example at /Workspace/Users/your-email/init-scripts/lower-ssl-security.sh) with this content:

#!/bin/bash
# Temporarily lower OpenSSL security level to allow weak certificates
# from legacy third-party servers.

OPENSSL_CONF_FILE="/etc/ssl/openssl_custom.cnf"

cat > "$OPENSSL_CONF_FILE" << 'EOF'
openssl_conf = default_conf

[default_conf]
ssl_conf = ssl_sect

[ssl_sect]
system_default = system_default_sect

[system_default_sect]
CipherString = DEFAULT:@SECLEVEL=1
EOF

echo "export OPENSSL_CONF=$OPENSSL_CONF_FILE" >> /etc/environment
echo "export OPENSSL_CONF=$OPENSSL_CONF_FILE" >> /databricks/spark/conf/spark-env.sh

Step 2: Attach the init script to your cluster.
1. Go to your cluster configuration page
2. Enable the Advanced toggle
3. Click the Init Scripts tab
4. Select "Workspace" as the source
5. Enter the path to your script (e.g., /Workspace/Users/your-email/init-scripts/lower-ssl-security.sh)
6. Click Add, then restart the cluster
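After the restart, you can sanity-check from a notebook cell that the variable actually reached the Python process (the expected path below assumes you used the script exactly as written above):

```python
import os

# The init script exports OPENSSL_CONF via /etc/environment and
# spark-env.sh; if the cluster picked it up, this prints the custom path
# (e.g. /etc/ssl/openssl_custom.cnf in the example script).
conf_path = os.environ.get("OPENSSL_CONF")
print(conf_path or "OPENSSL_CONF is not set - check the cluster's init script logs")
```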

AN ALTERNATIVE PYTHON-LEVEL WORKAROUND

If you only need this for a specific notebook or job step rather than the whole cluster, you can also set the environment variable directly in your Python code before making the HTTPS call:

import os
import ssl

import requests
from requests.adapters import HTTPAdapter

# Point OpenSSL away from any system-wide config for this session only
os.environ['OPENSSL_CONF'] = '/dev/null'

class SSLAdapter(HTTPAdapter):
    """Transport adapter that lowers the OpenSSL security level."""

    def init_poolmanager(self, *args, **kwargs):
        # Custom SSL context that tolerates the server's weak certificates
        ctx = ssl.create_default_context()
        ctx.set_ciphers('DEFAULT:@SECLEVEL=1')
        kwargs['ssl_context'] = ctx
        return super().init_poolmanager(*args, **kwargs)

session = requests.Session()
session.mount('https://', SSLAdapter())
response = session.get('https://your-third-party-url.com/download')
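If you want to keep the relaxed settings scoped as tightly as possible, you can mount the adapter only for the portal's own URL prefix instead of all of https:// traffic; requests picks the longest matching prefix, so every other HTTPS call keeps the default, stricter verification. The hostname below is a placeholder for the portal's real one.

```python
import ssl

import requests
from requests.adapters import HTTPAdapter

class SSLAdapter(HTTPAdapter):
    """Transport adapter that lowers the OpenSSL security level."""

    def init_poolmanager(self, *args, **kwargs):
        ctx = ssl.create_default_context()
        ctx.set_ciphers("DEFAULT:@SECLEVEL=1")
        kwargs["ssl_context"] = ctx
        return super().init_poolmanager(*args, **kwargs)

session = requests.Session()
# Only requests under this prefix use the relaxed context; everything
# else on the session keeps the default adapter.
session.mount("https://legacy-portal.example.com/", SSLAdapter())
```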

IMPORTANT NOTES

- Lowering the SSL security level does reduce the security of those connections, so only apply this to the specific cluster or session that needs it.
- This should be treated as a temporary measure while the 3rd party upgrades their certificates.
- Make sure you are running DBR 13.3 LTS or later since DBR 7.3 has reached end-of-life as of February 2026 and will no longer launch clusters.

For reference on configuring init scripts:
https://docs.databricks.com/aws/en/init-scripts/cluster-scoped.html

For the Databricks Runtime support schedule:
https://docs.databricks.com/aws/en/archive/runtime-release-notes/

Hope this helps get your NDS download step working again!

* This reply used an agent system I built to research and draft this response based on the wide set of documentation I have available and previous memory. I personally review the draft for any obvious issues and for monitoring system reliability and update it when I detect any drift, but there is still a small chance that something is inaccurate, especially if you are experimenting with brand new features.