a week ago - last edited a week ago
Salutations,
I'm using SDP for an ETL that extracts data from HANA and puts it into Unity Catalog. I defined a Policy with the needed driver:
But I get this error:
An error occurred while calling o1013.load. : java.lang.ClassNotFoundException: com.sap.db.jdbc.Driver at java.base/
How can I install the driver?
Mahalo
Wednesday
Here are the best ways to install the driver, depending on your specific environment:
1. Recommended: Use Unity Catalog Volumes
For modern Databricks runtimes (13.3 LTS and above), storing JARs in a Unity Catalog Volume is the standard for secure, governed access.
Step 1: Create a volume and upload the ngdbc.jar file to it (you can do this via Catalog Explorer).
Step 2: When defining your UC Connection, include the path to this JAR in the java_dependencies parameter.
CREATE CONNECTION hana_connection TYPE JDBC
ENVIRONMENT (
  java_dependencies '["/Volumes/your_catalog/your_schema/your_volume/ngdbc.jar"]'
)
OPTIONS (
  url 'jdbc:sap://<hana_host>:<port>',
  user '<user>',
  password '<password>'
);
Step 3: Ensure the cluster or job has READ VOLUME permissions on that specific volume.
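Once the JAR is available to the compute, a JDBC read against HANA can be sketched as follows. This is a minimal sketch, not the exact SDP pipeline code: the host, port, credentials, and table are placeholders, and `hana_jdbc_options` is a hypothetical helper used here only to keep the options in one place.

```python
def hana_jdbc_options(host: str, port: int, user: str, password: str) -> dict:
    """Build the option map for a Spark JDBC read against SAP HANA."""
    return {
        "url": f"jdbc:sap://{host}:{port}",
        "driver": "com.sap.db.jdbc.Driver",  # the class reported missing in the error
        "user": user,
        "password": password,
    }

# On a cluster where ngdbc.jar is installed, the read itself would look like:
# df = (spark.read.format("jdbc")
#       .options(**hana_jdbc_options("<hana_host>", 30015, "<user>", "<password>"))
#       .option("dbtable", "SCHEMA.TABLE")
#       .load())
```

If the `driver` option is omitted, Spark tries to infer the driver class from the URL, which only works when the JAR is already on the classpath; specifying it explicitly makes the `ClassNotFoundException` easier to diagnose.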
2. Install as a Cluster-Scoped Library
If you are running this from a standard Job or All-Purpose cluster, you can install the driver directly to the compute resource.
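If you want to automate the cluster-scoped install rather than clicking through the UI, the Databricks Libraries API (`POST /api/2.0/libraries/install`) accepts a JAR path such as a Volume path. A minimal sketch of the request payload; `build_install_payload` is a hypothetical helper and the cluster ID and path are placeholders:

```python
def build_install_payload(cluster_id: str, jar_path: str) -> dict:
    """Request body for POST /api/2.0/libraries/install."""
    return {
        "cluster_id": cluster_id,
        # jar_path can be a Unity Catalog Volume path, e.g. /Volumes/.../ngdbc.jar
        "libraries": [{"jar": jar_path}],
    }

# Send with any HTTP client, authenticated with a workspace token:
# payload = build_install_payload("0123-456789-abcdefgh",
#                                 "/Volumes/your_catalog/your_schema/your_volume/ngdbc.jar")
```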
3. Using Databricks Asset Bundles (DABs)
If you are deploying your ETL via bundles, you must explicitly list the library in your databricks.yml task definition.
resources:
  jobs:
    hana_etl_job:
      tasks:
        - task_key: extraction_task
          libraries:
            - maven:
                coordinates: "com.sap.cloud.db.jdbc:ngdbc:2.25.9"
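If you already uploaded the driver to a Unity Catalog Volume (option 1), the bundle can reference that JAR directly instead of Maven coordinates. A sketch assuming the same placeholder Volume path:

```yaml
resources:
  jobs:
    hana_etl_job:
      tasks:
        - task_key: extraction_task
          libraries:
            - jar: "/Volumes/your_catalog/your_schema/your_volume/ngdbc.jar"
```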
Hope this works!
Thanks!
Wednesday
Thanks @osingh, working on that
Wednesday
Hello,
It is recommended that you upload libraries to source locations that support installation onto compute with standard access mode (formerly shared access mode), as this is the recommended mode for all workloads. Please refer to the documentation for best practices. Alternatively, you can also check out the JDBC Unity Catalog connection to read data from SAP.
Wednesday
I've been reading the SDP product definition and it says it natively supports loading data from SAP HANA.
How can I access SAP HANA from the SDP ETL without using the driver?
Thursday
At this time, Databricks does not offer native connectors for SAP HANA. You can find the complete list of managed connectors currently available in Databricks here.
We generally recommend beginning with SAP's own commercial tools, prioritizing SAP Business Data Cloud-based solutions. Non-SAP commercial tools should be considered only as secondary options. For more insight into SAP data extraction methods and recent SAP policy updates, see this Databricks Community blog: https://community.databricks.com/t5/technical-blog/navigating-the-sap-data-ocean-demystifying-sap-da....
In addition, SAP Databricks is available within the SAP Business Data Cloud as part of the joint partnership, as detailed in this announcement.