04-24-2025 01:53 PM
Hello, I'm having this issue with job computes:
The snippet of the code is as follows:
if self.conf["persist_to_sql"]:
    # persist to sql
    df_parsed.write.format(
        "com.microsoft.sqlserver.jdbc.spark"
    ).option(
        "url", self.get_sql_connection_string()
    ).option(
        "dbtable", self.conf["sql_tables"]["report"]
    ).option(
        "driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    ).mode(
        "append"
    ).save()  # <-- the traceback points at this call
    self.logger.info(
        f"Saved to {self.conf['sql_tables']['report']}"
    )
How can I install the driver on this job compute?
This is the driver I want to install: spark-mssql-connector_2.12-1.4.0-BETA.jar
I tried:
Thanks
04-28-2025 03:31 PM
Up!
04-28-2025 08:07 PM
To install the spark-mssql-connector_2.12-1.4.0-BETA.jar driver for Azure Databricks job clusters, use one of these methods:
Add these Maven coordinates to your cluster/library configuration:
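Based on the jar name in the question, the coordinates would be the following (worth verifying against Maven Central):
com.microsoft.azure:spark-mssql-connector_2.12:1.4.0-BETA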
Steps:
Navigate to your Databricks cluster settings.
Under Libraries, select Install New → Maven.
Paste the coordinates above and install.
For job clusters, specify libraries directly in the job settings:
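A minimal sketch of the relevant fragment of a job definition, assuming the Databricks Jobs API JSON format (exact field placement can differ between the UI and API versions):

{
  "libraries": [
    {
      "maven": {
        "coordinates": "com.microsoft.azure:spark-mssql-connector_2.12:1.4.0-BETA"
      }
    }
  ]
}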
This ensures dependencies are included when the job cluster starts
If using an init script, ensure both the connector and JDBC driver are copied:
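A sketch of such a script, assuming both JARs were uploaded to /dbfs/FileStore/jars (the paths and the mssql-jdbc version below are placeholders, not confirmed values):

#!/bin/bash
# Copy the connector and its JDBC dependency onto the cluster classpath
cp /dbfs/FileStore/jars/spark-mssql-connector_2.12-1.4.0-BETA.jar /databricks/jars/
cp /dbfs/FileStore/jars/mssql-jdbc-12.4.2.jre8.jar /databricks/jars/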
Upload the JARs to DBFS first
Spark Version Compatibility: Confirm your Databricks runtime aligns with the connector's Spark 3.4 requirement
JDBC Dependency: The mssql-jdbc driver is mandatory to resolve NoClassDefFoundError
Avoid Manual DBFS Uploads: Maven installation is preferred for dependency management
For Azure Data Factory pipelines, configure the libraries in the Databricks linked service's Advanced settings under "Libraries"
04-30-2025 11:49 PM - edited 04-30-2025 11:52 PM
For a job compute, you would have to go the init script route.
Can you please highlight the cause of the failure of the library installation via the init script?
05-05-2025 06:56 AM - edited 05-05-2025 06:58 AM
Hello @NandiniN,
In the Shared folder I created an init script for the job compute, defined as shown:
#!/bin/bash
echo "=====> Installing MSSQL Spark Connector"
cp /dbfs/Workspace/Shared/drivers/spark-mssql-connector_2.12-1.4.0-BETA.jar /databricks/jars/
echo "=====> Connector copied!"
There isn't anything in the logs, and the workflow is still failing due to the missing driver. If you could give me any direction for trying to solve this, it would be much appreciated.
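(A minimal way to narrow this down, assuming a notebook can be attached to the same compute: check whether the JAR actually reached /databricks/jars, which tells you whether the script ran and whether the source path resolved. Note the source path in the script is an assumption worth testing; files in the workspace tree are typically visible on the driver under /Workspace/Shared/..., not under /dbfs/Workspace/....)

import os
# List MSSQL-related jars on the driver's classpath directory;
# an empty list means the init script's cp never ran or the source path was wrong
print([j for j in os.listdir("/databricks/jars") if "mssql" in j.lower()])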
05-06-2025 02:32 PM
Up!
05-20-2025 07:53 AM
@NandiniN please check
05-27-2025 01:06 PM
@NandiniN please check