Hi @Kingston
Make sure the user you authenticate with through JDBC has the proper permissions on the SQL Server database (e.g. the db_datareader / db_datawriter roles). From there your approach can go in two directions: push the data from Databricks to SQL Server and trigger the processing there by calling a stored procedure from the Databricks side, or, the better option, pull the data from SQL Server, transform it in Databricks, and write it back, replacing the contents of the original table in the process. Below is example code for the second option; let me know if this helps.
from pyspark.sql import SparkSession

# Initialize the SparkSession
spark = SparkSession.builder \
    .appName("PySpark SQL Server Example") \
    .getOrCreate()

# SQL Server JDBC connection details
url = "jdbc:sqlserver://<server_name>:<port>;databaseName=<database_name>"
properties = {
    "user": "<username>",
    "password": "<password>",
    "driver": "com.microsoft.sqlserver.jdbc.SQLServerDriver"
}

# Assuming `df` is the DataFrame you want to merge into the SQL Server table

# Load the target table from SQL Server into a DataFrame
tbl1_df = spark.read.jdbc(url=url, table="tbl1", properties=properties)

# Perform the merge operation. This is a simplistic approach (append + de-duplicate)
# and assumes both DataFrames share the same schema. Customize this logic to match
# your actual merge/upsert rules.
merged_df = df.union(tbl1_df).distinct()

# Materialize the result before writing. Because Spark evaluates lazily, overwriting
# the same table that tbl1_df was read from would otherwise truncate the source
# before the read actually happens.
merged_df = merged_df.cache()
merged_df.count()

# Write the merged DataFrame back to the SQL Server table, overwriting the existing data.
# Note: this replaces the table content. If you need a more nuanced merge strategy,
# perform the merge logic directly in SQL Server (see the sketch below).
merged_df.write.jdbc(url=url, table="tbl1", mode="overwrite", properties=properties)
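If you instead go with the first approach, or need a proper upsert rather than a full overwrite, one option is to push the new rows into a staging table and let SQL Server do the merge. Below is a minimal sketch under some assumptions: a hypothetical staging table tbl1_staging and a hypothetical stored procedure dbo.usp_merge_tbl1 that you would create on the SQL Server side to MERGE the staging rows into tbl1.

# Push the incoming rows to a staging table (hypothetical name: tbl1_staging)
df.write.jdbc(url=url, table="tbl1_staging", mode="overwrite", properties=properties)

# Call the stored procedure that performs the MERGE server-side. The JVM's
# java.sql.DriverManager is reachable through py4j, so no extra Python driver is needed,
# as long as the SQL Server JDBC driver is available on the cluster.
driver_manager = spark.sparkContext._jvm.java.sql.DriverManager
conn = driver_manager.getConnection(url, properties["user"], properties["password"])
try:
    # dbo.usp_merge_tbl1 is a hypothetical procedure you define in SQL Server
    stmt = conn.prepareCall("{call dbo.usp_merge_tbl1}")
    stmt.execute()
    stmt.close()
finally:
    conn.close()

This keeps the transformation in Databricks while the transactional MERGE and any truncation happen on the SQL Server side.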