Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Unable to overwrite table in Azure SQL DB

Kingston
New Contributor II

Hi,

I have a requirement to read a table from Azure SQL DB, apply transformations to it in Azure Databricks, and overwrite the updated table back to Azure SQL DB. But due to PySpark's lazy evaluation I am unable to overwrite the table in Azure SQL DB: the write just truncates the table instead of replacing the data.
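For context, the failure mode here can be illustrated in plain Python (an analogy only, not Spark code): a lazily-consumed read sees the table as it exists when the read finally runs, not when it was declared.

```python
# Pure-Python analogy of the PySpark lazy-evaluation trap; no Spark needed.
table = [1, 2, 3]

# "Read" lazily: like a Spark DataFrame, nothing is pulled from `table` yet.
lazy_read = (row * 10 for row in table)

# "Overwrite": the table is truncated before the lazy read has executed.
table.clear()

# Only now does the read actually run, and it sees an empty table.
print(list(lazy_read))  # → []

# The fix mirrors .cache() plus an action in Spark: materialize first.
table = [1, 2, 3]
eager_read = [row * 10 for row in table]  # forced to run immediately
table.clear()
print(eager_read)  # → [10, 20, 30]
```

In Spark terms: when you read `tbl1` over JDBC and then write back to `tbl1` with `mode="overwrite"`, the overwrite empties the table before the lazy read has pulled any rows, so the job writes nothing back.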

3 REPLIES

Kingston
New Contributor II

Using the code below for reference:

Kingston
New Contributor II
 

YuliyanBogdanov
New Contributor III

 

Hi @Kingston 

Make sure the user you authenticate with through JDBC has the proper permissions on the SQL Server (e.g. database reader / database writer). Then your approach can go in two directions: push the data from Databricks to SQL Server and trigger the processing there by running a stored procedure from the Databricks side, or, the better option, pull the data from SQL Server, transform it in Databricks, and push it back, truncating the original table in the process. Below is some example code; let me know if this helps.

from pyspark.sql import SparkSession

# Initialize SparkSession
spark = (
    SparkSession.builder
    .appName("PySpark SQL Server Example")
    .getOrCreate()
)

# SQL Server JDBC URL and connection properties
url = "jdbc:sqlserver://<server_name>:<port>;databaseName=<database_name>"
properties = {
    "user": "<username>",
    "password": "<password>",
    "driver": "com.microsoft.sqlserver.jdbc.SQLServerDriver"
}

# Assuming `df` is the DataFrame you want to merge into the SQL Server table

# Load the target table from SQL Server into a DataFrame
tbl1_df = spark.read.jdbc(url=url, table="tbl1", properties=properties)

# Perform the merge operation. This is a simplistic approach.
# You would need to customize this logic based on your actual merge/upsert rules.
merged_df = df.union(tbl1_df).distinct()

# Materialize the result before overwriting. Because Spark evaluates lazily,
# writing in "overwrite" mode would otherwise empty tbl1 before the read
# above has actually pulled its rows, and you would write back an empty table.
merged_df = merged_df.cache()
merged_df.count()

# Write the merged DataFrame back to the SQL Server table, overwriting the existing data.
# Note: this replaces the table content. If you need a more nuanced merge strategy,
# you might need to perform the merge logic directly in SQL Server.
merged_df.write.jdbc(url=url, table="tbl1", mode="overwrite", properties=properties)
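One more note on the write itself: with `mode="overwrite"`, the Spark JDBC writer drops and recreates the target table by default, which loses indexes and constraints. Spark's JDBC `truncate` option keeps the table definition and only deletes the rows. A minimal sketch (connection values are placeholders, and the actual write call is shown commented out since it needs a live cluster):

```python
url = "jdbc:sqlserver://<server_name>:<port>;databaseName=<database_name>"

# truncate=true makes overwrite issue TRUNCATE TABLE instead of DROP + CREATE,
# preserving the existing table schema, indexes, and grants. PySpark forwards
# entries of `properties` to the JDBC writer as options.
properties = {
    "user": "<username>",
    "password": "<password>",
    "driver": "com.microsoft.sqlserver.jdbc.SQLServerDriver",
    "truncate": "true",
}

# merged_df.write.jdbc(url=url, table="tbl1", mode="overwrite", properties=properties)
```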
