I have a PySpark dataframe that I'm writing to an on-prem MSSQL server; it's a stopgap while we convert our data warehousing jobs over to Databricks.
The processes that use those tables on the on-prem server rely on the tables keeping an identical structure. For instance, I have a structure like the one below.
SOURCE VARCHAR(3)
LOCATION_NO VARCHAR(3)
SKU_NO LONG
CASE_SKU_NO LONG
ITEM_TYPE VARCHAR(3)
SYSTEM VARCHAR(MAX)
PRICE DOUBLE
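For what it's worth, on the Spark side the dataframe schema looks roughly like this (paraphrasing printSchema() from memory; all the VARCHAR columns are plain strings in Spark, so I assume the writer has no length information to work with):

my_dataframe.printSchema()
# root
#  |-- SOURCE: string (nullable = true)
#  |-- LOCATION_NO: string (nullable = true)
#  |-- SKU_NO: long (nullable = true)
#  |-- CASE_SKU_NO: long (nullable = true)
#  |-- ITEM_TYPE: string (nullable = true)
#  |-- SYSTEM: string (nullable = true)
#  |-- PRICE: double (nullable = true)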
When I've just done a vanilla write like this:
(my_dataframe.write.format("jdbc")
    .option("url", sqlsUrl)
    .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
    .option("dbtable", table_name)
    .option("user", username)
    .option("password", password)
    .save(mode=mode)
)
I get a different structure on the server; the biggest difference is that VARCHAR columns come out as NVARCHAR(MAX). Because the jobs on the server join these tables to other tables with VARCHAR columns, the implicit conversion means jobs that used to take minutes now take hours.
Ideally I'd like to specify the column types myself and then save the table.
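Digging through the Spark JDBC docs, createTableColumnTypes looks like it might be the option for this, so I imagine something along these lines (untested sketch; I've only overridden the string columns, and since the docs say the types must be valid Spark SQL types I'm not sure VARCHAR(MAX) is accepted, so I've guessed at VARCHAR(8000) for SYSTEM):

(my_dataframe.write.format("jdbc")
    .option("url", sqlsUrl)
    .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
    .option("dbtable", table_name)
    .option("user", username)
    .option("password", password)
    # createTableColumnTypes should only take effect when Spark creates the table;
    # columns not listed here would keep the writer's default types.
    .option("createTableColumnTypes",
            "SOURCE VARCHAR(3), LOCATION_NO VARCHAR(3), "
            "ITEM_TYPE VARCHAR(3), SYSTEM VARCHAR(8000)")
    .save(mode=mode)
)

Is that the right approach, or is there a cleaner way to pin the table's DDL?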
Follow-up question: am I using the most recent/correct JDBC driver?
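This is roughly how I've been checking which driver version the JVM actually sees (a py4j sketch; the class name is the same one I pass to the writer above, and I haven't verified this is the cleanest way to check):

# Ask the JVM for the SQL Server driver class and print its version.
# Assumes the mssql-jdbc jar is already on the Spark driver's classpath.
drv_class = spark._jvm.java.lang.Class.forName(
    "com.microsoft.sqlserver.jdbc.SQLServerDriver")
drv = drv_class.newInstance()
print(drv.getMajorVersion(), drv.getMinorVersion())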
I hope that makes sense.