How do I specify column types when writing to a MSSQL server using the JDBC driver?

jonathan-dufaul
Valued Contributor

I have a pyspark dataframe that I'm writing to an on-prem MSSQL server; it's a stopgap while we convert our data warehousing jobs over to Databricks.

The processes that use those tables on the on-prem server rely on the tables keeping an identical structure. For instance, I have a structure like the one below.

SOURCE	VARCHAR(3)
LOCATION_NO	VARCHAR(3)
SKU_NO	LONG
CASE_SKU_NO	LONG
ITEM_TYPE	VARCHAR(3)
SYSTEM	VARCHAR(MAX)
PRICE	DOUBLE

when I've just done the vanilla write below

  (my_dataframe.write.format("jdbc")
    .option("url", sqlsUrl)
    .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
    .option("dbtable", table_name)
    .option("user", username)
    .option("password", password)
    .save(mode=mode)
  )

I get a different structure, the biggest difference being that varchar becomes nvarchar(max). Since the jobs on the server join to other tables with varchar columns, jobs that used to take minutes now take hours.

Ideally I'd like to specify the schema type and then save the table.
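One way to do this (a sketch, not a verified fix for this exact setup): Spark's JDBC writer accepts a `createTableColumnTypes` option that overrides the default column types in the `CREATE TABLE` statement Spark issues. It only takes effect when Spark itself creates the table (e.g. `mode="overwrite"`), and the values must parse as Spark SQL data types, so `VARCHAR(MAX)` has to be approximated with a concrete length here. Variable names (`sqlsUrl`, `table_name`, etc.) follow the post above.

```python
# Assumed workaround: force column types via createTableColumnTypes.
# Note: this option only applies when Spark creates the table
# (e.g. mode="overwrite"); it will not alter an existing table.
column_types = (
    "SOURCE VARCHAR(3), "
    "LOCATION_NO VARCHAR(3), "
    "SKU_NO BIGINT, "
    "CASE_SKU_NO BIGINT, "
    "ITEM_TYPE VARCHAR(3), "
    "SYSTEM VARCHAR(8000), "  # VARCHAR(MAX) is not a valid Spark SQL type; a fixed length is an assumption
    "PRICE DOUBLE"
)

def write_with_explicit_types(df, url, table, user, password, mode="overwrite"):
    """Write df over JDBC, pinning the DDL column types above."""
    (df.write.format("jdbc")
        .option("url", url)
        .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
        .option("dbtable", table)
        .option("user", user)
        .option("password", password)
        .option("createTableColumnTypes", column_types)
        .save(mode=mode))
```

If the table must keep `VARCHAR(MAX)` exactly, another route is to pre-create the table in SQL Server with the exact DDL and then write with `mode="append"`, so Spark never generates the DDL at all.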

Follow-up question: am I using the most recent/correct JDBC driver?

I hope that makes sense.

1 REPLY

dasanro
New Contributor II

It's happening to me too!

Did you find any solution @jonathan-dufaul?

Thanks!!
