
How do I specify column types when writing to an MSSQL server using the JDBC driver?

jonathan-dufaul
Valued Contributor

I have a PySpark DataFrame that I'm writing to an on-prem MSSQL server as a stopgap while we convert our data warehousing jobs over to Databricks.

The processes that use those tables on the on-prem server rely on the tables keeping an identical structure. For instance, I have a structure like the one below.

SOURCE       VARCHAR(3)
LOCATION_NO  VARCHAR(3)
SKU_NO       LONG
CASE_SKU_NO  LONG
ITEM_TYPE    VARCHAR(3)
SYSTEM       VARCHAR(MAX)
PRICE        DOUBLE

When I've just done a vanilla write, like this:

  (my_dataframe.write.format("jdbc") 
    .option("url",sqlsUrl) 
    .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver") 
    .option("dbtable", table_name )
    .option("user", username) 
    .option("password", password) 
    .save(mode=mode)
  )

I get a different structure, the biggest difference being that VARCHAR becomes NVARCHAR(MAX). Since the jobs on the server join these tables to other tables that have VARCHAR columns, jobs that used to take minutes now take hours.

Ideally I'd like to specify the column types myself and then save the table.
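
One documented hook for this is the JDBC writer's createTableColumnTypes option, which overrides the default column types Spark picks when it creates the table. The caveat is that the values must be valid Spark SQL data types, so VARCHAR(3) works but SQL Server-specific types like VARCHAR(MAX) cannot be expressed; the VARCHAR(8000) below is an assumed stand-in, not an exact equivalent. A minimal sketch, reusing the variables from the snippet above:

  # Override only the string columns that were coming out as NVARCHAR(MAX);
  # columns not listed here keep Spark's default type mapping.
  column_types = (
      "SOURCE VARCHAR(3), LOCATION_NO VARCHAR(3), "
      "ITEM_TYPE VARCHAR(3), SYSTEM VARCHAR(8000)"
  )

  (my_dataframe.write.format("jdbc")
      .option("url", sqlsUrl)
      .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
      .option("dbtable", table_name)
      .option("user", username)
      .option("password", password)
      .option("createTableColumnTypes", column_types)
      .mode(mode)
      .save())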

Follow-up question: am I using the most recent/correct JDBC driver?

I hope that makes sense.
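
A different workaround, sketched under the assumption that the target table can be created up front: define the table on the SQL Server side with the exact types (including VARCHAR(MAX), which the option above can't express), then have Spark load into the existing table. With overwrite mode plus the documented truncate option, Spark truncates the table instead of dropping and recreating it, so the hand-written types survive across runs. The BIGINT/FLOAT mappings for LONG/DOUBLE, and the my_table name, are assumptions for illustration.

  # The table is created once on the SQL Server side, for example:
  #
  #   CREATE TABLE my_table (          -- hypothetical name; matches table_name
  #       SOURCE       VARCHAR(3),
  #       LOCATION_NO  VARCHAR(3),
  #       SKU_NO       BIGINT,         -- assumed SQL Server type for LONG
  #       CASE_SKU_NO  BIGINT,
  #       ITEM_TYPE    VARCHAR(3),
  #       [SYSTEM]     VARCHAR(MAX),
  #       PRICE        FLOAT           -- SQL Server FLOAT is double precision
  #   );
  #
  # Spark then truncates and refills the table rather than recreating it.
  (my_dataframe.write.format("jdbc")
      .option("url", sqlsUrl)
      .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
      .option("dbtable", table_name)
      .option("user", username)
      .option("password", password)
      .option("truncate", "true")  # only honored with overwrite mode
      .mode("overwrite")
      .save())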

1 REPLY

dasanro
New Contributor II

It's happening to me too!

Did you find any solution, @jonathan-dufaul?

Thanks!!
