How do I specify column types when writing to an MSSQL server using the JDBC driver?
12-14-2022 09:50 AM
I have a PySpark dataframe that I'm writing to an on-prem MSSQL server as a stopgap while we convert our data warehousing jobs over to Databricks.
The processes that use those tables on the on-prem server rely on the tables keeping an identical structure. For instance, I have a structure like the one below.
SOURCE VARCHAR(3)
LOCATION_NO VARCHAR(3)
SKU_NO LONG
CASE_SKU_NO LONG
ITEM_TYPE VARCHAR(3)
SYSTEM VARCHAR(MAX)
PRICE DOUBLE
When I've just done the vanilla write:
(my_dataframe.write.format("jdbc")
    .option("url", sqlsUrl)
    .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
    .option("dbtable", table_name)
    .option("user", username)
    .option("password", password)
    .mode(mode)
    .save()
)
I get a different structure; the biggest difference is that varchar columns become nvarchar(max). Since the jobs on the server join those tables to other tables with varchar columns, jobs that used to take minutes now take hours.
Ideally I'd like to specify the column types and then save the table.
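Something like the sketch below is what I'm after, assuming Spark's createTableColumnTypes JDBC write option covers this (the types have to be valid Spark SQL types, so VARCHAR(MAX) is swapped for an explicit length in this example; sqlsUrl, table_name, etc. are the same placeholders as above):

# Hedged sketch: override the column types Spark puts in the generated
# CREATE TABLE, instead of its defaults (e.g. NVARCHAR(MAX) for strings).
column_types = (
    "SOURCE VARCHAR(3), LOCATION_NO VARCHAR(3), SKU_NO BIGINT, "
    "CASE_SKU_NO BIGINT, ITEM_TYPE VARCHAR(3), SYSTEM VARCHAR(4000), "
    "PRICE DOUBLE"
)
(my_dataframe.write.format("jdbc")
    .option("url", sqlsUrl)
    .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
    .option("dbtable", table_name)
    .option("user", username)
    .option("password", password)
    .option("createTableColumnTypes", column_types)
    .mode(mode)
    .save()
)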
Follow-up question: am I using the most recent/correct JDBC driver?
I hope that makes sense.
Labels:
- ColumnType
- Jdbc driver
- Pyspark Dataframe
12-05-2024 08:58 AM
I think I ended up doing a truncate and then writing with mode = "append", so the structure stayed the same.
For the truncate I had to get direct access to JDBC through the SparkContext (the spark._sc variable):
# Reach the JVM's java.sql.DriverManager through Py4J to run raw SQL.
driver_manager = spark._sc._gateway.jvm.java.sql.DriverManager
connection = driver_manager.getConnection(mssql_url, mssql_user, mssql_pass)
# Empty the table while keeping its schema; the append write then
# leaves the existing column types untouched.
connection.prepareCall("TRUNCATE TABLE my_table").execute()
connection.close()
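A Spark-native alternative that may avoid the raw JDBC call, assuming the table already exists with the right types: the truncate write option combined with overwrite mode, which makes Spark issue TRUNCATE TABLE instead of DROP/CREATE, so the server-side column types survive (mssql_url etc. are the same variables as above):

(my_dataframe.write.format("jdbc")
    .option("url", mssql_url)
    .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
    .option("dbtable", "my_table")
    .option("user", mssql_user)
    .option("password", mssql_pass)
    # With overwrite mode, truncate=true empties the existing table
    # instead of dropping and recreating it, preserving its schema.
    .option("truncate", "true")
    .mode("overwrite")
    .save()
)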

