Why does Spark save mode "overwrite" always drop the table although "truncate" is true?

AkifCakir
New Contributor

Hi Dear Team,

I am trying to import data from Databricks into an Exasol DB.

I am using the following code with Spark version 3.0.1:

# Write over JDBC in overwrite mode; per the docs, "truncate" = "true"
# should make Spark truncate the existing table instead of dropping it.
# (Note: "fetchsize" only applies to reads; "batchsize" is the write-side option.)
dfw.write \
    .format("jdbc") \
    .option("driver", exa_driver) \
    .option("url", exa_url) \
    .option("dbtable", "table") \
    .option("user", username) \
    .option("password", exa_password) \
    .option("truncate", "true") \
    .option("numPartitions", "1") \
    .option("fetchsize", "100000") \
    .mode("overwrite") \
    .save()

The problem is that when the mode is "overwrite", it always drops the target table in the Exasol DB, although the Spark documentation (https://spark.apache.org/docs/3.0.1/sql-data-sources-jdbc.html#content) says the following about the "truncate" option:

truncate --> This is a JDBC writer related option. When SaveMode.Overwrite is enabled, this option causes Spark to truncate an existing table instead of dropping and recreating it. This can be more efficient, and prevents the table metadata (e.g., indices) from being removed. However, it will not work in some cases, such as when the new data has a different schema. It defaults to false. This option applies only to writing.

According to this explanation, I would expect that with option("truncate", "true") Spark would not drop but truncate the table. Nevertheless, it drops the table even in that case. Note: we could run a separate TRUNCATE command first and then write with append mode, but I do not want an extra second command; I would like to solve it in one command, as suggested in the Exasol documentation here (https://github.com/exasol/spark-exasol-connector/blob/main/doc/user_guide/user_guide.md#spark-save-m...) as well. The two-step workaround is sketched below for reference.
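For reference, the two-step workaround mentioned above could look roughly like this. This is only a sketch: the py4j DriverManager route is just one way to issue the TRUNCATE from Databricks (any Exasol SQL client would do), and the table name "table" mirrors the placeholder used in the question.

# Step 1: truncate the target table over plain JDBC via the JVM gateway.
# (Depending on the driver, you may need to load the driver class first,
# e.g. spark._jvm.Class.forName(exa_driver).)
conn = spark._jvm.java.sql.DriverManager.getConnection(
    exa_url, username, exa_password)
stmt = conn.createStatement()
stmt.executeUpdate("TRUNCATE TABLE table")
stmt.close()
conn.close()

# Step 2: append into the now-empty table, so nothing is dropped or recreated.
dfw.write \
    .format("jdbc") \
    .option("driver", exa_driver) \
    .option("url", exa_url) \
    .option("dbtable", "table") \
    .option("user", username) \
    .option("password", exa_password) \
    .mode("append") \
    .save()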

Am I missing something, or do you have a resolution?


4 REPLIES

Kaniz
Community Manager

Hi @AkifCakir! My name is Kaniz, and I'm the technical moderator here. Great to meet you, and thanks for your question! Let's see if your peers in the community have an answer to your question first; otherwise I will get back to you soon. Thanks.

jose_gonzalez
Moderator (accepted solution)

Hi @Akif Cakir,

You are correct, this is the expected behavior when using the JDBC connector. Docs here.

Have you tried using the "exasol" connector instead of JDBC? Do you also get this same behavior?
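In case it helps anyone landing here: a minimal write sketch with the native connector, using the option names from the spark-exasol-connector user guide linked in the question (the host/port and table values are placeholders; verify the exact option names against your connector version):

# Write through the spark-exasol-connector instead of plain JDBC.
dfw.write \
    .format("exasol") \
    .option("host", "<exasol-node-ip>") \
    .option("port", "8563") \
    .option("username", username) \
    .option("password", exa_password) \
    .option("table", "SCHEMA.TABLE") \
    .mode("overwrite") \
    .save()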

mick042
New Contributor III

Facing the same problem, I used the following (these are Snowflake Spark connector options):

sfOptions = {
    "sfURL": "<account>.snowflakecomputing.com",
    "sfAccount": "<account>",
    "sfUser": "<user>",
    "sfPassword": "***",
    "sfDatabase": "<database>",
    "sfSchema": "<schema>",
    "sfWarehouse": "<warehouse>",
    # Truncate the target table on overwrite instead of dropping it
    "truncate_table": "ON",
    # Write directly to the target table, skipping the staging table
    "usestagingtable": "OFF",
}

https://community.snowflake.com/s/article/How-to-Load-Data-in-Spark-with-Overwrite-mode-without-Chan...
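For context, those options would typically be passed to the Snowflake connector like this (a sketch; "TARGET_TABLE" is a placeholder, and older connector versions use the long format name "net.snowflake.spark.snowflake" instead of "snowflake"):

# With truncate_table=ON and usestagingtable=OFF, overwrite truncates
# the existing table and loads into it directly.
df.write \
    .format("snowflake") \
    .options(**sfOptions) \
    .option("dbtable", "TARGET_TABLE") \
    .mode("overwrite") \
    .save()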

Gembo
New Contributor II

@AkifCakir, were you able to find a way to truncate without dropping the table using the .write function? I am facing the same issue as well.
