How to overwrite using PySpark's JDBC without losing constraints on table columns

Abeeya
New Contributor II

Hello,

My table has a primary key constraint on a particular column, and I'm losing that primary key constraint each time I overwrite the table. What can I do to preserve it? Any heads-up would be appreciated.

I tried the following:

# "truncate" makes Spark issue TRUNCATE TABLE on overwrite instead of dropping and recreating the table
df.write.option("truncate", "true").jdbc(url=DATABASE_URL, table=DATABASE_TABLE, mode="overwrite", properties=DATABASE_PROPERTIES)
 

But it errored out as below when a column that was not in the table was added to the DataFrame:

AnalysisException: Column "new_col" not found in schema Some

1 ACCEPTED SOLUTION


Hubert-Dudek
Esteemed Contributor III

@Abeeya, mode "truncate" is the right way to preserve the table and its constraints. However, when you want to add a new column (a mismatched schema), Spark wants to drop and recreate the table anyway.
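A minimal sketch of one possible workaround, not from the original thread: if the extra column does not actually need to be written, align the DataFrame to the columns the target table already has, then truncate-overwrite so the primary key constraint survives. DATABASE_URL, DATABASE_TABLE, DATABASE_PROPERTIES and df are taken from the question; existing_cols is an illustrative name. If the new column really must land in the table, another option is to add it on the database side first (e.g. with an ALTER TABLE statement) so the schemas match before the write.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Read the current table schema over JDBC and keep only those columns,
# so the write schema matches the table and Spark can TRUNCATE instead of DROP.
existing_cols = spark.read.jdbc(
    url=DATABASE_URL, table=DATABASE_TABLE, properties=DATABASE_PROPERTIES
).columns

(df.select(*existing_cols)
    .write
    .option("truncate", "true")
    .jdbc(url=DATABASE_URL, table=DATABASE_TABLE, mode="overwrite",
          properties=DATABASE_PROPERTIES))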


2 REPLIES


Kaniz
Community Manager

Hi @Abeeya, how are you? Did @Hubert Dudek's answer help you in any way? Please let us know.
