Are there any recommended spark config settings for Delta/Databricks?

alejandrofm
Valued Contributor

Hi! I'm starting to test configs on Databricks, for example, to avoid corrupting data if two processes try to write at the same time:

.config('spark.databricks.delta.multiClusterWrites.enabled', 'false')

Or, if I need more partitions than the default:

.config('spark.databricks.adaptive.autoOptimizeShuffle.enabled', 'true')

Are there other recommended default settings? (Then comes the tuning for each individual job.)
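
For reference, this is roughly how I'm applying these settings when building the session (just a sketch; the app name is a placeholder, and on a Databricks cluster the spark session already exists, so spark.conf.set() works too):

from pyspark.sql import SparkSession

# Sketch only: apply the settings above at session build time.
spark = (
    SparkSession.builder
    .appName("delta-config-test")  # placeholder name
    .config("spark.databricks.delta.multiClusterWrites.enabled", "false")
    .config("spark.databricks.adaptive.autoOptimizeShuffle.enabled", "true")
    .getOrCreate()
)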

Thanks!

ACCEPTED SOLUTION

Ryan_Chynoweth
Honored Contributor III

Delta tables have optimistic concurrency control, so if two processes try to write to the same table, Delta does its best to handle both; if the transactions conflict, one of them will fail. You can also change the isolation level if you want stricter control over that.
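
For example, if you want stricter guarantees you can set the isolation level on a table (the table name here is just an example):

# Example only: "my_table" is a placeholder for your Delta table
spark.sql("""
    ALTER TABLE my_table
    SET TBLPROPERTIES ('delta.isolationLevel' = 'Serializable')
""")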


5 REPLIES


Hubert-Dudek
Esteemed Contributor III

Exactly. You can easily verify that, since commits are written as separate files in the Delta log.
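
For instance, in a Databricks notebook you can list the commit files under the table's _delta_log directory, or look at the table history, which shows one row per commit (the path below is a placeholder):

# Placeholder path: point this at your own Delta table location
display(dbutils.fs.ls("dbfs:/path/to/my_table/_delta_log"))

# Each commit also shows up as a row in the table history
spark.sql("DESCRIBE HISTORY delta.`dbfs:/path/to/my_table`").show(truncate=False)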

Regarding:

.config('spark.databricks.adaptive.autoOptimizeShuffle.enabled', 'true')

and other Spark optimization topics, please watch the Databricks video on the subject: https://www.youtube.com/watch?v=daXEp4HmS-E

Kaniz
Community Manager

Hi @Alejandro Martinez, how is it going? Did the docs help you at all?

alejandrofm
Valued Contributor

It helped, but I'm still testing different configurations. Thank you!

Anonymous
Not applicable

Hey there @Alejandro Martinez,

Hope everything is going well.

Just wanted to see if you were able to find an answer to your question. If yes, would you be happy to let us know and mark it as best so that other members can find the solution more quickly?

Cheers!
