Hi! I'm starting to test configs on Databricks. For example, to avoid corrupting data if two processes try to write at the same time:
.config('spark.databricks.delta.multiClusterWrites.enabled', 'false')
Or if I need more shuffle partitions than the default:
.config('spark.databricks.adaptive.autoOptimizeShuffle.enabled', 'true')
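For context, this is roughly how I'm wiring these into the session builder (the app name is just a placeholder, and the comments reflect what I'm hoping each setting does):

from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName('my-test-job')  # placeholder name
    # Trying to avoid corruption when two processes write at the same time:
    .config('spark.databricks.delta.multiClusterWrites.enabled', 'false')
    # Let Databricks auto-tune the number of shuffle partitions:
    .config('spark.databricks.adaptive.autoOptimizeShuffle.enabled', 'true')
    .getOrCreate()
)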
Are there any other recommended default settings? (Then comes the tuning for each job.)
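For the per-job tuning part, I was thinking of runtime overrides on top of the session defaults, something like this (400 is just an example value, not a recommendation):

# Example per-job override of the standard shuffle-partition knob:
spark.conf.set('spark.sql.shuffle.partitions', '400')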
Thanks!