Compared to OSS Spark, here are a few things users don't have to worry about when running the same job on Databricks.
- Memory management: Databricks uses an internal formula to allocate driver and executor heap memory based on the size of the instance, so it's safe to remove configurations like `spark.driver.memory` and `spark.executor.memory` (see the first sketch after this list).
- The number of executors: Configurations related to the number of executors, tasks per executor, and total executor count are not necessary when running jobs on Databricks. By default, Databricks launches one executor per instance, and the number of tasks that can run on that executor equals the total number of cores on the instance, so settings such as `spark.executor.instances` and `spark.executor.cores` can also be dropped (see the first sketch after this list).
- Avoid using `spark.stop()` / `sc.stop()`: Databricks manages cluster shutdown on its own, so stopping the SparkContext or SparkSession at the end of a job is not required (see the second sketch below).
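
To make the first two points concrete, here is a minimal PySpark sketch of the configuration cleanup. The specific config values shown on the OSS side are illustrative assumptions, not recommendations; the point is only which keys become unnecessary on Databricks.

```python
from pyspark.sql import SparkSession

# OSS Spark: memory and executor sizing are typically tuned by hand.
# (The values below are illustrative, not tuning advice.)
spark = (
    SparkSession.builder
    .appName("my-job")
    .config("spark.driver.memory", "8g")       # not needed on Databricks
    .config("spark.executor.memory", "16g")    # not needed on Databricks
    .config("spark.executor.instances", "10")  # not needed on Databricks
    .config("spark.executor.cores", "4")       # not needed on Databricks
    .getOrCreate()
)

# Databricks: heap sizes and executor counts are derived from the
# instance type, so the builder reduces to job-specific settings only.
# (In a Databricks notebook a `spark` session already exists, and
# getOrCreate() simply returns it.)
spark = (
    SparkSession.builder
    .appName("my-job")
    .getOrCreate()
)
```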
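
And a sketch of the job's exit path for the third point, assuming a simple ETL-style main function; the function and table names here are hypothetical placeholders.

```python
def run_job(spark):
    # Hypothetical transformation; table names are placeholders.
    df = spark.read.table("source_table")
    df.filter("value > 0").write.mode("overwrite").saveAsTable("target_table")

# OSS Spark: it is common to tear the session down explicitly:
#   run_job(spark)
#   spark.stop()

# Databricks: just finish the work and return. The platform manages
# cluster shutdown, so no spark.stop() / sc.stop() call is needed.
run_job(spark)
```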