If your job fails with this error, check the following:
According to
https://docs.databricks.com/jobs.html#jar-job-tips:
"Job output, such as log output emitted to stdout, is subject to a 20MB size limit. If the total output has a larger size, the run will be canceled and marked as failed."
That was my problem. To "fix" it, I just set the logging level to ERROR:
import org.apache.spark.SparkContext

val sc = SparkContext.getOrCreate(conf) // conf is your existing SparkConf
sc.setLogLevel("ERROR")                 // suppress INFO/WARN output so stdout stays under the 20 MB limit
This workaround works for me: I still get the ERROR messages, but the job runs successfully.
I hope it helps.