My general question:
Does a serverless compute job automatically scale?
The reason I'm trying a serverless job, with the performance optimization option disabled, is to make the job run effortlessly and cost-effectively.
I don't want to do any Spark tuning at all. I didn't use any special serverless job compute configuration and left everything at the defaults in Azure Databricks.
Any suggestions?
I have a normal-sized table, but there are other processes running on the same serverless compute, and sometimes the job fails with this error:
Failed to stage table A to B / Job aborted due to stage failure: java.lang.RuntimeException: During hash join, the build side was too large to fit in memory (1935662 rows, 765114406 bytes), and Photon was unable to partition it. Many rows likely share the same key. Try disabling Photon by adding set spark.databricks.photon.enabled=false; to your query.
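For reference, this is roughly what the workaround suggested in the error text would look like in a PySpark notebook or job task. The table names and the staging statement are placeholders for my actual query, and I'm assuming this config is honored on serverless compute:

from pyspark.sql import SparkSession

# In a Databricks notebook `spark` already exists; getOrCreate() just reuses it.
spark = SparkSession.builder.getOrCreate()

# Workaround suggested in the error text: disable Photon for this session so the
# hash join falls back to the non-Photon execution path.
spark.conf.set("spark.databricks.photon.enabled", "false")

# Hypothetical staging statement; replace with the query that actually fails.
spark.sql("INSERT OVERWRITE TABLE table_b SELECT * FROM table_a")

But I'd rather understand whether serverless should be handling this for me than hand-tune settings like this.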
