Databricks jobs fail on job compute but run on shared (all-purpose) compute

RobinK
Contributor

Hello,
Since last night, none of our ETL jobs in Databricks has been running, even though we have not made any code changes.

The same jobs (deployed with Databricks Asset Bundles) run on an all-purpose cluster but fail on a job cluster. We have not changed anything in the cluster configuration, and the Databricks Runtime version is identical on both (14.3 LTS, which includes Apache Spark 3.5.0 and Scala 2.12). We have also compared the code and double-checked the configurations.
What could cause the jobs to fail when nothing has changed on our side? Have there been changes on the Databricks side that could explain this?

Error messages:
[NOT_COLUMN] Argument `col` should be a Column, got Column.
[SESSION_ALREADY_EXIST] Cannot start a remote Spark session because there is a regular Spark session already running.
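In case it helps with the diagnosis: to us, both messages look as if the classic PySpark API and Spark Connect end up mixed in the same process on the job cluster. The snippet below is only a generic sketch of that situation, not our actual job code (the `sc://` address is a placeholder):

```python
# Generic illustration, not our job code: how both messages can arise when the
# classic PySpark API and Spark Connect are mixed in the same Python process.
from pyspark.sql import SparkSession
from pyspark.sql.column import Column as ClassicColumn
from pyspark.sql.connect.column import Column as ConnectColumn

# The two APIs define two different Column classes. That is why the first
# message reads "should be a Column, got Column": the expected and the
# supplied object share the class name but not the class.
print(ClassicColumn is ConnectColumn)  # False

# A regular (classic) Spark session already exists on the cluster ...
spark = SparkSession.builder.getOrCreate()

# ... so trying to open a remote (Spark Connect) session in the same process
# raises [SESSION_ALREADY_EXIST]; "sc://<workspace>" is only a placeholder.
# SparkSession.builder.remote("sc://<workspace>").getOrCreate()
```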

Is anyone else seeing problems with their jobs?

Best regards
Robin