When you run the jobs on YARN, they are submitted as two different applications, so each application gets its own Spark driver JVM.
In Databricks, a cluster has a single JVM for the Spark driver. When two jobs whose jars contain classes with the same fully qualified name run on that shared JVM, a class can be loaded from the wrong jar.
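To confirm which jar a class was actually resolved from, you can run a quick check on the driver. This is a minimal sketch; `com.example.Main` is a hypothetical placeholder, so substitute the fully qualified name of the class involved in your conflict:

```scala
// Minimal diagnostic sketch: print which jar the driver JVM resolved a
// class from. "com.example.Main" is a hypothetical placeholder.
object WhichJar {
  def main(args: Array[String]): Unit = {
    val cls = Class.forName("com.example.Main")
    // getCodeSource can be null for JDK/bootstrap classes, but for a
    // class packaged in an application jar it points at that jar.
    Option(cls.getProtectionDomain.getCodeSource) match {
      case Some(cs) => println(s"${cls.getName} loaded from: ${cs.getLocation}")
      case None     => println(s"${cls.getName} loaded from the bootstrap classpath")
    }
  }
}
```

If the printed location is the other job's jar, you have confirmed the class-loading conflict described above.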
Mitigations/Solutions:
- Run each job on its own on-demand (job) cluster. Each job then gets a dedicated driver JVM with only its own jar on the classpath.
- Rename the conflicting class, or move it to a different package, in one of the jars so the fully qualified names no longer collide, as sketched below.
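As a rough illustration of the second mitigation, one jar can move its class into a distinct package so the two fully qualified names differ. The names below (`com.example`, `jobb`) are hypothetical placeholders, not names from your jobs:

```scala
// Hypothetical sketch of the rename mitigation. Jar A keeps the original
// name; Jar B moves its class to a distinct package, so the driver JVM
// sees two different fully qualified names and cannot mix them up.

// Jar A (unchanged):
//   package com.example
//   object Main { def run(): Unit = println("running job A") }

// Jar B (package renamed to avoid the collision):
package com.example.jobb

object Main {
  def run(): Unit = println("running job B")
}
```

After the rename, remember to update the main-class setting in the job configuration for the renamed jar so it points at the new fully qualified name.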