Databricks does not host its Spark jars in a public Maven repository. The open-source (OSS) Spark jars can be used to compile your application, but they should not be used at execution time; i.e., your application should be packaged as a thin jar rather than a fat jar that bundles Spark.
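A minimal sbt sketch of this setup, assuming a Scala 2.12 project (adjust the Scala and Spark versions to match your Databricks Runtime): marking the Spark dependencies as "provided" keeps them out of the assembled jar, producing a thin jar that relies on the cluster to supply Spark at execution time.

```scala
// build.sbt -- illustrative only; versions here are assumptions and should
// match the Databricks Runtime the application will run on.
ThisBuild / scalaVersion := "2.12.15"

libraryDependencies ++= Seq(
  // "provided" scope: used for compilation, excluded from the packaged jar.
  "org.apache.spark" %% "spark-core" % "3.3.0" % "provided",
  "org.apache.spark" %% "spark-sql"  % "3.3.0" % "provided"
)
```

With this in place, a plugin such as sbt-assembly will build a jar containing only your application classes and non-Spark dependencies.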
If your application uses internal Spark APIs, the Databricks version of those classes may have different methods, and compiling against the OSS jars can cause classpath issues at runtime. In such scenarios, copy the jars installed on the cluster to your local machine and compile against those instead. Instructions for locating the jars are available at:
https://docs.databricks.com/dev-tools/databricks-connect.html#intellij-scala-or-java
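The cluster jars, once copied locally, can be wired into the build as unmanaged dependencies. A hypothetical sbt sketch, assuming the jars were copied into a local directory named databricks-jars (the directory name is an assumption, not a Databricks convention):

```scala
// build.sbt -- compile against jars copied down from the cluster so that
// internal Spark APIs match the runtime exactly.
Compile / unmanagedJars ++= {
  // "databricks-jars" is an assumed local directory holding the copied jars.
  val jarDir = baseDirectory.value / "databricks-jars"
  (jarDir ** "*.jar").classpath
}
```

These jars should still not be packaged into the application jar; they stand in for the "provided" OSS dependencies at compile time only.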