The Databricks Runtime and open-source Apache Spark share the same base API, so you can develop Spark jobs that run locally and then run them unchanged on Databricks, where all Databricks-specific features are available.
Always obtain the SparkSession via SparkSession.builder.getOrCreate(). In the Databricks environment the session is pre-created and treated as a singleton, so getOrCreate() returns that existing instance; on a local machine it creates a fresh session instead.
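A minimal sketch of this pattern (the app name and DataFrame logic below are illustrative, not part of any required setup):

```python
from pyspark.sql import SparkSession

# getOrCreate() returns the pre-created singleton session on Databricks
# and builds a new local session everywhere else; appName() is ignored
# when an existing session is returned.
spark = SparkSession.builder.appName("example-job").getOrCreate()

# The same DataFrame code runs unchanged in both environments.
df = spark.range(100).withColumnRenamed("id", "value")
print(df.count())
```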
In addition, you can test against a real cluster using Databricks Connect. Databricks Connect replaces Apache Spark/PySpark on your local machine and lets code written locally execute its jobs on a Databricks cluster. See https://docs.databricks.com/dev-tools/databricks-connect.html for setup details.
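A rough sketch of that workflow, assuming the classic databricks-connect package (the linked docs are authoritative; the install commands in the comments are indicative, and the exact version must match your cluster's runtime):

```python
# One-time setup, replacing local PySpark (do not install both packages):
#   pip uninstall pyspark
#   pip install databricks-connect   # pin a version matching the cluster runtime
#   databricks-connect configure     # prompts for workspace URL, token, cluster ID
from pyspark.sql import SparkSession

# With Databricks Connect installed, this session is backed by the remote
# cluster: the job below executes on Databricks, not on the local machine.
spark = SparkSession.builder.getOrCreate()
print(spark.range(10).count())
```

Because the session is still obtained through getOrCreate(), the same code runs locally, via Databricks Connect, or directly on a Databricks cluster without modification.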