The current workflow feature in Databricks offers a series of task types, such as DLT, dbt, Python scripts, Python files, JAR, etc. It would be good to add a Docker-based task to that list; it would simplify the development process considerably, especially for unit and integration testing.
One might argue that the Spark engine optimizes the code before the run, but not every use case needs an engine as powerful as Spark. Instead, Databricks could treat the Docker image as a simple processing unit with an input and an output, with no optimization required.