I'm trying to migrate some complex Python load processes into Databricks. Our load processes currently use pandas, and we're hoping to refactor into Spark soon. For now, I need to figure out how to adapt the functions that build our SQLAlchemy connection engines so I can bring the libraries that depend on SQLAlchemy over to Databricks.

I see that there is a databricks-sqlalchemy library, but there also seems to be a fairly strong option for connecting to Snowflake using a Spark session and a JDBC (I think?) connector. Do these Spark JDBC sessions work in a similar way to SQLAlchemy connection sessions? A rough sketch of both approaches as I currently understand them is below.
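To make the question concrete, here is a minimal sketch of the two options I'm weighing. All hostnames, tokens, table names, and the helper function names are placeholders I made up, and the URL/option formats reflect my reading of the databricks-sqlalchemy and Snowflake Spark connector docs, so they may not be exact:

```python
# Option 1: keep the SQLAlchemy-based libraries and swap the engine factory
# to the databricks-sqlalchemy dialect (pointing at a SQL warehouse / cluster
# HTTP path). Everything downstream keeps working against an Engine object.
from sqlalchemy import create_engine


def get_databricks_engine(host, http_path, token, catalog, schema):
    # databricks-sqlalchemy uses a token-based URL of roughly this shape
    url = (
        f"databricks://token:{token}@{host}"
        f"?http_path={http_path}&catalog={catalog}&schema={schema}"
    )
    return create_engine(url)


# Option 2: skip SQLAlchemy entirely and read Snowflake straight into a
# Spark DataFrame with the Snowflake Spark connector (which I believe ships
# with the Databricks runtime).
def read_snowflake_table(spark, sf_options, table):
    return (
        spark.read.format("snowflake")
        .options(**sf_options)          # sfUrl, sfUser, sfPassword, sfDatabase, ...
        .option("dbtable", table)
        .load()
    )


# Example Snowflake connector options (placeholder values):
sf_options = {
    "sfUrl": "myaccount.snowflakecomputing.com",
    "sfUser": "LOAD_USER",
    "sfPassword": "<password>",
    "sfDatabase": "RAW",
    "sfSchema": "PUBLIC",
    "sfWarehouse": "LOAD_WH",
}
```

What I'm trying to understand is whether the second style gives me anything analogous to a SQLAlchemy connection/session object, or whether it's purely lazy DataFrame reads and writes, which would change how much of our library code needs to be rewritten.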
Has anyone done this kind of migration before? What is the well-traveled path here?