I had trouble getting much speedup at all from Spark caching or the Databricks disk cache, which I think is essential when developing PySpark code iteratively in notebooks. So I developed a handy caching library for this, which has recently been open sourced: https://github.com/schibsted/dbfs-spark-cache. It adds support for remote caching through an explicit method on the PySpark DataFrame, which previously was only supported for the SQL UI cache. Proper use of remote DBFS caching also seems to avoid the slow queries and poor worker utilization that you often get after complex queries with multiple joins.
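To illustrate the underlying idea (not the library's actual API, see the repo for that): remote caching essentially means materializing a DataFrame to DBFS and reading it back, so downstream actions scan the stored files instead of re-executing the full query lineage. A minimal sketch, assuming a Databricks notebook where `spark` is already defined and the helper name, path, and input tables are made up for illustration:

```python
# Sketch of the idea behind remote DBFS caching (not the library's API):
# write the result of an expensive query to DBFS as parquet, then read it
# back so later actions hit the materialized data instead of re-running
# the joins. Assumes a Databricks notebook with `spark` in scope.

def cache_to_dbfs(df, path):
    """Write df to DBFS as parquet and return a DataFrame backed by it."""
    df.write.mode("overwrite").parquet(path)
    return spark.read.parquet(path)

# Hypothetical expensive query over two large tables:
expensive_df = big_table_a.join(big_table_b, "id").groupBy("key").count()
cached_df = cache_to_dbfs(expensive_df, "dbfs:/tmp/cache/expensive_df")

# Downstream work now reads the cached parquet instead of re-joining:
cached_df.filter("count > 10").show()
```

The actual library wraps this pattern up so you can call it as a method on the DataFrame itself; the point of the sketch is just why it helps with iterative notebook development.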
I'd be interested to know if others in the Databricks community will find this useful.