Pipelines with a lot of Spark caching - best practices for cleanup?

Michael_Galli
Contributor II

We have a situation where many concurrent Azure Data Factory notebooks run on a single Databricks interactive cluster (Azure E8 series driver, autoscaling 1-10 E4 series workers).

Each notebook reads data and calls dataframe.cache(), just to produce row counts before and after a dropDuplicates(), which we log as metrics for data quality purposes, similar to SSIS.
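Roughly, each notebook does something like this (the input path is made up for illustration):

df = spark.read.parquet("/mnt/raw/some_source")   # hypothetical source

df.cache()
count_before = df.count()              # row count before deduplication, for logging
df_clean = df.dropDuplicates()
count_after = df_clean.count()         # row count after deduplication
# ... write df_clean and log the counts; the cache is never explicitly released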

After a few hours, the jobs on the cluster start to fail and the cluster needs a reboot. I suspect the caching is the reason.

Is it recommended to call spark.catalog.clearCache() at the end of each notebook (and does this affect other jobs running on the cluster?), or are there other ideas for better cluster cache cleanup?

1 ACCEPTED SOLUTION

Hubert-Dudek
Esteemed Contributor III

The cache is spilled to disk dynamically when there is no room left in memory, so by itself I don't see it as an issue. Still, the best practice is to call unpersist() in your code once you are done with the cached data. As in the example below, cache/persist is only worthwhile when you run multiple operations on the same DataFrame; in other situations you don't need it, as Spark 3.0 with AQE and its optimizations will handle it better. Please also check the logs to find out why the cluster is actually failing.

df_cached = df.persist()
# caching pays off here because df_cached is reused several times below
df1 = df_cached.some_actions_and_transformations
df2 = df_cached.some_actions_and_transformations
df3 = df_cached.some_actions_and_transformations
# release the cached data once it is no longer needed
df_cached.unpersist()
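To connect this to the original scenario: calling unpersist() only releases the DataFrames this notebook cached itself, so it sidesteps the question of whether spark.catalog.clearCache() would touch caches belonging to other notebooks on the shared cluster. A minimal sketch of the count-before/after-dedup pattern with guaranteed cleanup (the input path is hypothetical):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

df = spark.read.parquet("/mnt/raw/orders")   # hypothetical source

df_cached = df.cache()
try:
    rows_before = df_cached.count()          # first action materializes the cache
    df_dedup = df_cached.dropDuplicates()
    rows_after = df_dedup.count()
    print(f"rows before: {rows_before}, after dedup: {rows_after}")
finally:
    # release this notebook's cache even if something above fails
    df_cached.unpersist()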


