Data Engineering
How can I enable disk cache in this scenario?

anupam676
New Contributor II

I have a notebook where I read multiple tables from Delta Lake (let's say the schema is db) and then apply transformations such as joins and filters across all of them (image enclosed). After transforming the data and writing it to a Delta table, I get an insight from Databricks suggesting I use the disk cache (image enclosed). How can I use the disk cache in this scenario? I used

spark.conf.get("spark.databricks.io.cache.enabled","true")

to enable the disk cache, but I still get the same insight.
Also, whenever I write the final DataFrame to any table in Delta Lake, I get the same insight to use the disk cache.
How can I fix this? Is there any other optimization technique I could adopt instead?
Please check the enclosed images.
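For context, the pipeline described would look roughly like the following sketch; the table and column names here are hypothetical placeholders standing in for the ones in the enclosed image:

# Hypothetical sketch of the pipeline described above; the table and
# column names (db.orders, db.customers, status, customer_id) are
# placeholders, not the actual ones from the enclosed image.
orders = spark.read.table("db.orders")
customers = spark.read.table("db.customers")

final_df = (
    orders
    .filter(orders.status == "COMPLETED")   # example filter
    .join(customers, on="customer_id")      # example join
)

# Write the transformed result to a Delta table
final_df.write.format("delta").mode("overwrite").saveAsTable("db.final_table")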
 
 
1 ACCEPTED SOLUTION


shan_chandra
Honored Contributor III

@anupam676 - could you please use the set function instead of get?

spark.conf.set("spark.databricks.io.cache.enabled", "true")
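Note that spark.conf.get(key, default) only reads a config value; the second argument is a default returned when the key is unset, so the original call never actually enabled the cache. A minimal sketch of the fix, run once near the top of the notebook before any table reads (the table name below is a hypothetical placeholder):

# spark.conf.get only READS a config value, so the cache was never on.
# Enable the disk cache before reading any tables:
spark.conf.set("spark.databricks.io.cache.enabled", "true")

df = spark.read.table("db.some_table")   # hypothetical table name
df.count()   # the first scan populates the workers' local SSD cache;
             # repeated scans of the same files are then served from it

Once enabled, the disk cache applies automatically to Parquet-backed reads (including Delta tables), so no further code changes should be needed.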


2 REPLIES


anupam676
New Contributor II

Thank you @shan_chandra 
