Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Need suggestion for a better caching strategy

vishwanath_1
New Contributor III

I have the below steps to perform:

1. Read a CSV file (a considerably large file, ~100 GB)

2. Add an index using the zipWithIndex function

3. Repartition the DataFrame

4. Pass it on to another function.

Can you suggest the most optimized caching strategy to execute these steps faster?
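For reference, a minimal PySpark sketch of these steps (the file path and the downstream `process` function are placeholders):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# 1. Read the ~100 GB CSV file (path is a placeholder)
df = spark.read.csv("/mnt/data/input.csv", header=True, inferSchema=True)

# 2. Add an index with zipWithIndex (an RDD operation, so drop to the RDD
#    and rebuild the DataFrame with an extra "index" column)
indexed_df = (
    df.rdd.zipWithIndex()
      .map(lambda pair: (*pair[0], pair[1]))
      .toDF(df.columns + ["index"])
)

# 3. Repartition the DataFrame
indexed_df = indexed_df.repartition(500)

# 4. Pass it on to another function (placeholder)
process(indexed_df)
```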

Below is the cluster configuration I have:

[Cluster configuration screenshot: a single worker with 256 GB of memory and 64 cores]

 

A few more queries:

1. I have always had a doubt: would using 1 worker suffice for my operation?

2. What is the optimal number of partitions to use when repartitioning here?


Accepted Solution

Lakshay
Databricks Employee

Hi @vishwanath_1, caching only comes into the picture when there are multiple references to a data source in your code. Based on the flow you described, I don't see that being the case for you: you read the data from the source only once, and there is no branching in your code. In this scenario, even if you enable caching, the cached data will never be reused.
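As a quick illustrative sketch (not from your code), caching would only pay off in a flow like the one below, where the same DataFrame feeds more than one action:

```python
df = spark.read.csv("/mnt/data/input.csv", header=True)

df.cache()  # worthwhile here: df is referenced by two branches below

df.count()            # branch 1: triggers the read and materializes the cache
df.describe().show()  # branch 2: served from the cached data, no second read

df.unpersist()  # release the cached blocks once both branches are done
```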

Regarding your other queries:

1. What is the optimal number of partitions? You should aim to divide the data into chunks of 200 MB-300 MB. Since you are reading ~100 GB of data, 100 GB / 200 MB = 500 partitions. This is roughly how many partitions you should target.
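As a sketch of that sizing rule (numbers taken from the post; `df` is assumed to be the loaded DataFrame):

```python
import math

input_size_gb = 100        # ~100 GB input, per the post
target_partition_mb = 200  # aim for 200-300 MB per partition

num_partitions = math.ceil(input_size_gb * 1024 / target_partition_mb)
print(num_partitions)  # 512, in line with the ~500 partitions above

df = df.repartition(num_partitions)
```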

2. Would using 1 worker suffice for the operation? This depends on 3 things: 1. data volume, 2. the type of operations, and 3. the cluster config. Since your single worker has 256 GB of memory, you are reading 100 GB of data, and the operations in your code don't look very memory-intensive, I think 1 worker will be enough from a memory perspective.

However, it can be time-consuming. Your single worker has only 64 cores, so if you repartition the data into 500 partitions, only 64 tasks can run at a time. To complete a single stage will therefore take ~8 waves of tasks (500 / 64 ≈ 8), which might not be very efficient.
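That wave arithmetic can be checked directly (numbers from the paragraph above):

```python
import math

partitions = 500  # tasks in the stage
cores = 64        # cores available on the single worker

waves = math.ceil(partitions / cores)
print(waves)  # 8 -> each stage runs ~8 sequential waves of tasks
```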

