Hi @vishwanath_1, caching only comes into the picture when there are multiple references to the same data in your code. Based on the flow you described, that isn't the case here: you read the data from the source only once and there is no branching in your code. In this scenario, even if you call cache(), the cached data will never be reused, so it only adds overhead.
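To make the distinction concrete, here is a minimal PySpark sketch (the paths and column names are hypothetical) contrasting a linear pipeline, where cache() never pays off, with a branching pipeline, where it does:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Case 1: linear pipeline (your flow). The source is scanned exactly once,
# so cache() would only add memory/disk overhead and never be reused.
df = spark.read.parquet("/mnt/source/events")  # hypothetical path
df.filter(F.col("status") == "ok").write.mode("overwrite").parquet("/mnt/target/events_ok")

# Case 2: branching pipeline. The same DataFrame feeds two separate actions,
# so cache() avoids re-reading the full source for the second one.
df = spark.read.parquet("/mnt/source/events").cache()
df.filter(F.col("status") == "ok").write.mode("overwrite").parquet("/mnt/target/events_ok")
df.groupBy("status").count().write.mode("overwrite").parquet("/mnt/target/status_counts")
df.unpersist()
```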
Regarding your other queries:
1. What is the optimal number of partitions: Aim to divide the data into chunks of roughly 200-300 MB each. Since you are reading ~100 GB of data, 100 GB / 200 MB ≈ 500 partitions, which is roughly the number of partitions you should target (see the first sketch after this list).
2. Would using 1 worker suffice for my operation: This depends on three things: data volume, the type of operations, and the cluster configuration. Since your single worker has 256 GB of memory, you are reading ~100 GB of data, and the operations in your code don't look particularly memory-intensive, one worker should be enough from a memory perspective.
However, it can be slow. Your single worker has only 64 cores, so only 64 tasks can run at a time. If you repartition the data into ~500 partitions, a single stage will need about 500 / 64 ≈ 8 waves of tasks to finish, which may not be very efficient (see the second sketch below).
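On point 1, here is a minimal sketch of the sizing arithmetic, assuming ~100 GB of input and a 200 MB target partition size (the path is hypothetical):

```python
# Rough partition-count estimate: total input size / target partition size.
input_size_mb = 100 * 1000        # ~100 GB of input
target_partition_mb = 200         # aim for 200-300 MB per partition
num_partitions = input_size_mb // target_partition_mb   # -> 500

# `spark` is the ambient SparkSession (available by default on Databricks).
df = spark.read.parquet("/mnt/source/events")  # hypothetical path
df = df.repartition(num_partitions)
```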
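On point 2, a back-of-the-envelope check of how many waves of tasks a single stage needs, assuming one 64-core worker and the ~500 partitions estimated above:

```python
import math

total_cores = 64        # one worker with 64 cores
num_partitions = 500    # tasks in a stage = number of partitions

# Only 64 tasks run in parallel, so a 500-task stage takes ceil(500 / 64) = 8 waves.
waves_per_stage = math.ceil(num_partitions / total_cores)
print(f"~{waves_per_stage} waves of tasks per stage on a single 64-core worker")
```

If that turns out to be too slow for your SLA, adding workers increases the total core count and reduces the number of waves per stage.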