I am using managed Databricks on GCP. I have 11 TB of data with 5B rows, and the data from the source is not partitioned. I'm having trouble loading the data into a DataFrame and doing further processing on it. I have tried a couple of executor configurations, but none of them seem to work. Can you guide me on the best practice for loading huge data into a DataFrame?
The data is in nested JSON format, and the schema is not consistent across documents. The source of the data is MongoDB.
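For reference, this is roughly how I'm loading the data today (simplified sketch; the GCS path is a placeholder, and in some runs I read through the MongoDB Spark connector instead of a JSON export):

```python
# Runs in a Databricks notebook, where `spark` is already defined.
# Simplified version of my current load; the path below is a placeholder.
# Schema inference over 11 TB of inconsistent nested JSON is one of my suspects.
df = (
    spark.read
         .option("multiLine", "true")           # some exported documents span multiple lines
         .json("gs://my-bucket/mongo-export/")  # placeholder path
)

df.printSchema()
```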
Things I have already tried:
n1-standard-4 with 20 executors: job aborted after 2+ hours
n1-standard-8 with 8 executors: job aborted after 2+ hours
I know these are not best practices, but I also tried setting the Spark configs below:
spark.executor.memory 0
spark.driver.memory 0
spark.driver.maxResultSize 0
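For completeness, the values that actually got applied can be checked from the notebook with something like this (assumes the standard Databricks `spark` session; as far as I understand, these are static configs, so setting them with spark.conf.set() at runtime would not take effect anyway):

```python
# Read back the effective values for the configs listed above.
for key in ("spark.executor.memory", "spark.driver.memory", "spark.driver.maxResultSize"):
    try:
        print(key, "=", spark.conf.get(key))
    except Exception:
        print(key, "is not set on this cluster")
```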
I want to know the right executor size, machine type, and Spark config for my use case. Any suggestion that helps us save credits would be an added advantage. We plan to run data quality checks on this data, so we will need to read the entire dataset.
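To give an idea of the kind of data quality checks we plan to run (simplified sketch; the column names here are made up for illustration):

```python
from pyspark.sql import functions as F

# Hypothetical example of the checks we have in mind: a full row count
# plus null counts for a few top-level fields. Column names are placeholders.
summary = df.agg(
    F.count(F.lit(1)).alias("row_count"),
    F.sum(F.col("_id").isNull().cast("int")).alias("null_id"),
    F.sum(F.col("customer").isNull().cast("int")).alias("null_customer"),
)
summary.show()
```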
Thanks in advance.