Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Need help with loading 11 TB of data into a Spark DataFrame using managed GCP Databricks.

sh23
New Contributor II

I am using managed Databricks on GCP. I have 11 TB of data with 5 billion rows. The data from the source is not partitioned. I'm having trouble loading the data into a DataFrame and doing further processing on it. I have tried a couple of executor configurations, but none of them seem to work. Can you guide me to the best practice for loading huge data into a DataFrame?

The data is in nested JSON format. The schema is not consistent across documents. The source of the data is MongoDB.
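
For context, a simplified sketch of what the load currently looks like (the GCS path is a placeholder, not our actual location):

# Simplified sketch of the current load (path is a placeholder).
# The schema is left to Spark's inference, since it varies across documents.
raw_df = spark.read.json("gs://<bucket>/mongo-export/*.json")
raw_df.printSchema()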

Things I have already tried:

20 n1-standard-4 executors - job aborted after 2+ hours

8 n1-standard-8 executors - job aborted after 2+ hours

I know these are not best practices, but I also tried setting the Spark configs below:

spark.executor.memory 0

spark.driver.memory 0

spark.driver.maxResultSize 0

I want to know the right executor size, machine type, and Spark config for my use case. Any suggestion that helps us save credits would be an added advantage. We plan to run data quality checks on this data, so we will need to read the entire dataset.

Thanks in advance.

1 REPLY

Hi @Shruti S,

Just a friendly follow-up: are you getting any errors? Please share the error stack trace so we can help you narrow down the root cause of this issue.
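
In the meantime, one pattern that often helps at this scale is to avoid letting Spark infer the schema over all 11 TB and to make mismatched documents non-fatal. Below is a rough sketch in PySpark, assuming the data sits as JSON files on GCS (the paths, sampling ratio, and Delta target are placeholders, not your actual layout): derive an explicit schema from a small sample, read the full dataset with that schema in PERMISSIVE mode, and persist it once in a columnar format before running your data quality checks.

from pyspark.sql import types as T

# `spark` is the SparkSession that Databricks provides in every notebook.
# Paths and the sampling ratio below are placeholders.
SRC = "gs://<bucket>/mongo-export/*.json"

# 1) Infer a schema from a small sample instead of scanning all 11 TB.
#    Hand-writing the schema once is even better if the layout is known.
sample_schema = (
    spark.read
    .option("samplingRatio", 0.001)   # use ~0.1% of the documents for inference
    .json(SRC)
    .schema
)

# 2) Add a corrupt-record column so documents that don't match the schema
#    are captured instead of aborting the whole job.
schema = T.StructType(
    sample_schema.fields
    + [T.StructField("_corrupt_record", T.StringType(), True)]
)

# 3) Read the full dataset with the explicit schema in PERMISSIVE mode.
df = (
    spark.read
    .schema(schema)
    .option("mode", "PERMISSIVE")
    .option("columnNameOfCorruptRecord", "_corrupt_record")
    .json(SRC)
)

# 4) Write the parsed result out once in a columnar format, so later passes
#    read Delta/Parquet instead of re-parsing the raw JSON.
df.write.mode("overwrite").format("delta").save("gs://<bucket>/bronze/mongo")

Once the parsed data is persisted as Delta or Parquet, the data quality checks read a compressed, columnar copy instead of re-parsing 11 TB of raw JSON on every pass, which also tends to save credits.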
