Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Spark code not running because of incorrect compute size

ashraf1395
Contributor II

I have a dataset with 260 billion records.

I need to group by four columns and compute the sum of four other columns.

I increased the driver and worker nodes to E32-size instances, with a maximum of 40 workers.

The job still gets stuck at the aggregation step, where I write the result to disk to persist it.

Any solution?

We could use serverless compute, but if I want to do it the normal way, what would be an optimal cluster size, and what optimisations could I apply?
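
Roughly, the job looks like this minimal PySpark sketch (the events table and the k1..k4 / v1..v4 column names are placeholders, not the real schema):

```python
from pyspark.sql import functions as F

# Placeholder source table; the real dataset has ~260 billion rows.
df = spark.read.table("events")

# Group by four key columns and sum four value columns.
agg = df.groupBy("k1", "k2", "k3", "k4").agg(
    F.sum("v1").alias("v1_sum"),
    F.sum("v2").alias("v2_sum"),
    F.sum("v3").alias("v3_sum"),
    F.sum("v4").alias("v4_sum"),
)

# Persist the aggregate to disk; this is the step that gets stuck.
agg.write.mode("overwrite").saveAsTable("events_agg")
```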

 

1 ACCEPTED SOLUTION

Kaniz_Fatma
Community Manager

Hi @ashraf1395, to speed up processing of a dataset this size (260 billion records, grouped and summed across multiple columns), consider these tips:

  • Adding more worker nodes boosts processing power, but watch the cost.
  • Allocate more cores per executor, and more executors, for greater parallelism.
  • More memory per executor reduces spilling to disk during the shuffle.
  • Partition the data on the grouping columns so rows of the same group are co-located, minimising data movement (see the first sketch after this list).
  • Sort and compact data files on the key columns (for example, Delta's OPTIMIZE <table> ZORDER BY (keys)) for faster scans.
  • Tune shuffle settings such as spark.sql.shuffle.partitions, and keep Adaptive Query Execution enabled.
  • Break the aggregation into smaller steps, rolling partial sums up at the end (see the second sketch after this list).
  • For occasional large one-off jobs, AWS Athena or Google BigQuery may be cost-effective alternatives.
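
For the partitioning and shuffle-tuning points, here is a minimal sketch. It reuses the hypothetical events table and k1..k4 / v1..v4 columns from the question, and the numbers are starting points to check against the Spark UI, not definitive settings:

```python
from pyspark.sql import functions as F

# Let Adaptive Query Execution coalesce or split shuffle partitions at
# runtime (on by default in recent Databricks runtimes; set for clarity).
spark.conf.set("spark.sql.adaptive.enabled", "true")
spark.conf.set("spark.sql.adaptive.coalescePartitions.enabled", "true")

# More shuffle partitions keep each task's slice of the data manageable;
# a few thousand is a plausible start for ~40 E32 workers.
spark.conf.set("spark.sql.shuffle.partitions", "4000")

# groupBy shuffles on the grouping keys exactly once; the aggregation
# then runs partition-locally on the co-located rows.
result = spark.read.table("events").groupBy("k1", "k2", "k3", "k4").agg(
    F.sum("v1").alias("v1_sum"),
    F.sum("v2").alias("v2_sum"),
    F.sum("v3").alias("v3_sum"),
    F.sum("v4").alias("v4_sum"),
)
result.write.mode("overwrite").saveAsTable("events_agg")
```

And for breaking the aggregation into smaller steps: sums are associative, so you can aggregate one slice at a time and roll the partials up in a cheap final pass. This sketch slices on a hypothetical event_date partition column; in practice you would pick whatever coarse partitioning your table already has:

```python
from pyspark.sql import functions as F

keys = ["k1", "k2", "k3", "k4"]
partial_sums = [F.sum(c).alias(f"{c}_sum") for c in ("v1", "v2", "v3", "v4")]

# Pass 1: aggregate each date slice independently and append the partials.
dates = [r[0] for r in
         spark.read.table("events").select("event_date").distinct().collect()]
for d in dates:
    (spark.read.table("events")
          .where(F.col("event_date") == d)
          .groupBy(*keys)
          .agg(*partial_sums)
          .write.mode("append").saveAsTable("events_agg_partial"))

# Pass 2: summing the partial sums gives the exact grand totals.
final_sums = [F.sum(f"{c}_sum").alias(f"{c}_sum")
              for c in ("v1", "v2", "v3", "v4")]
(spark.read.table("events_agg_partial")
      .groupBy(*keys)
      .agg(*final_sums)
      .write.mode("overwrite").saveAsTable("events_agg"))
```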

If you encounter further issues, feel free to ask for more targeted advice!


2 REPLIES


Rishabh_Tiwari
Community Manager

Hi @ashraf1395 ,

Thank you for reaching out to our community! We're here to help you. 

To ensure we provide you with the best support, could you please take a moment to review the responses and choose the one that best answers your question? Your feedback not only helps us assist you better but also benefits other community members who may have similar questions in the future.

If you found the answer helpful, consider giving it a kudo. If the response fully addresses your question, please mark it as the accepted solution. This will help us close the thread and ensure your question is resolved.

We appreciate your participation and are here to assist you further if you need it!

Thanks,

Rishabh
