
Spark code not running because of incorrect compute size

ashraf1395
Contributor II

I have a dataset with 260 billion records.

I need to group by four columns and compute the sum of four other columns.

I increased the driver and worker nodes to E32, with max workers set to 40.

The job is still stuck at the aggregate step, where I'm writing the result to disk to persist it.

Any solution?

We could use serverless compute, but if I want to do it the normal way, what would be the optimal cluster size, or what optimizations could I apply?
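For context, the shape of the job is roughly the sketch below; the table path and column names are placeholders, not my real schema.

    # Minimal sketch of the job; `spark` is the ambient Databricks session.
    from pyspark import StorageLevel
    from pyspark.sql import functions as F

    df = spark.read.format("delta").load("/path/to/source")  # ~260 billion rows

    agg = (
        df.groupBy("k1", "k2", "k3", "k4")      # the four grouping columns
          .agg(
              F.sum("v1").alias("v1_sum"),      # the four summed columns
              F.sum("v2").alias("v2_sum"),
              F.sum("v3").alias("v3_sum"),
              F.sum("v4").alias("v4_sum"),
          )
    )

    # This is the step that stalls: persisting forces the full shuffle to run.
    agg.persist(StorageLevel.DISK_ONLY)
    agg.count()  # action that materializes the persisted result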

 

1 ACCEPTED SOLUTION

Kaniz_Fatma
Community Manager

Hi @ashraf1395, to speed up the processing of your huge dataset (260 billion records grouped and summed across multiple columns), consider these straightforward tips:

  • Adding more worker nodes boosts parallelism, but watch costs.
  • Allocate more cores per executor, and more executors, for better parallel processing.
  • More memory per executor reduces spilling to disk during the shuffle.
  • Partition your data on the grouping columns to minimize data movement.
  • Sort and cluster the data files on the key columns (e.g. Z-ordering in Delta) for faster access.
  • Tune shuffle settings (e.g. spark.sql.shuffle.partitions, adaptive query execution) to cut shuffle time.
  • Break the aggregation into smaller steps and combine the partial results (see the sketch after this list).
  • For occasional large jobs, AWS Athena or Google BigQuery can be cost-effective alternatives.
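A minimal sketch of how the shuffle tuning and the "smaller steps" idea could look in PySpark; the paths, column names, the event_month partition column, and the config values are illustrative assumptions, not tuned recommendations.

    from pyspark.sql import functions as F

    # Illustrative starting points; tune against the Spark UI. With ~40 large
    # workers, size shuffle partitions so each task handles a few hundred MB
    # of shuffle data rather than relying on the default of 200 partitions.
    spark.conf.set("spark.sql.shuffle.partitions", "8000")
    spark.conf.set("spark.sql.adaptive.enabled", "true")  # coalesces shuffle partitions at runtime

    df = spark.read.format("delta").load("/path/to/source")  # placeholder path

    # Reusable aggregation expressions for the four summed columns.
    sums = [F.sum(c).alias(f"{c}_sum") for c in ["v1", "v2", "v3", "v4"]]

    # "Smaller steps": if the table has a partition column such as a month
    # (the `event_month` column here is hypothetical), aggregate slice by
    # slice so each shuffle stays small, then re-aggregate the partials.
    for month in ["2024-01", "2024-02", "2024-03"]:
        (df.filter(F.col("event_month") == month)
           .groupBy("k1", "k2", "k3", "k4")
           .agg(*sums)
           .write.format("delta").mode("append").save("/path/to/partials"))

    # Sums are associative, so summing the partial sums gives the same
    # result as a single pass over all 260 billion rows.
    partials = spark.read.format("delta").load("/path/to/partials")
    final = partials.groupBy("k1", "k2", "k3", "k4").agg(
        *[F.sum(f"{c}_sum").alias(f"{c}_sum") for c in ["v1", "v2", "v3", "v4"]]
    )

    # Writing the result to Delta gives you the on-disk copy directly, which
    # is usually cheaper than persist() followed by a separate write.
    final.write.format("delta").mode("overwrite").save("/path/to/agg_output")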

If you encounter further issues, feel free to ask for more targeted advice!


2 REPLIES


Rishabh_Tiwari
Community Manager

Hi @ashraf1395,

Thank you for reaching out to our community! We're here to help you. 

To ensure we provide you with the best support, could you please take a moment to review the responses and choose the one that best answers your question? Your feedback not only helps us assist you better but also benefits other community members who may have similar questions in the future.

If you found the answer helpful, consider giving it a kudo. If the response fully addresses your question, please mark it as the accepted solution. This will help us close the thread and ensure your question is resolved.

We appreciate your participation and are here to assist you further if you need it!

Thanks,

Rishabh
