Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Spark code not running because of incorrect compute size

ashraf1395
Contributor II

I have a dataset with 260 billion records.

I need to group by 4 columns and compute the sum of four other columns.

I increased the driver and worker nodes to E32 instances, with a maximum of 40 workers.

The job is still stuck at the aggregate step, where I'm writing the result to disk to persist it.

Any solution?

We could use serverless compute, but if I want to do it the normal way, what would be the optimal cluster size, or what optimisations could I apply?


1 REPLY 1

Rishabh_Tiwari
Databricks Employee

Hi @ashraf1395 ,

Thank you for reaching out to our community! We're here to help you. 

To ensure we provide you with the best support, could you please take a moment to review the response and choose the one that best answers your question? Your feedback not only helps us assist you better but also benefits other community members who may have similar questions in the future.

If you found the answer helpful, consider giving it a kudo. If the response fully addresses your question, please mark it as the accepted solution. This will help us close the thread and ensure your question is resolved.

We appreciate your participation and are here to assist you further if you need it!

Thanks,

Rishabh
