Databricks Vs Yarn - Resource Utilization

User16869510359
Esteemed Contributor

I have a spark-submit application that ran fine with 8GB of executor memory on YARN. I am testing the same job on a Databricks cluster with the same executor memory, but the job runs slower on Databricks.

1 ACCEPTED SOLUTION


User16869510359
Esteemed Contributor

This is not an apples-to-apples comparison. When you set 8GB as the executor memory on YARN, the container launched to run the executor JVM gets the full 8GB, and the heap's Xmx value is calculated accordingly. On Databricks, when you create a cluster with 8GB of memory, the memory given to the executor JVM is less than that, because 8GB is the total instance memory and part of it is reserved for the operating system and Databricks services.
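A quick way to see the difference is to ask the executors themselves for their JVM heap ceiling, which reflects the Xmx the platform actually set. Below is a minimal Scala sketch, assuming a shell or notebook where `spark` is already defined; run it on both the YARN cluster and the Databricks cluster and compare the printed values. (Tasks may not land on every executor in one run; repeat or raise the partition count if needed.)

```scala
// Probe the executor JVM heap ceiling (-Xmx) from inside tasks.
// On YARN with --executor-memory 8G this should be close to 8 GB;
// on a Databricks cluster built from 8 GB instances it will be smaller,
// since the instance memory also covers the OS and Databricks services.
val heapBytesPerExecutor = spark.sparkContext
  .parallelize(1 to 100, numSlices = 8)      // spread tasks across executors
  .map(_ => Runtime.getRuntime.maxMemory)    // heap ceiling seen by that executor's JVM
  .collect()
  .distinct

heapBytesPerExecutor.foreach(b => println(f"executor max heap: ${b / 1e9}%.2f GB"))
```

On Databricks you can also compare `spark.conf.get("spark.executor.memory")` against the instance size to see how much memory is held back from the executor JVM.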


