Data Engineering
I am struggling to optimize my Spark Application Code. Is there someone who can assist me in optimizing it? I am using Spark over Hadoop Yarn.

T__V__K__Hanuma
New Contributor II

Let me elaborate on my problem. I am using a 6-node Spark cluster over Hadoop YARN, in which one node acts as the master and the other five act as worker nodes. I am running my Spark application on the cluster. After it completes, the Spark UI shows a long execution time driven by Scheduler Delay and Task Deserialization Time, even though the Executor Computing Time is very low. The total running time is 81 sec when it should complete in less than 8 sec. I could not find help in any existing posts online, so I hope someone here can help me solve this. What is the way to reduce both Scheduler Delay and Task Deserialization Time? Is the issue due to sub-optimally written code, or to a bad YARN and Spark configuration? I attach a few images below and will share anything else required for further analysis (YARN and Spark configuration, application code, etc.) if needed. Thanks in advance.

Attached screenshots: 01_Jobs, 02_DAG_and_Metrics, 03_Event_Timeline, 04_Tasks

4 REPLIES

Avinash_94
New Contributor III

Your question is quite broad, and optimising requires multiple inputs. You can start with this doc: https://docs.databricks.com/optimizations/index.html

If you ask something specific, I can elaborate.
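In the meantime, when Scheduler Delay and Task Deserialization Time dominate while Executor Computing Time stays low, the overhead is usually per task rather than in the computation itself: very many tiny tasks, or large objects captured in the task closures. Here is a minimal, illustrative PySpark sketch of those two points (all names and values below are placeholders, not taken from your application):

```python
# Illustrative sketch only, not the original application: two common sources of
# per-task overhead and one way to address each of them.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("per-task-overhead-demo")
    # Kryo is typically faster and more compact than the default Java
    # serialization for data that is shuffled or broadcast.
    .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    .getOrCreate()
)
sc = spark.sparkContext

# A large object captured directly in a lambda is re-serialized into every task,
# inflating Task Deserialization Time; broadcasting ships it to each executor once.
big_lookup = {i: i * i for i in range(1_000_000)}   # placeholder lookup table
bcast = sc.broadcast(big_lookup)

# Thousands of tiny tasks inflate Scheduler Delay; keep the partition count
# proportional to the total executor cores (40 here is only a placeholder).
rdd = sc.parallelize(range(10_000_000), numSlices=40)
result = rdd.map(lambda x: bcast.value.get(x % 1_000_000, 0)).sum()
print(result)
```

If neither of these applies to your job, sharing the submit settings (executor count, cores, memory) would narrow it down further.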

Pallav
New Contributor II

Most of the optimisation comes from choosing the number of partitions to create for the data: too many cause large shuffles on wide-dependency operations, and too few reduce parallelism. To minimise the time spent in shuffle operations, use Z-ordering so that data with a high chance of falling under the same aggregation is located in the same or nearby partitions.
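To make that concrete, here is a small illustrative PySpark sketch, assuming the data can be written as a Delta table (Z-ordering is a Delta/Databricks feature); the paths, column names, and partition count are made up for the example:

```python
# Illustrative sketch only: an explicit partition count before a wide aggregation,
# plus Z-ordering a Delta table on the column most queries aggregate or filter on.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("partitioning-zorder-demo").getOrCreate()

# Hypothetical raw input, written once as a Delta table so it can be Z-ordered.
events = spark.read.parquet("/data/events_raw")
events.write.mode("overwrite").format("delta").save("/data/events")

# Z-ordering co-locates rows with similar customer_id values in the same files,
# so queries that filter or aggregate on customer_id read less unrelated data.
spark.sql("OPTIMIZE delta.`/data/events` ZORDER BY (customer_id)")

# Choose the partition count deliberately: too many partitions means a large
# shuffle on wide operations like groupBy, too few means idle cores.
agg = (
    spark.read.format("delta").load("/data/events")
         .repartition(48, "customer_id")   # 48 is a placeholder, tune to your cores
         .groupBy("customer_id")
         .count()
)
agg.write.mode("overwrite").parquet("/data/agg_by_customer")
```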

Anonymous
Not applicable

Hi @T. V. K. Hanuman

Hope everything is going great.

Just wanted to check in if you were able to resolve your issue. If yes, would you be happy to mark an answer as best so that other members can find the solution more quickly? If not, please tell us so we can help you. 

Cheers!

Hi @Vidula Khanna

My problem is not yet solved.
