<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: I am struggling to optimize my Spark Application Code. Is there someone who can assist me in optimizing it? I am using Spark over Hadoop Yarn. in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/i-am-struggling-to-optimize-my-spark-application-code-is-there/m-p/5930#M2193</link>
    <description>&lt;P&gt;Your question is quite broad; optimising requires multiple inputs. You can start with this doc: &lt;A href="https://docs.databricks.com/optimizations/index.html" target="_blank"&gt;https://docs.databricks.com/optimizations/index.html&lt;/A&gt;&lt;/P&gt;&lt;P&gt;If you ask something specific, I can elaborate.&lt;/P&gt;</description>
    <pubDate>Fri, 14 Apr 2023 07:14:00 GMT</pubDate>
    <dc:creator>Avinash_94</dc:creator>
    <dc:date>2023-04-14T07:14:00Z</dc:date>
    <item>
      <title>I am struggling to optimize my Spark Application Code. Is there someone who can assist me in optimizing it? I am using Spark over Hadoop Yarn.</title>
      <link>https://community.databricks.com/t5/data-engineering/i-am-struggling-to-optimize-my-spark-application-code-is-there/m-p/5929#M2192</link>
      <description>&lt;P&gt;I will elaborate on my problem. I am using a 6-node Spark cluster over Hadoop Yarn, of which one node acts as the master and the other 5 act as worker nodes. I am running my Spark application over the cluster. After completion, when I check the Spark UI, I observe a long execution time due to long Scheduler Delay and Task Deserialization Time, even though the Executor Computing Time is very low. The total running time is 81 sec when it should complete in less than 8 sec. I could not get help from any existing posts on the net. I wish someone could help me solve this. What is the way to reduce both Scheduler Delay and Task Deserialization Time? Is the issue due to a sub-optimal way of writing code or due to a bad configuration of Yarn and Spark? I attach a few images below. I will share anything else required for further analysis, such as the Yarn and Spark configuration, application code, etc., if necessary. Thanks in advance.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper" image-alt="01_Jobs"&gt;&lt;img src="https://community.databricks.com/t5/image/serverpage/image-id/359i271AEF177E36A52E/image-size/large?v=v2&amp;amp;px=999" role="button" title="01_Jobs" alt="01_Jobs" /&gt;&lt;/span&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper" image-alt="02_DAG_and_Metrics"&gt;&lt;img src="https://community.databricks.com/t5/image/serverpage/image-id/366i98F7F30988D025A0/image-size/large?v=v2&amp;amp;px=999" role="button" title="02_DAG_and_Metrics" alt="02_DAG_and_Metrics" /&gt;&lt;/span&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper" image-alt="03_Event_Timeline"&gt;&lt;img src="https://community.databricks.com/t5/image/serverpage/image-id/374iF7C9BB1AD0AC0CD8/image-size/large?v=v2&amp;amp;px=999" role="button" title="03_Event_Timeline" alt="03_Event_Timeline" /&gt;&lt;/span&gt;&lt;span class="lia-inline-image-display-wrapper" image-alt="04_Tasks"&gt;&lt;img src="https://community.databricks.com/t5/image/serverpage/image-id/365i6EA5E3B4312B9077/image-size/large?v=v2&amp;amp;px=999" role="button" title="04_Tasks" alt="04_Tasks" /&gt;&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 13 Apr 2023 10:36:27 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/i-am-struggling-to-optimize-my-spark-application-code-is-there/m-p/5929#M2192</guid>
      <dc:creator>T__V__K__Hanuma</dc:creator>
      <dc:date>2023-04-13T10:36:27Z</dc:date>
    </item>
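The symptoms above (high Scheduler Delay and Task Deserialization Time with very low Executor Computing Time) usually point at per-task overhead rather than slow computation: large serialized task closures, locality waits, or default Java serialization. A hedged starting point in `spark-defaults.conf`; these settings are real Spark options, but the values shown are illustrative assumptions, not tuned for this cluster:

```properties
# Kryo is generally faster and more compact than default Java
# serialization, which can shrink Task Deserialization Time.
spark.serializer            org.apache.spark.serializer.KryoSerializer
# By default Spark waits up to 3s per locality level before scheduling
# a task on a less-local node; lowering the wait can cut Scheduler
# Delay on short jobs at the cost of data locality.
spark.locality.wait         0s
# Roughly 2-3 tasks per available core across the 5 workers
# (illustrative; adjust to the actual core count).
spark.default.parallelism   30
```

If the delay persists, it is also worth checking whether the task closures capture large driver-side objects; broadcasting such objects instead reduces the bytes every task must deserialize.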
    <item>
      <title>Re: I am struggling to optimize my Spark Application Code. Is there someone who can assist me in optimizing it? I am using Spark over Hadoop Yarn.</title>
      <link>https://community.databricks.com/t5/data-engineering/i-am-struggling-to-optimize-my-spark-application-code-is-there/m-p/5930#M2193</link>
      <description>&lt;P&gt;Your question is quite broad; optimising requires multiple inputs. You can start with this doc: &lt;A href="https://docs.databricks.com/optimizations/index.html" target="_blank"&gt;https://docs.databricks.com/optimizations/index.html&lt;/A&gt;&lt;/P&gt;&lt;P&gt;If you ask something specific, I can elaborate.&lt;/P&gt;</description>
      <pubDate>Fri, 14 Apr 2023 07:14:00 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/i-am-struggling-to-optimize-my-spark-application-code-is-there/m-p/5930#M2193</guid>
      <dc:creator>Avinash_94</dc:creator>
      <dc:date>2023-04-14T07:14:00Z</dc:date>
    </item>
    <item>
      <title>Re: I am struggling to optimize my Spark Application Code. Is there someone who can assist me in optimizing it? I am using Spark over Hadoop Yarn.</title>
      <link>https://community.databricks.com/t5/data-engineering/i-am-struggling-to-optimize-my-spark-application-code-is-there/m-p/5931#M2194</link>
      <description>&lt;P&gt;Most of the optimisation comes from choosing the number of partitions to create for the data: too many cause a large shuffle on wide-dependency operations, and too few reduce parallelism. To minimise the time spent in shuffle operations, use Z-ordering so that data with a high chance of falling under the same aggregation is located on the same or nearby partitions.&lt;/P&gt;</description>
      <pubDate>Fri, 14 Apr 2023 10:15:43 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/i-am-struggling-to-optimize-my-spark-application-code-is-there/m-p/5931#M2194</guid>
      <dc:creator>Pallav</dc:creator>
      <dc:date>2023-04-14T10:15:43Z</dc:date>
    </item>
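The partition-count trade-off described above (too few partitions under-use the cores, too many pay per-task scheduling and shuffle overhead) can be sketched with a toy cost model. All of the numbers and the cost formula are made-up assumptions for illustration, not measurements from this cluster:

```python
import math

def estimated_stage_time(n_partitions, n_cores=5, per_task_overhead=0.05,
                         total_work=10.0, shuffle_cost_per_partition=0.02):
    """Toy model of stage time on a 5-worker cluster (one task per core).

    Too few partitions leave cores idle; too many multiply the fixed
    per-task scheduling/deserialization overhead and shuffle cost.
    All constants are illustrative assumptions.
    """
    waves = math.ceil(n_partitions / n_cores)        # task waves per stage
    compute_per_task = total_work / n_partitions     # work split across tasks
    return (waves * (compute_per_task + per_task_overhead)
            + shuffle_cost_per_partition * n_partitions)

# Sweep partition counts: the minimum sits near a small multiple
# of the core count, not at either extreme.
times = {n: estimated_stage_time(n) for n in (1, 5, 20, 200, 2000)}
best = min(times, key=times.get)
```

Under these toy constants the sweep bottoms out at 5 partitions (one wave on 5 cores); with cheaper per-task overhead the optimum shifts to a few partitions per core, which matches the usual rule of thumb.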
    <item>
      <title>Re: I am struggling to optimize my Spark Application Code. Is there someone who can assist me in optimizing it? I am using Spark over Hadoop Yarn.</title>
      <link>https://community.databricks.com/t5/data-engineering/i-am-struggling-to-optimize-my-spark-application-code-is-there/m-p/5932#M2195</link>
      <description>&lt;P&gt;Hi @T. V. K. Hanuman&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Hope everything is going great.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Just wanted to check in to see if you were able to resolve your issue. If yes, would you be happy to mark an answer as best so that other members can find the solution more quickly? If not, please tell us so we can help you.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Cheers!&lt;/P&gt;</description>
      <pubDate>Sun, 16 Apr 2023 04:58:12 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/i-am-struggling-to-optimize-my-spark-application-code-is-there/m-p/5932#M2195</guid>
      <dc:creator>Anonymous</dc:creator>
      <dc:date>2023-04-16T04:58:12Z</dc:date>
    </item>
    <item>
      <title>Re: I am struggling to optimize my Spark Application Code. Is there someone who can assist me in optimizing it? I am using Spark over Hadoop Yarn.</title>
      <link>https://community.databricks.com/t5/data-engineering/i-am-struggling-to-optimize-my-spark-application-code-is-there/m-p/5933#M2196</link>
      <description>&lt;P&gt;Hi @Vidula Khanna&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;My problem is not yet solved.&lt;/P&gt;</description>
      <pubDate>Mon, 24 Apr 2023 10:16:16 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/i-am-struggling-to-optimize-my-spark-application-code-is-there/m-p/5933#M2196</guid>
      <dc:creator>T__V__K__Hanuma</dc:creator>
      <dc:date>2023-04-24T10:16:16Z</dc:date>
    </item>
  </channel>
</rss>

