<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Exponentially slower joins using Pyspark in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/exponentially-slower-joins-using-pyspark/m-p/11909#M6824</link>
    <description>&lt;P&gt;Hi @Lee Bevers,&lt;/P&gt;&lt;P&gt;Hope all is well! Just checking in: were you able to resolve your issue? If so, would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help.&lt;/P&gt;&lt;P&gt;We'd love to hear from you.&lt;/P&gt;&lt;P&gt;Thanks!&lt;/P&gt;</description>
    <pubDate>Tue, 06 Sep 2022 12:45:42 GMT</pubDate>
    <dc:creator>Vidula</dc:creator>
    <dc:date>2022-09-06T12:45:42Z</dc:date>
    <item>
      <title>Exponentially slower joins using Pyspark</title>
      <link>https://community.databricks.com/t5/data-engineering/exponentially-slower-joins-using-pyspark/m-p/11906#M6821</link>
      <description>&lt;P&gt;I'm new to Pyspark, but I've stumbled across an odd issue when I perform joins, where the action seems to take exponentially longer every time I add a new join to a function I'm writing.&lt;/P&gt;&lt;P&gt;I'm trying to join a dataset of ~3 million records to one of ~17 million ten times (each time with slightly different join criteria). Each join on its own takes 15-50 seconds to commit, but when I chain the joins together in one function, the action takes exponentially longer (e.g. join 2 runs in a minute, by join 5 the function takes about 11 minutes, and by join 7/8 the notebook runs for hours and then fails with a generic cluster error).&lt;/P&gt;&lt;P&gt;I've tried repartitioning and caching the data before the joins, but if anything this seems to slow them down even further.&lt;/P&gt;&lt;P&gt;I can't work out what I've done wrong, and from QAing every line of the notebook, nothing obvious is jumping out.&lt;/P&gt;</description>
      <pubDate>Sat, 30 Jul 2022 06:03:08 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/exponentially-slower-joins-using-pyspark/m-p/11906#M6821</guid>
      <dc:creator>datatello</dc:creator>
      <dc:date>2022-07-30T06:03:08Z</dc:date>
    </item>
    <item>
      <title>Re: Exponentially slower joins using Pyspark</title>
      <link>https://community.databricks.com/t5/data-engineering/exponentially-slower-joins-using-pyspark/m-p/11907#M6822</link>
      <description>&lt;P&gt;There is probably a bug in your function.&lt;/P&gt;&lt;P&gt;What I suggest is to first execute all the joins manually and run an explain to get the query plan.&lt;/P&gt;&lt;P&gt;Then compare that query plan to the one created by your function.&lt;/P&gt;&lt;P&gt;Especially if you use a loop in your function, that will probably be the culprit.&lt;/P&gt;</description>
      <pubDate>Mon, 01 Aug 2022 09:11:01 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/exponentially-slower-joins-using-pyspark/m-p/11907#M6822</guid>
      <dc:creator>-werners-</dc:creator>
      <dc:date>2022-08-01T09:11:01Z</dc:date>
    </item>
    <item>
      <title>Re: Exponentially slower joins using Pyspark</title>
      <link>https://community.databricks.com/t5/data-engineering/exponentially-slower-joins-using-pyspark/m-p/11908#M6823</link>
      <description>&lt;P&gt;Hi @Lee Bevers,&lt;/P&gt;&lt;P&gt;Which DBR version are you using? Could you share some code snippets? Can you share the physical query plans and the DAGs?&lt;/P&gt;</description>
      <pubDate>Wed, 17 Aug 2022 22:11:12 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/exponentially-slower-joins-using-pyspark/m-p/11908#M6823</guid>
      <dc:creator>jose_gonzalez</dc:creator>
      <dc:date>2022-08-17T22:11:12Z</dc:date>
    </item>
    <item>
      <title>Re: Exponentially slower joins using Pyspark</title>
      <link>https://community.databricks.com/t5/data-engineering/exponentially-slower-joins-using-pyspark/m-p/11909#M6824</link>
      <description>&lt;P&gt;Hi @Lee Bevers,&lt;/P&gt;&lt;P&gt;Hope all is well! Just checking in: were you able to resolve your issue? If so, would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help.&lt;/P&gt;&lt;P&gt;We'd love to hear from you.&lt;/P&gt;&lt;P&gt;Thanks!&lt;/P&gt;</description>
      <pubDate>Tue, 06 Sep 2022 12:45:42 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/exponentially-slower-joins-using-pyspark/m-p/11909#M6824</guid>
      <dc:creator>Vidula</dc:creator>
      <dc:date>2022-09-06T12:45:42Z</dc:date>
    </item>
  </channel>
</rss>