<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Lazy evaluation in serverless vs all purpose compute ? in Get Started Discussions</title>
    <link>https://community.databricks.com/t5/get-started-discussions/lazy-evaluation-in-serverless-vs-all-purpose-compute/m-p/115517#M9365</link>
    <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/153274"&gt;@aniket07&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;With Serverless compute, Spark evaluates lazily and only checks whether the path exists when you perform an action (such as display()), so the error surfaces at that point. On an All-Purpose cluster, by contrast, Spark validates the path immediately when you create the DataFrame, so the error appears right away.&lt;BR /&gt;&lt;BR /&gt;The difference comes down to when each environment performs path validation, i.e. when it first accesses storage.&lt;/P&gt;</description>
    <pubDate>Tue, 15 Apr 2025 13:09:00 GMT</pubDate>
    <dc:creator>SP_6721</dc:creator>
    <dc:date>2025-04-15T13:09:00Z</dc:date>
    <item>
      <title>Lazy evaluation in serverless vs all purpose compute ?</title>
      <link>https://community.databricks.com/t5/get-started-discussions/lazy-evaluation-in-serverless-vs-all-purpose-compute/m-p/115459#M9364</link>
      <description>&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="aniket07_0-1744691152378.png" style="width: 400px;"&gt;&lt;img src="https://community.databricks.com/t5/image/serverpage/image-id/15991iC7B6EBC9130E38E8/image-size/medium?v=v2&amp;amp;px=400" role="button" title="aniket07_0-1744691152378.png" alt="aniket07_0-1744691152378.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="aniket07_1-1744691251247.png" style="width: 400px;"&gt;&lt;img src="https://community.databricks.com/t5/image/serverpage/image-id/15992iEECAA766B55AE209/image-size/medium?v=v2&amp;amp;px=400" role="button" title="aniket07_1-1744691251247.png" alt="aniket07_1-1744691251247.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="aniket07_2-1744691310065.png" style="width: 400px;"&gt;&lt;img src="https://community.databricks.com/t5/image/serverpage/image-id/15993i216965A5276858E0/image-size/medium?v=v2&amp;amp;px=400" role="button" title="aniket07_2-1744691310065.png" alt="aniket07_2-1744691310065.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;As you can see, right now I am connected to serverless compute, and when I give a wrong path, Spark evaluates lazily and only raises the error on display().&amp;nbsp;&lt;BR /&gt;However, when I switch from serverless to my all-purpose cluster, I get the error when I create the df itself.&lt;BR /&gt;Why is that?&lt;/P&gt;</description>
      <pubDate>Tue, 15 Apr 2025 04:29:36 GMT</pubDate>
      <guid>https://community.databricks.com/t5/get-started-discussions/lazy-evaluation-in-serverless-vs-all-purpose-compute/m-p/115459#M9364</guid>
      <dc:creator>aniket07</dc:creator>
      <dc:date>2025-04-15T04:29:36Z</dc:date>
    </item>
    <item>
      <title>Re: Lazy evaluation in serverless vs all purpose compute ?</title>
      <link>https://community.databricks.com/t5/get-started-discussions/lazy-evaluation-in-serverless-vs-all-purpose-compute/m-p/115517#M9365</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/153274"&gt;@aniket07&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;With Serverless compute, Spark evaluates lazily and only checks whether the path exists when you perform an action (such as display()), so the error surfaces at that point. On an All-Purpose cluster, by contrast, Spark validates the path immediately when you create the DataFrame, so the error appears right away.&lt;BR /&gt;&lt;BR /&gt;The difference comes down to when each environment performs path validation, i.e. when it first accesses storage.&lt;/P&gt;</description>
      <pubDate>Tue, 15 Apr 2025 13:09:00 GMT</pubDate>
      <guid>https://community.databricks.com/t5/get-started-discussions/lazy-evaluation-in-serverless-vs-all-purpose-compute/m-p/115517#M9365</guid>
      <dc:creator>SP_6721</dc:creator>
      <dc:date>2025-04-15T13:09:00Z</dc:date>
    </item>
    <item>
      <title>Re: Lazy evaluation in serverless vs all purpose compute ?</title>
      <link>https://community.databricks.com/t5/get-started-discussions/lazy-evaluation-in-serverless-vs-all-purpose-compute/m-p/115573#M9366</link>
      <description>&lt;P&gt;Based on the scenario, what&amp;nbsp;&lt;A href="https://community.databricks.com/t5/user/viewprofilepage/user-id/156441" target="_blank"&gt;https://community.databricks.com/t5/user/viewprofilepage/user-id/156441&lt;/A&gt;&amp;nbsp;says is correct: although the eager evaluation property is false in both cases, on&amp;nbsp;&lt;SPAN&gt;All-Purpose clusters Spark still checks the path immediately when you create the DataFrame.&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 15 Apr 2025 18:01:10 GMT</pubDate>
      <guid>https://community.databricks.com/t5/get-started-discussions/lazy-evaluation-in-serverless-vs-all-purpose-compute/m-p/115573#M9366</guid>
      <dc:creator>sridharplv</dc:creator>
      <dc:date>2025-04-15T18:01:10Z</dc:date>
    </item>
  </channel>
</rss>
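The behavior discussed in the thread can be sketched without a Spark cluster. The following plain-Python analogy is illustrative only: `EagerReader` and `LazyReader` are hypothetical classes, not Spark APIs. It shows why, under eager path validation (as the replies describe for All-Purpose clusters), a bad path fails when the DataFrame is defined, while under lazy validation (as described for Serverless) the same bad path only fails when an action forces evaluation.

```python
import os


class EagerReader:
    """Mimics the described All-Purpose behavior: path checked at creation."""

    def __init__(self, path):
        # Validation happens immediately, so a bad path fails here.
        if not os.path.exists(path):
            raise FileNotFoundError(f"Path does not exist: {path}")
        self.path = path


class LazyReader:
    """Mimics the described Serverless behavior: path checked at the action."""

    def __init__(self, path):
        # No validation yet -- only the "plan" (the path) is recorded.
        self.path = path

    def show(self):
        # The action triggers evaluation, so the check happens now.
        if not os.path.exists(self.path):
            raise FileNotFoundError(f"Path does not exist: {self.path}")
        return f"rows from {self.path}"


bad_path = "/mnt/does/not/exist"

# Lazy model: constructing the "DataFrame" with a bad path succeeds...
df = LazyReader(bad_path)
# ...and the error only surfaces at the action (like display() in the thread):
try:
    df.show()
except FileNotFoundError as e:
    print("lazy: error at action:", e)

# Eager model: the same bad path fails at creation time:
try:
    EagerReader(bad_path)
except FileNotFoundError as e:
    print("eager: error at creation:", e)
```

In real PySpark the analogue of the eager check is work done on the driver when the DataFrame is defined (e.g. resolving the path during schema inference), while the lazy case defers that work until an action runs the plan.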

