<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Increase stack size Databricks in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/increase-stack-size-databricks/m-p/71922#M34437</link>
    <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/9"&gt;@Retired_mod&lt;/a&gt;&amp;nbsp;,&lt;/P&gt;&lt;P&gt;Thanks for your response. I tried this and unfortunately I could not get it to work.&lt;/P&gt;&lt;P&gt;When I set spark.databricks.driver.maxReplOutputLength to unlimited in the cluster configurations, I got this error message when running in the Notebook: &lt;EM&gt;Failure starting repl. Try detaching and re-attaching the notebook&lt;/EM&gt;. I tried detaching and re-attaching the cluster and continued to get the same message. Looking into it more, it looks like it has to be set to an integer value. I also tried this on the web terminal and I continued to get the segmentation fault error.&lt;/P&gt;&lt;P&gt;Next, I tried setting spark.databricks.driver.maxReplOutputLength to a very high number (e.g. 500000000) and received the same segmentation fault error when running it in the Notebook and web terminal.&lt;/P&gt;&lt;P&gt;Do you have any other ideas of things I could try?&lt;/P&gt;</description>
    <pubDate>Thu, 06 Jun 2024 15:45:14 GMT</pubDate>
    <dc:creator>tgen</dc:creator>
    <dc:date>2024-06-06T15:45:14Z</dc:date>
    <item>
      <title>Increase stack size Databricks</title>
      <link>https://community.databricks.com/t5/data-engineering/increase-stack-size-databricks/m-p/71492#M34325</link>
      <description>&lt;P&gt;Hi everyone&lt;/P&gt;&lt;P&gt;I'm currently running a shell script in a notebook, and I'm encountering a segmentation fault. This is due to the stack size limitation. I'd like to increase the stack size using ulimit -s unlimited, but I'm facing issues with setting this limit in the notebook environment.&lt;/P&gt;&lt;P&gt;I am using:&lt;/P&gt;&lt;DIV class=""&gt;&lt;SPAN class=""&gt;2-12 Workers&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN class=""&gt;&lt;SPAN&gt;256-1,536&amp;nbsp;GB Memory&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;64-384&amp;nbsp;Cores&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class=""&gt;&lt;SPAN class=""&gt;1 Driver&lt;/SPAN&gt;&lt;SPAN class=""&gt;256&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;GB Memory,&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;64&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;Cores&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class=""&gt;&lt;SPAN class=""&gt;Runtime&lt;/SPAN&gt;&lt;SPAN class=""&gt;15.2.x-scala2.12&lt;/SPAN&gt;&lt;/DIV&gt;&lt;P&gt;Could anyone provide guidance on how to properly increase the stack size for my shell script using Notebooks in Databricks? Any tips or alternative solutions to avoid the segmentation fault would also be greatly appreciated.&lt;/P&gt;</description>
      <pubDate>Mon, 03 Jun 2024 17:34:57 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/increase-stack-size-databricks/m-p/71492#M34325</guid>
      <dc:creator>tgen</dc:creator>
      <dc:date>2024-06-03T17:34:57Z</dc:date>
    </item>
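    <!--
    Editor's note: the question above asks how to raise the stack limit from a
    notebook. A minimal sketch, assuming the script runs from a %sh cell:
    ulimit changes apply only to the current shell and its child processes, and
    each %sh cell is a fresh shell, so the limit must be raised in the same
    cell that runs the script. On most Linux hosts the default hard stack
    limit is unlimited, so raising the soft limit usually succeeds.

    ```shell
    # Show the current soft and hard stack limits (KB), then raise the
    # soft limit for this shell and every process it launches.
    ulimit -Ss             # soft limit, typically 8192 KB by default
    ulimit -Hs             # hard ceiling; often "unlimited" on Linux
    ulimit -s unlimited    # raise the soft limit (fails if above the hard ceiling)
    ulimit -s              # confirm the new soft limit
    # ./my_script.sh       # hypothetical script name; run it in this same cell
    ```

    If the hard limit is finite, `ulimit -s unlimited` will fail; in that case
    pick a concrete value at or below the hard limit, e.g. `ulimit -s 65536`.
    -->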
    <item>
      <title>Re: Increase stack size Databricks</title>
      <link>https://community.databricks.com/t5/data-engineering/increase-stack-size-databricks/m-p/71922#M34437</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/9"&gt;@Retired_mod&lt;/a&gt;&amp;nbsp;,&lt;/P&gt;&lt;P&gt;Thanks for your response. I tried this and unfortunately I could not get it to work.&lt;/P&gt;&lt;P&gt;When I set spark.databricks.driver.maxReplOutputLength to unlimited in the cluster configurations, I got this error message when running in the Notebook: &lt;EM&gt;Failure starting repl. Try detaching and re-attaching the notebook&lt;/EM&gt;. I tried detaching and re-attaching the cluster and continued to get the same message. Looking into it more, it looks like it has to be set to an integer value. I also tried this on the web terminal and I continued to get the segmentation fault error.&lt;/P&gt;&lt;P&gt;Next, I tried setting spark.databricks.driver.maxReplOutputLength to a very high number (e.g. 500000000) and received the same segmentation fault error when running it in the Notebook and web terminal.&lt;/P&gt;&lt;P&gt;Do you have any other ideas of things I could try?&lt;/P&gt;</description>
      <pubDate>Thu, 06 Jun 2024 15:45:14 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/increase-stack-size-databricks/m-p/71922#M34437</guid>
      <dc:creator>tgen</dc:creator>
      <dc:date>2024-06-06T15:45:14Z</dc:date>
    </item>
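    <!--
    Editor's note: the reply above found that
    spark.databricks.driver.maxReplOutputLength rejects the value "unlimited"
    and must be an integer. A sketch of the cluster Spark config line
    (Compute > Advanced options > Spark), using the integer value tried in the
    thread:

    ```
    spark.databricks.driver.maxReplOutputLength 500000000
    ```

    As the thread reports, this setting caps REPL output length only; it does
    not affect the process stack limit, so it is not expected to resolve the
    segmentation fault on its own.
    -->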
  </channel>
</rss>

