<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Python Spark Job - error: job failed with error message The output of the notebook is too large. in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/python-spark-job-error-job-failed-with-error-message-the-output/m-p/24525#M17055</link>
    <description>&lt;P&gt;Hi Databricks experts. I am currently facing a problem with a submitted job run on Azure Databricks. Any help on this is very welcome. See below for details:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;B&gt;Problem Description:&lt;/B&gt;&lt;/P&gt;&lt;P&gt;I submitted a Python Spark task via the Databricks CLI (v0.16.4) to the Azure Databricks REST API (v2.0) to run on a new job cluster. See the attached job.json for the cluster configuration. The job runs successfully and all outputs are generated as expected. Despite that, the job fails with an error message saying that "The output of the notebook is too large".&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;My questions regarding this problem are:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;- Why is a job submitted as a Spark Python task displaying an error message related to notebook tasks?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;- Why is the job failing even though the log output does not exceed the limit? (See below for details)&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;B&gt;What I expected to see:&lt;/B&gt;&lt;/P&gt;&lt;P&gt;Successful completion of the job with no errors&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;B&gt;What I saw:&lt;/B&gt;&lt;/P&gt;&lt;P&gt;The job failed with an error message displaying "Run result unavailable: job failed with error message The output of the notebook is too large."&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;B&gt;Steps already taken:&lt;/B&gt;&lt;/P&gt;&lt;P&gt;1. Consulted the Azure and Databricks documentation for a possible error cause. See:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/databricks/kb/jobs/job-cluster-limit-nb-output" target="_blank"&gt;https://docs.microsoft.com/en-us/azure/databricks/kb/jobs/job-cluster-limit-nb-output&lt;/A&gt;&lt;/LI&gt;&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/databricks/jobs#output-size-limits" target="_blank"&gt;https://docs.microsoft.com/en-us/azure/databricks/jobs#output-size-limits&lt;/A&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;According to the documentation, this error occurs if the stdout logs exceed 20 MB.&lt;/P&gt;&lt;P&gt;Actual stdout log output size: 1.8 MB&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;2. Raised the py4j log level to ERROR to reduce stdout log output:&lt;/P&gt;&lt;PRE&gt;&lt;CODE&gt;logging.getLogger("py4j.java_gateway").setLevel(logging.ERROR)&lt;/CODE&gt;&lt;/PRE&gt;&lt;P&gt;Reduced stdout log output size: 390 KB&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;3. Used log4j to write application logs&lt;/P&gt;&lt;P&gt;&lt;/P&gt;</description>
    <pubDate>Mon, 28 Mar 2022 10:19:31 GMT</pubDate>
    <dc:creator>lukas_vlk</dc:creator>
    <dc:date>2022-03-28T10:19:31Z</dc:date>
    <item>
      <title>Python Spark Job - error: job failed with error message The output of the notebook is too large.</title>
      <link>https://community.databricks.com/t5/data-engineering/python-spark-job-error-job-failed-with-error-message-the-output/m-p/24525#M17055</link>
      <description>&lt;P&gt;Hi Databricks experts. I am currently facing a problem with a submitted job run on Azure Databricks. Any help on this is very welcome. See below for details:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;B&gt;Problem Description:&lt;/B&gt;&lt;/P&gt;&lt;P&gt;I submitted a Python Spark task via the Databricks CLI (v0.16.4) to the Azure Databricks REST API (v2.0) to run on a new job cluster. See the attached job.json for the cluster configuration. The job runs successfully and all outputs are generated as expected. Despite that, the job fails with an error message saying that "The output of the notebook is too large".&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;My questions regarding this problem are:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;- Why is a job submitted as a Spark Python task displaying an error message related to notebook tasks?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;- Why is the job failing even though the log output does not exceed the limit? (See below for details)&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;B&gt;What I expected to see:&lt;/B&gt;&lt;/P&gt;&lt;P&gt;Successful completion of the job with no errors&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;B&gt;What I saw:&lt;/B&gt;&lt;/P&gt;&lt;P&gt;The job failed with an error message displaying "Run result unavailable: job failed with error message The output of the notebook is too large."&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;B&gt;Steps already taken:&lt;/B&gt;&lt;/P&gt;&lt;P&gt;1. Consulted the Azure and Databricks documentation for a possible error cause. See:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/databricks/kb/jobs/job-cluster-limit-nb-output" target="_blank"&gt;https://docs.microsoft.com/en-us/azure/databricks/kb/jobs/job-cluster-limit-nb-output&lt;/A&gt;&lt;/LI&gt;&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/databricks/jobs#output-size-limits" target="_blank"&gt;https://docs.microsoft.com/en-us/azure/databricks/jobs#output-size-limits&lt;/A&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;According to the documentation, this error occurs if the stdout logs exceed 20 MB.&lt;/P&gt;&lt;P&gt;Actual stdout log output size: 1.8 MB&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;2. Raised the py4j log level to ERROR to reduce stdout log output:&lt;/P&gt;&lt;PRE&gt;&lt;CODE&gt;logging.getLogger("py4j.java_gateway").setLevel(logging.ERROR)&lt;/CODE&gt;&lt;/PRE&gt;&lt;P&gt;Reduced stdout log output size: 390 KB&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;3. Used log4j to write application logs&lt;/P&gt;&lt;P&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 28 Mar 2022 10:19:31 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/python-spark-job-error-job-failed-with-error-message-the-output/m-p/24525#M17055</guid>
      <dc:creator>lukas_vlk</dc:creator>
      <dc:date>2022-03-28T10:19:31Z</dc:date>
    </item>
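A minimal sketch of step 2 from the question above (quieting the py4j gateway logger). This uses only the Python standard library's logging module, so it runs anywhere; on a real cluster it would go at the top of the submitted script, before Spark starts producing gateway chatter:

```python
import logging

# Raise the py4j gateway logger's threshold so routine Java-gateway
# INFO/DEBUG records no longer land in the driver's stdout log.
logging.getLogger("py4j.java_gateway").setLevel(logging.ERROR)

# The effective level is now ERROR: ERROR records still pass,
# INFO and below are dropped.
level = logging.getLogger("py4j.java_gateway").getEffectiveLevel()
```

Note this only reduces stdout volume; as the thread shows, the poster's output was already well under the documented 20 MB limit, so this alone did not resolve the error.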
    <item>
      <title>Re: Python Spark Job - error: job failed with error message The output of the notebook is too large.</title>
      <link>https://community.databricks.com/t5/data-engineering/python-spark-job-error-job-failed-with-error-message-the-output/m-p/24526#M17056</link>
      <description>&lt;P&gt;Output is usually related to print(), collect(), etc.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;The documentation you mentioned includes a Spark config setting that disables stdout output entirely (spark.databricks.driver.disableScalaOutput true). I know that is not what you want to use, but it might help diagnose whether the problem lies with the logs or with the script output.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Not many people use spark_python_task; almost everyone uses notebooks (possibly together with files in Repos or a wheel), so someone from inside Databricks may need to help.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 28 Mar 2022 13:32:49 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/python-spark-job-error-job-failed-with-error-message-the-output/m-p/24526#M17056</guid>
      <dc:creator>Hubert-Dudek</dc:creator>
      <dc:date>2022-03-28T13:32:49Z</dc:date>
    </item>
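The diagnostic setting suggested in the reply above belongs in the job's cluster spec. Below is a hypothetical fragment of such a job.json for a Jobs API 2.0 submission, expressed as a Python dict; the runtime version and node type are assumptions for illustration, not values from the thread:

```python
# Hypothetical new_cluster block with the suggested spark_conf entry.
# disableScalaOutput suppresses the driver's command output entirely,
# which helps tell a log-volume problem apart from a script-output problem.
new_cluster = {
    "spark_version": "9.1.x-scala2.12",   # assumption: not stated in the thread
    "node_type_id": "Standard_DS3_v2",    # assumption: any Azure node type
    "num_workers": 0,                     # the poster later says they run 0 executors
    "spark_conf": {
        "spark.databricks.driver.disableScalaOutput": "true",
    },
}
```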
    <item>
      <title>Re: Python Spark Job - error: job failed with error message The output of the notebook is too large.</title>
      <link>https://community.databricks.com/t5/data-engineering/python-spark-job-error-job-failed-with-error-message-the-output/m-p/24527#M17057</link>
      <description>&lt;P&gt;Thanks for the answer &lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;After writing the question, I tested using "spark.databricks.driver.disableScalaOutput": "true". Unfortunately, this did not solve the problem.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Regarding "collect()": we are running the job with 0 executors, as we only use Spark to load some Parquet datasets that are then processed in Python. We are, however, using "spark.sql.execution.arrow.pyspark.enabled": "true" to improve performance when converting the Spark DataFrames to pandas. Increasing "spark.driver.memory" and "spark.driver.maxResultSize" did not help either.&lt;/P&gt;</description>
      <pubDate>Mon, 28 Mar 2022 14:44:51 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/python-spark-job-error-job-failed-with-error-message-the-output/m-p/24527#M17057</guid>
      <dc:creator>lukas_vlk</dc:creator>
      <dc:date>2022-03-28T14:44:51Z</dc:date>
    </item>
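The settings the reply above reports trying, collected as a spark_conf fragment. The thread confirms the Arrow flag's value; the two memory sizes are hypothetical placeholders, since the poster gives no concrete numbers:

```python
# Driver-only setup described in the reply: Spark only loads Parquet
# datasets, the rest runs in plain Python, and Arrow speeds up the
# Spark-DataFrame-to-pandas conversion (DataFrame.toPandas()).
spark_conf = {
    "spark.sql.execution.arrow.pyspark.enabled": "true",  # from the thread
    "spark.driver.memory": "16g",           # hypothetical; increased without effect
    "spark.driver.maxResultSize": "8g",     # hypothetical; increased without effect
}
```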
    <item>
      <title>Re: Python Spark Job - error: job failed with error message The output of the notebook is too large.</title>
      <link>https://community.databricks.com/t5/data-engineering/python-spark-job-error-job-failed-with-error-message-the-output/m-p/24528#M17058</link>
      <description>&lt;P&gt;Without any further changes on my side, the error has not occurred since 29.03.2022.&lt;/P&gt;</description>
      <pubDate>Wed, 30 Mar 2022 09:42:20 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/python-spark-job-error-job-failed-with-error-message-the-output/m-p/24528#M17058</guid>
      <dc:creator>lukas_vlk</dc:creator>
      <dc:date>2022-03-30T09:42:20Z</dc:date>
    </item>
  </channel>
</rss>

