<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: org.apache.spark.sql.AnalysisException: Undefined function: 'MAX' in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/org-apache-spark-sql-analysisexception-undefined-function-max/m-p/83068#M36836</link>
    <description>&lt;P&gt;Did you find the solution?&lt;/P&gt;</description>
    <pubDate>Thu, 15 Aug 2024 09:26:31 GMT</pubDate>
    <dc:creator>iwxshubham</dc:creator>
    <dc:date>2024-08-15T09:26:31Z</dc:date>
    <item>
      <title>org.apache.spark.sql.AnalysisException: Undefined function: 'MAX'</title>
      <link>https://community.databricks.com/t5/data-engineering/org-apache-spark-sql-analysisexception-undefined-function-max/m-p/27538#M19404</link>
      <description>&lt;P&gt;&lt;/P&gt;
&lt;P&gt;I am trying to create a JAR for an Azure Databricks job, but some code that works in the notebook interface does not work when called from the library through a job. The strange part is that the job completes its first run successfully but fails on every subsequent run. I have to restart my cluster to get it to run again, and then it fails once more on the second run.&lt;/P&gt;
&lt;P&gt;I have created a temporary view on a DataFrame:&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;val df = spark.read.parquet(path)
df.createOrReplaceTempView("table1")&lt;/CODE&gt;&lt;/PRE&gt;
&lt;P&gt;However, when I go to query the view with an aggregate function it yields an error:&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;val get_max_id_array = spark.sql("SELECT MAX(%s) FROM table1".format(get_id_column_array(0))).first()&lt;/CODE&gt;&lt;/PRE&gt;
&lt;P&gt;Error:&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;ERROR Uncaught throwable from user code: org.apache.spark.sql.AnalysisException: Undefined function: 'MAX'. This function is neither a registered temporary function nor a permanent function registered in the database 'default'.; line 1 pos 7
&lt;/CODE&gt;&lt;/PRE&gt; 
&lt;P&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 18 Nov 2019 20:59:11 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/org-apache-spark-sql-analysisexception-undefined-function-max/m-p/27538#M19404</guid>
      <dc:creator>TylerTamasaucka</dc:creator>
      <dc:date>2019-11-18T20:59:11Z</dc:date>
    </item>
    <item>
      <title>Re: org.apache.spark.sql.AnalysisException: Undefined function: 'MAX'</title>
      <link>https://community.databricks.com/t5/data-engineering/org-apache-spark-sql-analysisexception-undefined-function-max/m-p/27539#M19405</link>
      <description>&lt;P&gt;&lt;/P&gt;&lt;P&gt;Hi @Tyler Tamasauckas,&lt;/P&gt;&lt;P&gt;Please try as max(df("column_name")) please have look at below blog post regarding max function&lt;/P&gt;&lt;P&gt;&lt;A href="https://www.programcreek.com/scala/org.apache.spark.sql.functions.max" target="_blank"&gt;https://www.programcreek.com/scala/org.apache.spark.sql.functions.max&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 19 Nov 2019 07:15:12 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/org-apache-spark-sql-analysisexception-undefined-function-max/m-p/27539#M19405</guid>
      <dc:creator>shyam_9</dc:creator>
      <dc:date>2019-11-19T07:15:12Z</dc:date>
    </item>
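The reply above suggests the DataFrame API, which bypasses the SQL parser's function lookup that raised the AnalysisException. A minimal sketch of that approach (the parquet path and the id column name are hypothetical; this needs a live SparkSession, e.g. inside a Databricks job, so it is a sketch rather than a standalone program):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.max

object MaxViaDataFrameApi {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().getOrCreate()
    val df = spark.read.parquet("path/to/data") // hypothetical path

    // Aggregate through the DataFrame API instead of spark.sql("SELECT MAX(...)"),
    // so no function-registry lookup by name is involved.
    val maxId = df.agg(max(df("id"))).first()
    println(maxId)
  }
}
```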
    <item>
      <title>Re: org.apache.spark.sql.AnalysisException: Undefined function: 'MAX'</title>
      <link>https://community.databricks.com/t5/data-engineering/org-apache-spark-sql-analysisexception-undefined-function-max/m-p/27540#M19406</link>
      <description>&lt;P&gt;Hi @Tyler Tamasauckas​&amp;nbsp;, &lt;/P&gt;&lt;P&gt;I was also facing same issue with the sql functions 'upper' and ‘hash’. &lt;/P&gt;&lt;P&gt;In the jar we have to call SparkSession.builder().getOrCreate() or SparkContext.getOrCreate() API to get the spark/sparkcontext instance.&lt;/P&gt;&lt;P&gt;In the jar if we use object and main() method approach, upon using for the first time it works fine, later on it is somehow .. strangely losing the instance. Don't know the exact reason for that. &lt;/P&gt;&lt;P&gt;&lt;B&gt;The work around is to use “object .. extends App” approach&lt;/B&gt; in the jar, then it is working.&lt;/P&gt;&lt;P&gt;The App trait approach is taking 10 seconds more time when compared to object with main method. This is for the first time only, that too for the first activity. It is because the App trait uses delayed initialization feature. Applies to all Scala Applications.&lt;/P&gt;&lt;P&gt;If we still need to use main method approach, define spark instance as implicit and use that implicit wherever we use that instance.&lt;/P&gt;&lt;P&gt;e.g. &lt;/P&gt;&lt;P&gt;object SomeName {&lt;/P&gt;&lt;P&gt;def UserDefinedMethod(query:String)(implicit spark:SparkSession) = {spark.sql(query)} // This UserDefinedMethod gets spark implicitly.&lt;/P&gt;&lt;P&gt; def main(args: Array[String]): Unit = {&lt;/P&gt;&lt;P&gt; implicit val spark = SparkSession.builder().getOrCreate()&lt;/P&gt;&lt;P&gt; spark…&lt;/P&gt;&lt;P&gt; }&lt;/P&gt;&lt;P&gt;}&lt;/P&gt;&lt;P&gt;Note: Object extends App will get the arguments from Scala 2.9 onward.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 27 Feb 2020 11:50:53 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/org-apache-spark-sql-analysisexception-undefined-function-max/m-p/27540#M19406</guid>
      <dc:creator>omprakash_scala</dc:creator>
      <dc:date>2020-02-27T11:50:53Z</dc:date>
    </item>
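The implicit-parameter wiring described above can be exercised without Spark itself. A minimal, runnable sketch with a stand-in Session class (all names here are hypothetical, invented for illustration) shows how the implicit value declared in main reaches the helper method:

```scala
// Stand-in for SparkSession, so the implicit wiring can be shown without Spark.
class Session(val name: String) {
  def sql(query: String): String = s"[$name] $query"
}

object ImplicitSketch {
  // The helper picks up the Session from implicit scope, like the
  // implicit SparkSession in the post above.
  def runQuery(query: String)(implicit session: Session): String = session.sql(query)

  def main(args: Array[String]): Unit = {
    implicit val session: Session = new Session("job")
    // No session argument is passed explicitly; the implicit val supplies it.
    println(runQuery("SELECT MAX(id) FROM table1")) // prints "[job] SELECT MAX(id) FROM table1"
  }
}
```

The design point is that every helper declares its dependency on the session in its signature instead of reaching for a cached global, which is what reportedly went stale between job runs.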
    <item>
      <title>Re: org.apache.spark.sql.AnalysisException: Undefined function: 'MAX'</title>
      <link>https://community.databricks.com/t5/data-engineering/org-apache-spark-sql-analysisexception-undefined-function-max/m-p/27541#M19407</link>
      <description>&lt;P&gt;Hi, @omprakash.scala@gmail.com​&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Could you please tell more about the issue you had and its solution?&lt;/P&gt;&lt;P&gt;We now have a similar problem, a job failed on the second run with the exception "Undefined function: to_unix_timestamp. This function is neither a built-in/temporary function..." and the only fix is to restart the cluster, I tried to change my main class to "object ... extends App" approach but it still didn't work.&lt;/P&gt;&lt;P&gt;I searched over the internet and found this post is the only possible clue, looking forward for your response.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thanks,&lt;/P&gt;&lt;P&gt;Chen&lt;/P&gt;</description>
      <pubDate>Mon, 16 May 2022 13:54:08 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/org-apache-spark-sql-analysisexception-undefined-function-max/m-p/27541#M19407</guid>
      <dc:creator>Windoze</dc:creator>
      <dc:date>2022-05-16T13:54:08Z</dc:date>
    </item>
    <item>
      <title>Re: org.apache.spark.sql.AnalysisException: Undefined function: 'MAX'</title>
      <link>https://community.databricks.com/t5/data-engineering/org-apache-spark-sql-analysisexception-undefined-function-max/m-p/27542#M19408</link>
      <description>&lt;P&gt;I am facing similar issue when trying to use from_utc_timestamp function. I am able to call the function from databricks notebook but when I use the same function inside my java jar and running as a job in databricks, it is giving below error. &lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;AnalysisException: Undefined function: from_utc_timestamp. This function is neither a built-in/temporary function, nor a persistent function that is qualified as spark_catalog.default.from_utc_timestamp.;&lt;/P&gt;</description>
      <pubDate>Wed, 12 Oct 2022 07:57:55 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/org-apache-spark-sql-analysisexception-undefined-function-max/m-p/27542#M19408</guid>
      <dc:creator>skaja</dc:creator>
      <dc:date>2022-10-12T07:57:55Z</dc:date>
    </item>
    <item>
      <title>Re: org.apache.spark.sql.AnalysisException: Undefined function: 'MAX'</title>
      <link>https://community.databricks.com/t5/data-engineering/org-apache-spark-sql-analysisexception-undefined-function-max/m-p/83068#M36836</link>
      <description>&lt;P&gt;Did you find the solution?&lt;/P&gt;</description>
      <pubDate>Thu, 15 Aug 2024 09:26:31 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/org-apache-spark-sql-analysisexception-undefined-function-max/m-p/83068#M36836</guid>
      <dc:creator>iwxshubham</dc:creator>
      <dc:date>2024-08-15T09:26:31Z</dc:date>
    </item>
  </channel>
</rss>

