Error in SQL statement: AnalysisException: The query operator `UpdateCommandEdge` contains one or more unsupported expression types Aggregate, Window or Generate.

pc
New Contributor II

com.databricks.backend.common.rpc.DatabricksExceptions$SQLExecutionException: org.apache.spark.sql.AnalysisException:

The query operator `UpdateCommandEdge` contains one or more unsupported

expression types Aggregate, Window or Generate.

Invalid expressions: [avg(spark_catalog.eds_us_lake_cdp.cdp_job_log.Duration) OVER (PARTITION BY spark_catalog.eds_us_lake_cdp.cdp_job_log.job_id ORDER BY spark_catalog.eds_us_lake_cdp.cdp_job_log.Job_Start_Date_Time ASC NULLS FIRST RANGE BETWEEN INTERVAL '29' DAY PRECEDING AND CURRENT ROW), avg(spark_catalog.eds_us_lake_cdp.cdp_job_log.Duration)];

UpdateCommandEdge Delta[version=433, s3://tpc-aws-ted-dev-edpp-lake-cdp-us-east-1/eds_us_lake_cdp/cdp_job_log/delta], [Job_Run_Id#10299, Job_Id#10300, Batch_Run_Id#10301, Tidal_Job_No#10302, Source_Layer#10303, Source_Object_Location#10304, Source_Object_Name#10305, Target_Layer#10306, Target_Object_Location#10307, Target_Object_Name#10308, Status#10309, Status_Source#10310, Step_Control_Log#10311, Job_Scheduled_Date_Time#10312, Job_Start_Date_Time#10313, Job_End_Date_Time#10314, Error_Description#10315, Source_Record_Count#10316, Target_Record_Count#10317, MD5_HASH#10318, User_Id#10319, Created_Date_Time#10320, Duration#10321, round(avg(Duration#10321) windowspecdefinition(job_id#10300, Job_Start_Date_Time#10313 ASC NULLS FIRST, specifiedwindowframe(RangeFrame, -INTERVAL '29' DAY, currentrow$())), 2)]

+- SubqueryAlias spark_catalog.eds_us_lake_cdp.cdp_job_log

+- Relation eds_us_lake_cdp.cdp_job_log[Job_Run_Id#10299,Job_Id#10300,Batch_Run_Id#10301,Tidal_Job_No#10302,Source_Layer#10303,Source_Object_Location#10304,Source_Object_Name#10305,Target_Layer#10306,Target_Object_Location#10307,Target_Object_Name#10308,Status#10309,Status_Source#10310,Step_Control_Log#10311,Job_Scheduled_Date_Time#10312,Job_Start_Date_Time#10313,Job_End_Date_Time#10314,Error_Description#10315,Source_Record_Count#10316,Target_Record_Count#10317,MD5_HASH#10318,User_Id#10319,Created_Date_Time#10320,Duration#10321,Average_Run#10322] parquet

at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.failAnalysis(CheckAnalysis.scala:60)

at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.failAnalysis$(CheckAnalysis.scala:59)

at org.apache.spark.sql.catalyst.analysis.Analyzer.failAnalysis(Analyzer.scala:221)

at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis$2(CheckAnalysis.scala:623)

at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis$2$adapted(CheckAnalysis.scala:105)

at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:358)

at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis$1(CheckAnalysis.scala:105)

at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)

at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:80)

at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis(CheckAnalysis.scala:100)

at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis$(CheckAnalysis.scala:100)

at org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:221)

at org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$executeAndCheck$1(Analyzer.scala:275)

at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:331)

at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:272)

at org.apache.spark.sql.execution.QueryExecution.$anonfun$analyzed$1(QueryExecution.scala:128)

at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:80)

at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:268)

at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:265)

at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:968)

at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:265)

at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:129)

at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:126)

at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:118)

at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:103)

at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:968)

at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:101)

at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:803)

at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:968)

at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:798)

at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:695)

at com.databricks.backend.daemon.driver.SQLDriverLocal.$anonfun$executeSql$1(SQLDriverLocal.scala:91)

at scala.collection.immutable.List.map(List.scala:297)

at com.databricks.backend.daemon.driver.SQLDriverLocal.executeSql(SQLDriverLocal.scala:37)

at com.databricks.backend.daemon.driver.SQLDriverLocal.repl(SQLDriverLocal.scala:145)

at com.databricks.backend.daemon.driver.DriverLocal.$anonfun$execute$13(DriverLocal.scala:634)

at com.databricks.logging.Log4jUsageLoggingShim$.$anonfun$withAttributionContext$1(Log4jUsageLoggingShim.scala:33)

at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)

at com.databricks.logging.AttributionContext$.withValue(AttributionContext.scala:94)

at com.databricks.logging.Log4jUsageLoggingShim$.withAttributionContext(Log4jUsageLoggingShim.scala:31)

at com.databricks.logging.UsageLogging.withAttributionContext(UsageLogging.scala:205)

at com.databricks.logging.UsageLogging.withAttributionContext$(UsageLogging.scala:204)

at com.databricks.backend.daemon.driver.DriverLocal.withAttributionContext(DriverLocal.scala:59)

at com.databricks.logging.UsageLogging.withAttributionTags(UsageLogging.scala:240)

at com.databricks.logging.UsageLogging.withAttributionTags$(UsageLogging.scala:225)

at com.databricks.backend.daemon.driver.DriverLocal.withAttributionTags(DriverLocal.scala:59)

at com.databricks.backend.daemon.driver.DriverLocal.execute(DriverLocal.scala:611)

at com.databricks.backend.daemon.driver.DriverWrapper.$anonfun$tryExecutingCommand$1(DriverWrapper.scala:615)

at scala.util.Try$.apply(Try.scala:213)

at com.databricks.backend.daemon.driver.DriverWrapper.tryExecutingCommand(DriverWrapper.scala:607)

at com.databricks.backend.daemon.driver.DriverWrapper.executeCommandAndGetError(DriverWrapper.scala:526)

at com.databricks.backend.daemon.driver.DriverWrapper.executeCommand(DriverWrapper.scala:561)

at com.databricks.backend.daemon.driver.DriverWrapper.runInnerLoop(DriverWrapper.scala:431)

at com.databricks.backend.daemon.driver.DriverWrapper.runInner(DriverWrapper.scala:374)

at com.databricks.backend.daemon.driver.DriverWrapper.run(DriverWrapper.scala:225)

at java.lang.Thread.run(Thread.java:748)


4 REPLIES

Debayan
Esteemed Contributor III

Hi, aggregate and window expressions are not supported in an UPDATE statement, which is why the analyzer rejects the query. Could you also share what you are trying to achieve and some context about the environment?
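If the goal is to maintain Average_Run as a 30-day rolling average, one pattern that may work as a workaround is to compute the window function in a plain SELECT (where it is allowed) and apply the result with MERGE INTO. This is a sketch only, built from the table and column names visible in the plan above, and it assumes Job_Run_Id uniquely identifies a row; adjust the join key if it does not:

MERGE INTO eds_us_lake_cdp.cdp_job_log AS t
USING (
  -- window functions are allowed here, in an ordinary SELECT
  SELECT
    Job_Run_Id,  -- assumed to be the unique row key
    round(avg(Duration) OVER (
      PARTITION BY Job_Id
      ORDER BY Job_Start_Date_Time
      RANGE BETWEEN INTERVAL 29 DAY PRECEDING AND CURRENT ROW), 2) AS Average_Run
  FROM eds_us_lake_cdp.cdp_job_log
) AS s
ON t.Job_Run_Id = s.Job_Run_Id
WHEN MATCHED THEN UPDATE SET t.Average_Run = s.Average_Run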

pc
New Contributor II

update eds_us_lake_cdp.cdp_job_log set Average_Run = round(avg(Duration) OVER(partition by job_id ORDER BY Job_Start_Date_Time RANGE BETWEEN INTERVAL 29 DAY PRECEDING AND CURRENT ROW), 2)

I have this query but it's throwing an error. Is there anything we can do about it?

pc
New Contributor II

Spark version is 3.2.1

Anonymous
Not applicable

Hi @Pradeep Chauhan

Thank you for posting your question in our community! We are happy to assist you.

To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answers your question?

This will also help other community members who may have similar questions in the future. Thank you for your participation and let us know if you need any further assistance! 
