
java.util.NoSuchElementException: key not found

hitesh1
New Contributor III

Hello,

We are using Azure Databricks with a Standard DS14_v2 cluster on Runtime 9.1 LTS (Spark 3.1.2, Scala 2.12) and are frequently hitting the issue below when running our ETL pipeline. The failing operation performs several joins on Delta tables and writes the output into another Delta table.
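For context, the failing step looks roughly like the sketch below. All table names, column names, and the join key are hypothetical placeholders; the real job joins several Delta tables and writes the result with insertInto, which is where the exception surfaces (o74209.insertInto in the trace):

import org.apache.spark.sql.functions.{col, to_date}

// Hypothetical sketch of the failing pattern: Delta-table joins whose
// result is written into another Delta table via insertInto.
val orders    = spark.table("etl.orders")      // hypothetical table name
val customers = spark.table("etl.customers")   // hypothetical table name

val joined = orders
  .join(customers, Seq("customer_id"))         // hypothetical join key
  .withColumn("event_date", to_date(col("event_ts"), "yyyy-MM-dd"))

// The exception is thrown during this write; the target table must already exist.
joined.write.insertInto("etl.orders_enriched")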

A similar issue was reported earlier and fixed as part of Databricks Runtime 9.0 (unsupported): [SPARK-34000] ExecutorAllocationListener threw an exception java.util.NoSuchElementException - ASF...

Here is the documentation for that: Databricks Runtime 9.0 (Unsupported) - Azure Databricks | Microsoft Docs

Is the exception below unrelated to the one reported in the bug above, and should I file a new bug for it? We hit this issue every few runs of our daily job. Any help or pointers would be greatly appreciated.

Here is the exception with the stack trace. Because of the size limit, I have uploaded a file with the full stack trace:

An error occurred while calling o74209.insertInto.
: java.util.NoSuchElementException: key not found: Project [none#1 AS #0, none#4, none#18 AS #1, none#13, none#12, none#3, CASE WHEN isnull(none#19) THEN -25567 ELSE cast(gettimestamp(none#19, yyyy-MM-dd, Some(Etc/UTC), false) as date) END AS #2, none#20, none#0 AS #3, (none#3 = '') AS #4]
+- Relation[none#0,none#1,none#2,none#3,none#4,none#5,none#6,none#7,none#8,none#9,none#10,none#11,none#12,none#13,none#14,none#15,none#16,none#17,none#18,none#19,none#20,none#21,none#22,none#23,... 5 more fields] parquet
    at scala.collection.MapLike.default(MapLike.scala:235)
    at scala.collection.MapLike.default$(MapLike.scala:234)
    at scala.collection.AbstractMap.default(Map.scala:63)
    at scala.collection.MapLike.apply(MapLike.scala:144)
    at scala.collection.MapLike.apply$(MapLike.scala:143)
    at scala.collection.AbstractMap.apply(Map.scala:63)
    at com.databricks.sql.transaction.tahoe.stats.PrepareDeltaScan$$anonfun$prepareDeltaScanParallel$1.applyOrElse(PrepareDeltaScan.scala:229)
    at com.databricks.sql.transaction.tahoe.stats.PrepareDeltaScan$$anonfun$prepareDeltaScanParallel$1.applyOrElse(PrepareDeltaScan.scala:227)
    at org.apache.spark.sql.catalyst.plans.QueryPlan$$anon$2.apply(QueryPlan.scala:545)
    at org.apache.spark.sql.catalyst.plans.QueryPlan$$anon$2.apply(QueryPlan.scala:541)
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:127)
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:126)
    at org.apache.spark.sql.catalyst.plans.QueryPlan$$anon$2.applyOrElse(QueryPlan.scala:541)
    at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:484)
    at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:86)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:484)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:29)

Thanks & Regards,

Hitesh

1 REPLY

Aviral-Bhardwaj
Esteemed Contributor III

Hey man,

Please use these configurations in your cluster and it should work:

spark.sql.storeAssignmentPolicy LEGACY
spark.sql.parquet.binaryAsString true
spark.speculation false
spark.sql.legacy.timeParserPolicy LEGACY
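If you prefer to set the SQL-level options per session rather than in the cluster's Spark config, a minimal Scala sketch is below. Note that spark.speculation is a core Spark setting, not a SQL conf, so it generally needs to go in the cluster's Spark config at startup rather than being changed at runtime:

// Session-level equivalents of the three SQL configs above;
// runnable in a Databricks notebook where `spark` is predefined.
spark.conf.set("spark.sql.storeAssignmentPolicy", "LEGACY")
spark.conf.set("spark.sql.parquet.binaryAsString", "true")
spark.conf.set("spark.sql.legacy.timeParserPolicy", "LEGACY")
// spark.speculation: set it in the cluster's Spark config
// (Advanced Options > Spark) before the cluster starts.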

If it doesn't work, let me know what problem you are facing and we can discuss.
