Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Facing the issue below when connecting Event Hubs with Databricks; followed an earlier discussion on this but found no solution

guru1
New Contributor II

ERROR: Query termination received for [id=37bada03-131b-4fbb-8992-a427263fef2c, runId=cf3d7c18-780e-43ae-aed0-9daf2939b823], with exception: java.lang.IllegalArgumentException: Input byte array has wrong 4-byte ending unit
    at java.util.Base64$Decoder.decode0(Base64.java:704)
    at java.util.Base64$Decoder.decode(Base64.java:526)
    at java.util.Base64$Decoder.decode(Base64.java:549)
    at org.apache.spark.eventhubs.EventHubsUtils$.decrypt(EventHubsUtils.scala:169)
    at org.apache.spark.eventhubs.EventHubsConf$.toConf(EventHubsConf.scala:628)
    at org.apache.spark.sql.eventhubs.EventHubsSource.<init>(EventHubsSource.scala:84)
    at org.apache.spark.sql.eventhubs.EventHubsSourceProvider.createSource(EventHubsSourceProvider.scala:82)
    at org.apache.spark.sql.execution.datasources.DataSource.createSource(DataSource.scala:324)
    at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$1.$anonfun$applyOrElse$1(MicroBatchExecution.scala:138)
    at scala.collection.mutable.HashMap.getOrElseUpdate(HashMap.scala:86)
    at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$1.applyOrElse(MicroBatchExecution.scala:135)
    at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$1.applyOrElse(MicroBatchExecution.scala:133)
    at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:512)
    at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:99)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:512)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:31)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:268)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:264)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:31)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:31)
    at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$3(TreeNode.scala:517)
    at org.apache.spark.sql.catalyst.trees.UnaryLike.mapChildren(TreeNode.scala:1174)
    at org.apache.spark.sql.catalyst.trees.UnaryLike.mapChildren$(TreeNode.scala:1173)
    at org.apache.spark.sql.catalyst.plans.logical.OrderPreservingUnaryNode.mapChildren(LogicalPlan.scala:254)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:517)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:31)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:268)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:264)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:31)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:31)
    at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$3(TreeNode.scala:517)
    at org.apache.spark.sql.catalyst.trees.UnaryLike.mapChildren(TreeNode.scala:1174)
    at org.apache.spark.sql.catalyst.trees.UnaryLike.mapChildren$(TreeNode.scala:1173)
    at org.apache.spark.sql.catalyst.plans.logical.OrderPreservingUnaryNode.mapChildren(LogicalPlan.scala:254)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:517)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:31)
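For context on where this fails: the fourth frame shows the connector Base64-decoding the connection string inside EventHubsUtils.decrypt, so the exception typically means the string handed to EventHubsConf is not in the encoded form that this particular connector build expects. Below is a minimal sketch of the stream setup in question, assuming a Databricks-patched connector build that exposes EventHubsUtils.encrypt as the counterpart of the decrypt seen in the trace; all placeholder values are hypothetical:

```scala
import org.apache.spark.eventhubs.{ConnectionStringBuilder, EventHubsConf, EventHubsUtils, EventPosition}

// Placeholder namespace, hub, and key values — substitute your own.
val connectionString = ConnectionStringBuilder()
  .setNamespaceName("<namespace>")
  .setEventHubName("<eventhub-name>")
  .setSasKeyName("<key-name>")
  .setSasKey("<key>")
  .build

// The trace fails in EventHubsUtils.decrypt, which Base64-decodes this value.
// On builds that expect an encrypted string, wrap the plain connection string
// in EventHubsUtils.encrypt rather than passing it raw (assumption: your jar
// exposes encrypt alongside the decrypt shown in the trace).
val ehConf = EventHubsConf(EventHubsUtils.encrypt(connectionString))
  .setStartingPosition(EventPosition.fromEndOfStream)

val stream = spark.readStream
  .format("eventhubs")
  .options(ehConf.toMap)
  .load()
```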

1 ACCEPTED SOLUTION

Annapurna_Hiriy
Databricks Employee

The issue could be due to a mismatch between the Event Hubs Spark connector jar and the dependencies added; it is also possible that not all of the required dependencies were added.

Suggestions:

Using the azure-eventhubs-spark_2.12 connector jar along with the following dependencies should solve the issue (a Maven-coordinates sketch follows the list):

  • azure-eventhubs-3.2.0.jar
  • scala-java8-compat_2.12-0.9.1.jar
  • proton-j-0.33.6.jar
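If it helps, here is that jar set expressed as Maven coordinates in sbt syntax; the azure-eventhubs-spark version below is an assumption (not given in the post), so pick the release that matches your Databricks Runtime:

```scala
// build.sbt sketch of the jars listed above, as Maven coordinates.
// The connector version is a placeholder — match it to your runtime.
libraryDependencies ++= Seq(
  "com.microsoft.azure"    %% "azure-eventhubs-spark" % "2.3.18", // placeholder version
  "com.microsoft.azure"    %  "azure-eventhubs"       % "3.2.0",
  "org.scala-lang.modules" %% "scala-java8-compat"    % "0.9.1",
  "org.apache.qpid"        %  "proton-j"              % "0.33.6"
)
```

On Databricks you would normally attach these as cluster libraries using the same Maven coordinates rather than building with sbt; the point is that the connector and its transitive dependencies must come from one consistent release.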


2 REPLIES

Aviral-Bhardwaj
Esteemed Contributor III

Can you also share the driver logs? That way I can check and help you.

AviralBhardwaj

