Getting databricks-connect com.fasterxml.jackson.databind.exc.MismatchedInputException parse warning

Surajv
New Contributor III

Hi community,

I am getting the warning below when I run PySpark code for some of my use cases through databricks-connect.

Is this a critical warning, and does anyone know what it means?

Logs: 
INFO - WARN DatabricksConnectConf: Could not parse /root/.databricks-connect
INFO - com.fasterxml.jackson.databind.exc.MismatchedInputException: No content to map due to end-of-input
INFO - at [Source: (String)""; line: 1, column: 0]
INFO - at com.fasterxml.jackson.databind.exc.MismatchedInputException.from(MismatchedInputException.java:59)
INFO - at com.fasterxml.jackson.databind.ObjectMapper._initForReading(ObjectMapper.java:4765)
INFO - at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:4667)
INFO - at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3629)
INFO - at com.databricks.spark.util.DatabricksConnectConf$.liftedTree1$1(DatabricksConnectConf.scala:68)
INFO - at com.databricks.spark.util.DatabricksConnectConf$.jsonConfig$lzycompute(DatabricksConnectConf.scala:64)
INFO - at com.databricks.spark.util.DatabricksConnectConf$.jsonConfig(DatabricksConnectConf.scala:56)
INFO - at com.databricks.spark.util.DatabricksConnectConf$.getToken(DatabricksConnectConf.scala:83)
INFO - at com.databricks.sql.DatabricksSQLConf$.$anonfun$SPARK_SERVICE_TOKEN$1(DatabricksSQLConf.scala:2591)
INFO - at org.apache.spark.internal.config.ConfigEntryWithDefaultFunction.defaultValueString(ConfigEntry.scala:181)
INFO - at org.apache.spark.sql.internal.SQLConf.$anonfun$getAllDefaultConfs$1(SQLConf.scala:5478)
INFO - at scala.collection.Iterator.foreach(Iterator.scala:943)
INFO - at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
INFO - at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
INFO - at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
INFO - at org.apache.spark.sql.internal.SQLConf.getAllDefaultConfs(SQLConf.scala:5476)
INFO - at org.apache.spark.sql.internal.SQLConf.recordNonDefaultConfs(SQLConf.scala:5490)
INFO - at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withCustomExecutionEnv$8(SQLExecution.scala:191)
INFO - at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:430)
INFO - at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withCustomExecutionEnv$1(SQLExecution.scala:187)
INFO - at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1038)
INFO - at org.apache.spark.sql.execution.SQLExecution$.withCustomExecutionEnv(SQLExecution.scala:129)
INFO - at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:380)
INFO - at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.$anonfun$applyOrElse$1(QueryExecution.scala:237)
INFO - at org.apache.spark.sql.execution.QueryExecution.org$apache$spark$sql$execution$QueryExecution$$withMVTagsIfNecessary(QueryExecution.scala:220)
INFO - at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.applyOrElse(QueryExecution.scala:233)
INFO - at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.applyOrElse(QueryExecution.scala:226)
INFO - at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:519)
INFO - at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:106)
INFO - at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:519)
INFO - at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:32)
INFO - at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:316)
INFO - at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:312)
INFO - at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:32)
INFO - at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:495)
INFO - at org.apache.spark.sql.execution.QueryExecution.$anonfun$eagerlyExecuteCommands$1(QueryExecution.scala:226)
INFO - at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:372)
INFO - at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:226)
INFO - at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:180)
INFO - at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:171)
INFO - at org.apache.spark.sql.Dataset.<init>(Dataset.scala:250)
INFO - at org.apache.spark.sql.Dataset$.$anonfun$ofRows$1(Dataset.scala:101)
INFO - at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1038)
INFO - at org.apache.spark.sql.SparkSession.$anonfun$withActiveAndFrameProfiler$1(SparkSession.scala:1045)
INFO - at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:24)
INFO - at org.apache.spark.sql.SparkSession.withActiveAndFrameProfiler(SparkSession.scala:1045)
INFO - at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:98)
INFO - at org.apache.spark.sql.Dataset.$anonfun$org$apache$spark$sql$Dataset$$withPlan$1(Dataset.scala:4414)
INFO - at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:24)
INFO - at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$withPlan(Dataset.scala:4414)
INFO - at org.apache.spark.sql.Dataset.withPlan(Dataset.scala:4408)
INFO - at org.apache.spark.sql.Dataset.createOrReplaceTempView(Dataset.scala:3858)
INFO - at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
INFO - at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
INFO - at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
INFO - at java.lang.reflect.Method.invoke(Method.java:498)
INFO - at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
INFO - at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:380)
INFO - at py4j.Gateway.invoke(Gateway.java:306)
INFO - at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
INFO - at py4j.commands.CallCommand.execute(CallCommand.java:79)
INFO - at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:195)
INFO - at py4j.ClientServerConnection.run(ClientServerConnection.java:115)
INFO - at java.lang.Thread.run(Thread.java:750)

1 REPLY

Kaniz
Community Manager

Hi @Surajv, the warning you're encountering is related to using Databricks Connect with PySpark. Here's what it means:

  1. Databricks Connect: Databricks Connect is a Python library that allows you to connect your local development environment to a Databricks cluster. It enables you to run Spark code locally while still leveraging the resources of a remote Databricks cluster.

  2. The Warning Message: The warning indicates that Databricks Connect failed to parse its configuration file at /root/.databricks-connect. The nested exception (No content to map due to end-of-input at line 1, column 0) means the parser was handed an empty string: the file exists but contains no content, so there is no JSON to read.

  3. Impact and Criticality:

    • The warning itself doesn’t necessarily indicate a critical problem. However, it does imply that Databricks Connect might not be configured correctly.
    • If you’re experiencing issues with your Spark code execution or if you’re unable to establish a connection to the Databricks cluster, this warning could be a contributing factor.
  4. Troubleshooting Steps:

    • Verify that /root/.databricks-connect exists and is not empty; given the end-of-input error above, an empty file is the most likely culprit here.
    • Re-run databricks-connect configure to regenerate the file with your workspace host, token, cluster ID, org ID, and port.
    • Run databricks-connect test to confirm the client can reach your cluster end to end.
    • To check the file from Python first, see the sketch after this list.

  5. Common Causes:

    • Incorrectly formatted JSON in the configuration file.
    • Missing required parameters (e.g., cluster URL, token, etc.).
    • Permissions issues preventing access to the file.
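
As promised in the troubleshooting steps, here is a minimal sketch for checking the config file from Python. It assumes the classic databricks-connect JSON layout (host, token, cluster_id, org_id, port) that databricks-connect configure writes; the path and key names are illustrative, so adjust them to your setup.

import json
from pathlib import Path

# Path databricks-connect reads its configuration from; this matches the
# /root/.databricks-connect path in the warning, so adjust it to your user.
conf_path = Path("/root/.databricks-connect")

if not conf_path.exists():
    print(f"{conf_path} does not exist - run: databricks-connect configure")
elif conf_path.stat().st_size == 0:
    # This is exactly the case the stack trace shows: Jackson is handed an
    # empty string and reports "No content to map due to end-of-input".
    print(f"{conf_path} is empty - re-run: databricks-connect configure")
else:
    try:
        conf = json.loads(conf_path.read_text())
    except json.JSONDecodeError as err:
        print(f"{conf_path} is not valid JSON: {err}")
    else:
        # Keys the classic databricks-connect client expects (assumed layout).
        for key in ("host", "token", "cluster_id", "org_id", "port"):
            if key not in conf:
                print(f"missing expected key: {key}")

Once the file parses cleanly, a quick smoke test confirms the connection. With the classic databricks-connect client, the local pyspark installation is redirected to the remote cluster, so a plain SparkSession is enough; this is a generic connectivity check, not your exact workload:

from pyspark.sql import SparkSession

# With databricks-connect installed and configured, this builder attaches
# to the remote Databricks cluster rather than starting a local Spark.
spark = SparkSession.builder.getOrCreate()

df = spark.range(10)   # tiny remote job as a connectivity check
print(df.count())      # prints 10 if the round trip works

If both checks pass but your job still logs the warning, make sure the job runs as the same user whose home directory holds the config file (the /root path in your logs suggests it runs as root).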

Good luck! 😊