databricks-connect error when executing Spark ML
11-04-2022 06:40 AM
I use databricks-connect, and Spark jobs that work with Spark DataFrames run fine. But when I trigger Spark ML code, I get errors.
For example, after executing this line from https://docs.databricks.com/_static/notebooks/gbt-regression.html:
pipelineModel = pipeline.fit(train)
22/11/04 09:28:15 ERROR Instrumentation: java.io.IOException: unexpected exception type
at java.io.ObjectStreamClass.throwMiscException(ObjectStreamClass.java:1750)
at java.io.ObjectStreamClass.invokeReadResolve(ObjectStreamClass.java:1280)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2196)
---------------------------
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
---------------------------
Caused by: java.lang.IllegalArgumentException: Illegal lambda deserialization
at scala.runtime.LambdaDeserializer$.makeCallSite$1(LambdaDeserializer.scala:89)
at scala.runtime.LambdaDeserializer$.deserializeLambda(LambdaDeserializer.scala:114)
at scala.runtime.LambdaDeserialize.deserializeLambda(LambdaDeserialize.java:38)
---------------------------
py4j.protocol.Py4JJavaError: An error occurred while calling o806.fit.
: java.io.IOException: unexpected exception type
at java.io.ObjectStreamClass.throwMiscException(ObjectStreamClass.java:1750)
at java.io.ObjectStreamClass.invokeReadResolve(ObjectStreamClass.java:1280)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2196)
---------------------------
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
---------------------------
Caused by: java.lang.IllegalArgumentException: Illegal lambda deserialization
at scala.runtime.LambdaDeserializer$.makeCallSite$1(LambdaDeserializer.scala:89)
at scala.runtime.LambdaDeserializer$.deserializeLambda(LambdaDeserializer.scala:114)
at scala.runtime.LambdaDeserialize.deserializeLambda(LambdaDeserialize.java:38)
Does anyone know how to fix it?
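For context, here is a minimal sketch of what pipeline and train look like, modeled loosely on the linked GBT notebook (the toy DataFrame and column names are illustrative, not the notebook's actual data):
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import GBTRegressor
from pyspark.sql.session import SparkSession

spark = SparkSession.builder.getOrCreate()

# Illustrative toy data: two features and a numeric label
rows = [(float(i), float(i % 3), 0.5 * i) for i in range(20)]
df = spark.createDataFrame(rows, ["x1", "x2", "label"])

assembler = VectorAssembler(inputCols=["x1", "x2"], outputCol="features")
gbt = GBTRegressor(labelCol="label", featuresCol="features")
pipeline = Pipeline(stages=[assembler, gbt])

train, test = df.randomSplit([0.7, 0.3], seed=42)
pipelineModel = pipeline.fit(train)  # fails over databricks-connect with the trace above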
Labels: Databricks-connect
11-11-2022 03:25 PM
Hi @Kaniz Fatma, I am using 10.4.
11-26-2022 03:59 PM
I'm encountering the exact same problem, also on databricks-connect 10.4.12. Our models in the production pipeline are doing fine because they are run through the Databricks UI, not databricks-connect. However, in our CI testing pipeline they are run with databricks-connect in Docker containers (using Concourse CI). The codebase is the same. When I run the same code manually on my local machine, connected to our cluster via databricks-connect, I hit the same problem as Troy.
In fact, I tried to run a very minimal random forest classifier and I STILL run into the same problem. Here is the code I use:
import numpy as np
import pandas as pd
from pyspark.ml.classification import RandomForestClassifier
from pyspark.ml.feature import VectorAssembler
from pyspark.sql.session import SparkSession

spark = SparkSession.builder.getOrCreate()

# 100 rows of random features plus a random binary label
data = spark.createDataFrame(
    pd.DataFrame({
        "feature_a": np.random.random(100),
        "feature_b": np.random.random(100),
        "feature_c": np.random.random(100),
        "label": np.random.choice([0, 1], 100),
    })
)

# Assemble the feature columns into a single vector column
vector_assembler = VectorAssembler(
    inputCols=[f"feature_{n}" for n in ["a", "b", "c"]],
    outputCol="features",
)
parsed_data = (
    vector_assembler
    .transform(data)
    .drop(*[f"feature_{n}" for n in ["a", "b", "c"]])
)

classifier = RandomForestClassifier()
model = classifier.fit(parsed_data)
# Error thrown here, very similar to Troy's.
I'm attaching my error output as well.
12-01-2022 08:05 AM
@Kaniz Fatma any pointers at all?
03-08-2023 12:16 AM
Hello,
Same problem here in France.
@Kaniz Fatma Can we have some answers?
03-09-2023 01:37 AM
Good morning,
For information, the error is not related to a limitation of databricks-connect.
After various tests, it turned out that, in my case, I needed to update the libraries in the venv used with databricks-connect.
Here are the Python library updates I made:
- databricks-connect from 10.4.12 to 10.4.21
- databricks-cli from 0.17.3 to 0.17.4
- mlflow from 1.26.1 to 2.2.1
- protobuf from 3.20.0 to 3.20.3
Note that I work with a 10.4 LTS cluster.
After these updates, the code example above works fine in IntelliJ with databricks-connect.
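In case it helps, here is a quick way to confirm the versions in your active venv match the list above (a sketch assuming Python 3.8+, where importlib.metadata is in the standard library):
# Compare installed versions against the working combination above
from importlib.metadata import version

for pkg in ["databricks-connect", "databricks-cli", "mlflow", "protobuf"]:
    print(pkg, version(pkg))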
03-10-2023 03:00 AM
For information, upgrading the Python libraries does not resolve all problems.
This code works fine on Databricks in a notebook:
import mlflow
model = mlflow.spark.load_model('runs:/cb6ff62587a0404cabeadd47e4c9408a/model')
The same code fails in IntelliJ with databricks-connect.
Does anyone have a solution?