Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

ModuleNotFoundError when using foreachBatch on runtime 14 with Unity

mjar
New Contributor II

Recently we ran into an issue with foreachBatch after upgrading our Databricks cluster on Azure to runtime version 14 (Spark 3.5) with Shared access mode and Unity Catalog.
The issue manifested as a ModuleNotFoundError thrown whenever we call a function from foreachBatch that uses an object which is not declared within the scope of that function, but in another module.

SparkConnectGrpcException: (org.apache.spark.api.python.StreamingPythonRunner$StreamingPythonRunnerInitializationException)
[STREAMING_PYTHON_RUNNER_INITIALIZATION_FAILURE] Streaming Runner initialization failed, returned -2.
Cause: Traceback (most recent call last):
  File "/databricks/spark/python/pyspark/serializers.py", line 193, in _read_with_length
    return self.loads(obj)
  File "/databricks/spark/python/pyspark/serializers.py", line 571, in loads
    return cloudpickle.loads(obj, encoding=encoding)
ModuleNotFoundError: No module named 'foreach_batch_test'
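For context, a plausible mechanism behind this error (an assumption on my part, not confirmed by Databricks): Spark ships the foreachBatch function to the worker via cloudpickle, and classes defined in a regular module are pickled by *reference* (module name + class name), so the defining module must be importable on the worker side. The stdlib pickle module shows the same by-reference behaviour and can illustrate the failure mode locally (SomeConfiguration here is a hypothetical stand-in defined at module scope):

```python
import pickle

# Hypothetical stand-in for the configuration class, defined at module scope.
class SomeConfiguration:
    def __init__(self, name):
        self.name = name

payload = pickle.dumps(SomeConfiguration("Johnny"))

# The payload stores a *reference* (class name) rather than the class body:
contains_name = b"SomeConfiguration" in payload

# Unpickling therefore requires the defining module/class to be importable on
# the receiving side. Simulate a worker that cannot resolve it:
del SomeConfiguration
try:
    pickle.loads(payload)
    restore_ok = True
except AttributeError:
    # Same failure family as the worker-side ModuleNotFoundError above.
    restore_ok = False
```

If that explanation holds, the worker simply could not import the notebook/module (here named foreach_batch_test) that defined the captured class.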

So, after banging my head against the wall for some time, I finally concluded that this could be a bug in Databricks.
Then, while I was compiling this report, everything started to work again today, without any change on our side.
Can anyone provide some details about what happened?
Cheers, thanks

4 REPLIES

daniel_sahal
Esteemed Contributor

@mjar 
Which DBR are you using, exactly?
To use foreachBatch on shared clusters you need at least 14.2.

mjar
New Contributor II

Hi @daniel_sahal, thanks for getting back.
We are using DBR 14.3, Spark 3.5.0, Scala 2.12.

daniel_sahal
Esteemed Contributor

@mjar 
Okay, DBR version should not be an issue then.
Could you share a code snippet here?

mjar
New Contributor II

 
Below is the minimal code to reproduce the scenario which used to cause the error.
Keep in mind that this suddenly started to work as expected, while it consistently failed before I posted this topic.
In any case, a few words on what we are doing.
We need a streaming query to be processed by the function provided to foreachBatch, and this function should be configurable (i.e. we need to pass an object with some configuration args to it).

In the example below we simulate this with a higher-order function which takes an instance of SomeConfiguration.

 

from pyspark.sql import SparkSession, DataFrame
from pyspark.sql.functions import col


class SomeConfiguration:
    def __init__(self, name: str):
        self.name = name


def process_batch(config: SomeConfiguration):
    def say_hello_foreach_microbatch(micro_batch_df: DataFrame, micro_batch_id):
        print(f"Hello {config.name}!")
        print(f"The batch {micro_batch_id} has {micro_batch_df.count()} items.")

    return say_hello_foreach_microbatch


def main():
    spark = SparkSession.builder.getOrCreate()

    data_stream = (
        spark.readStream.format("delta")
        .option("readChangeFeed", "true")
        .option("ignoreChanges", "true")
        .table("SOME_DELTA_TABLE")
        .filter(col("status") == "Staged")
        .filter(col("_change_type") == "insert")
    )

    (
        data_stream.writeStream
        .option("checkpointLocation", "SOME_CHECK_POINT_LOCATION")
        .foreachBatch(process_batch(SomeConfiguration("Johnny")))
        .outputMode("append")
        .trigger(availableNow=True)
        .start()
        .awaitTermination()
    )


if __name__ == "__main__":
    main()

 

The above code used to fail on the line that actually references the SomeConfiguration instance, i.e. print(f"Hello {config.name}!") inside the say_hello_foreach_microbatch function.

The same code suddenly started to work fine, despite there being no obvious changes to the cluster and definitely no changes to our code.

I was just curious if anyone knew anything.

In this case it went from bad to better, but I am a bit concerned that a cluster can change behaviour, without our control and without any official release, from good to bad.
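In case it regresses, one defensive variation we are considering (a sketch, not a confirmed fix): close over only plain built-in values (e.g. a str or dict) instead of a custom class instance, so the serialized closure carries no reference to a user-defined class or module. Here I return the message instead of printing purely so it can be smoke-tested locally; foreachBatch itself ignores return values, and _FakeBatch is a hypothetical stand-in for the micro-batch DataFrame:

```python
# Variation of process_batch that captures a plain str rather than a
# SomeConfiguration instance, avoiding any user-module reference in the pickle.
def process_batch(name):
    def say_hello_foreach_microbatch(micro_batch_df, micro_batch_id):
        # In the real job this would print; returning here only for local testing.
        return f"Hello {name}! Batch {micro_batch_id} has {micro_batch_df.count()} items."
    return say_hello_foreach_microbatch


class _FakeBatch:
    """Hypothetical stand-in for a Spark DataFrame, for a local smoke test."""
    def count(self):
        return 3


msg = process_batch("Johnny")(_FakeBatch(), 7)
```

Another avenue (again, speculative on my part) would be making the defining module available to the workers, e.g. via spark.addArtifact on Spark Connect, rather than relying on the closure being pickled by value.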

 
