Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

ModuleNotFoundError when using foreachBatch on runtime 14 with Unity

New Contributor III

Recently we ran into an issue with foreachBatch after upgrading our Azure Databricks cluster to runtime version 14 (Spark 3.5) with Shared access mode and Unity Catalog.
The issue manifested as a ModuleNotFoundError being thrown whenever we call a function from foreachBatch that uses an object which is not declared within the scope of that function but in another module.

SparkConnectGrpcException: (org.apache.spark.api.python.StreamingPythonRunner$StreamingPythonRunnerInitializationException)
[STREAMING_PYTHON_RUNNER_INITIALIZATION_FAILURE] Streaming Runner initialization failed, returned -2.
Cause: Traceback (most recent call last):
  File "/databricks/spark/python/pyspark/", line 193, in _read_with_length
    return self.loads(obj)
  File "/databricks/spark/python/pyspark/", line 571, in loads
    return cloudpickle.loads(obj, encoding=encoding)
ModuleNotFoundError: No module named 'foreach_batch_test'
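For context (this is my reading, not from Databricks): on shared-access-mode clusters the foreachBatch callback is serialized with cloudpickle and sent to a separate streaming Python runner process. Functions and classes defined in the notebook itself are pickled by value, but anything imported from your own module (like the `foreach_batch_test` module in the traceback) is pickled by reference, so unpickling fails if the runner process cannot import that module. A minimal sketch of the by-value round trip that works (names here are illustrative, not the actual thread code):

```python
import cloudpickle

# Illustrative stand-ins, defined locally so cloudpickle serializes them by value.
class SomeConfiguration:
    def __init__(self, name: str):
        self.name = name

def process_batch(config: SomeConfiguration):
    # The inner closure captures `config`; cloudpickle must serialize it too.
    def say_hello(batch_id: int) -> str:
        return f"Hello {config.name}! Processing batch {batch_id}."
    return say_hello

# Round-trip through cloudpickle, roughly as Spark does before invoking the
# function inside the streaming Python runner process.
blob = cloudpickle.dumps(process_batch(SomeConfiguration("Johnny")))
restored = cloudpickle.loads(blob)
print(restored(7))
```

If `SomeConfiguration` instead lived in a separate module that the runner cannot import, the `loads` call is where a ModuleNotFoundError like the one above would surface.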

So, after banging my head against the wall for some time, I finally concluded that this could be a bug in Databricks.
Then, while I was compiling this report, everything started to work again today??
Can anyone provide some details about what happened?
Cheers, thanks


Esteemed Contributor

Which DBR are you using? I mean, exactly.
To use foreachBatch on shared clusters you need at least DBR 14.2.

New Contributor III

Hi @daniel_sahal, thanks for getting back.
We are using 14.3, Spark 3.5.0, Scala 2.12

Esteemed Contributor

Okay, DBR version should not be an issue then.
Could you share a code snippet here?

New Contributor III

Below you can find the minimal code to reproduce the scenario which used to cause the error.
Do remember that this suddenly started to work as expected, while it used to fail prior to me posting this topic.
In any case, a few words on what we are doing.
We need a streaming query to be processed by the function provided to foreachBatch, and this function should be configurable (i.e. we need to pass an object with some configuration args to it).

In the example below we simulate this by using a higher-order function which takes an instance of SomeConfiguration.


from pyspark.sql import SparkSession, DataFrame
from pyspark.sql.functions import col

class SomeConfiguration:
    def __init__(self, name: str): = name

def process_batch(config: SomeConfiguration):
    def say_hello_foreach_microbatch(micro_batch_df: DataFrame, micro_batch_id):
        print(f"Hello {}!")
        print(f"The batch {micro_batch_id} has {micro_batch_df.count()} items.")

    return say_hello_foreach_microbatch

def main():
    spark = SparkSession.builder.getOrCreate()

    data_stream = (
        spark.readStream
        .option("readChangeFeed", "true")
        .option("ignoreChanges", "true")
        .table("source_table")  # placeholder name; the real table reference was lost when posting
        .filter(col("status") == "Staged")
        .filter(col("_change_type") == "insert")
    )

    # NOTE: the checkpoint location below is a reconstructed placeholder;
    # the original option text was lost when the code was posted.
    data_stream.writeStream \
        .option("checkpointLocation", "/tmp/checkpoints/demo") \
        .foreachBatch(process_batch(SomeConfiguration("Johnny"))) \
        .outputMode("append") \
        .trigger(availableNow=True) \
        .start()

if __name__ == '__main__':
    main()

The above code used to fail on the line that actually references the instance of the SomeConfiguration object, i.e. print(f"Hello {}!") inside the say_hello_foreach_microbatch function.

The same code started to work fine all of a sudden, despite the fact that there were no obvious changes to the cluster and definitely no changes to our code.

I was just curious if anyone knew anything.

In this case it went from bad to better, but I am a bit concerned that a cluster can change behaviour from good to bad without our control or any official release.
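Not an official answer, but two mitigations are commonly suggested when a foreachBatch closure depends on your own module: ship the module to the cluster (on Spark Connect sessions, `spark.addArtifact(path, pyfile=True)` is intended for this, though I have not verified it against this exact failure), or tell cloudpickle to serialize the whole module by value so the worker never needs to import it. A sketch of the by-value approach, simulating a helper module like `foreach_batch_test` in-process so the snippet is self-contained:

```python
import sys
import types

import cloudpickle

# Simulate a user module like `foreach_batch_test` without needing a file on disk.
mod = types.ModuleType("foreach_batch_test")
exec("def greet(name):\n    return f'Hello {name}!'", mod.__dict__)
sys.modules["foreach_batch_test"] = mod

# By default cloudpickle pickles mod.greet *by reference* (module name + attribute),
# which breaks on a worker that cannot import `foreach_batch_test`.
# Registering the module switches its contents to by-value serialization.
cloudpickle.register_pickle_by_value(mod)

blob = cloudpickle.dumps(mod.greet)
restored = cloudpickle.loads(blob)
print(restored("Johnny"))
```

`register_pickle_by_value` is available in cloudpickle 2.0+; whether it is appropriate depends on how large your module's dependency graph is, since everything it references gets serialized too.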


New Contributor III

@mjar I have exactly the same issue... found any solution meanwhile?

New Contributor III

Hi, @Nastia unfortunately I don't have any answers yet. 

I do have a channel open with Databricks though, but no news yet.
On the plus side (well, for us), the workflows still work as expected since the magic fix occurred in our environments.
