<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: UDFs with modular code - INVALID_ARGUMENT in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/udfs-with-modular-code-invalid-argument/m-p/119258#M45823</link>
    <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/115254"&gt;@Zeruno&lt;/a&gt;. What you can do is package up your code and pip install it in your pipeline. I had the same situation: I developed some code that ran fine in a notebook, but when it was used in a DLT pipeline, the dependencies were not found. Packaging them up and then installing them via pip at the beginning of my notebook allowed me to pull in my code and custom UDFs.&lt;/P&gt;</description>
    <pubDate>Wed, 14 May 2025 23:12:04 GMT</pubDate>
    <dc:creator>briceg</dc:creator>
    <dc:date>2025-05-14T23:12:04Z</dc:date>
    <item>
      <title>UDFs with modular code - INVALID_ARGUMENT</title>
      <link>https://community.databricks.com/t5/data-engineering/udfs-with-modular-code-invalid-argument/m-p/82578#M36692</link>
      <description>&lt;DIV class=""&gt;&lt;P&gt;I am migrating a massive codebase to Pyspark on Azure Databricks,using DLT Pipelines. It is very important that code will be modular, that is I am looking to make use of UDFs for the timebeing that use modules and classes.&lt;/P&gt;&lt;P&gt;I am receiving the following error:&lt;/P&gt;&lt;PRE&gt;org.apache.spark.SparkRuntimeException: [UDF_ERROR.PAYLOAD] Execution of function &amp;lt;&lt;SPAN class=""&gt;lambda&lt;/SPAN&gt;&amp;gt;(MYCOLUMN_NAME1531) 
&lt;SPAN class=""&gt;2&lt;/SPAN&gt;) failed - failed to &lt;SPAN class=""&gt;set&lt;/SPAN&gt; payload
== Error ==
INVALID_ARGUMENT: No module named &lt;SPAN class=""&gt;'mymodule'&lt;/SPAN&gt;
== Stacktrace ==&lt;/PRE&gt;&lt;P&gt;With the following code (anonymized to create a minimum working example):&lt;/P&gt;&lt;PRE&gt;&lt;SPAN class=""&gt;# demo.py&lt;/SPAN&gt;
&lt;SPAN class=""&gt;from&lt;/SPAN&gt; pyspark.sql.functions &lt;SPAN class=""&gt;import&lt;/SPAN&gt; col
&lt;SPAN class=""&gt;import&lt;/SPAN&gt; dlt 
&lt;SPAN class=""&gt;import&lt;/SPAN&gt; mymodule

demodata = mymodule.DemoData(&lt;SPAN class=""&gt;"EX"&lt;/SPAN&gt;)
helper = mymodule.Helper(demodata)

&lt;SPAN class=""&gt;@dlt.table(&lt;SPAN class=""&gt;name=&lt;SPAN class=""&gt;"DEMO"&lt;/SPAN&gt;&lt;/SPAN&gt;)&lt;/SPAN&gt;
&lt;SPAN class=""&gt;def&lt;/SPAN&gt; &lt;SPAN class=""&gt;table&lt;/SPAN&gt;():
    &lt;SPAN class=""&gt;return&lt;/SPAN&gt; (
spark.readStream.&lt;SPAN class=""&gt;format&lt;/SPAN&gt;(&lt;SPAN class=""&gt;"cloudFiles"&lt;/SPAN&gt;)
.option(&lt;SPAN class=""&gt;"cloudFiles.Format"&lt;/SPAN&gt;, &lt;SPAN class=""&gt;"PARQUET"&lt;/SPAN&gt;)
.load(&lt;SPAN class=""&gt;"abfss://..."&lt;/SPAN&gt;)
.withColumn(&lt;SPAN class=""&gt;"DEMO"&lt;/SPAN&gt;, helper.transform(col(&lt;SPAN class=""&gt;"MYCOLUMN_NAME"&lt;/SPAN&gt;)))
)


&lt;SPAN class=""&gt;# mymodule.py&lt;/SPAN&gt;
&lt;SPAN class=""&gt;from&lt;/SPAN&gt; pyspark.sql.typos &lt;SPAN class=""&gt;import&lt;/SPAN&gt; StringType
&lt;SPAN class=""&gt;from&lt;/SPAN&gt; pyspark.sql.functions &lt;SPAN class=""&gt;import&lt;/SPAN&gt; udf

&lt;SPAN class=""&gt;class&lt;/SPAN&gt; &lt;SPAN class=""&gt;DemoData&lt;/SPAN&gt;:
    &lt;SPAN class=""&gt;def&lt;/SPAN&gt; &lt;SPAN class=""&gt;__init__&lt;/SPAN&gt;(&lt;SPAN class=""&gt;self, suffix&lt;/SPAN&gt;)
        self.suffix = suffix

&lt;SPAN class=""&gt;class&lt;/SPAN&gt; &lt;SPAN class=""&gt;Helper&lt;/SPAN&gt;:
    &lt;SPAN class=""&gt;def&lt;/SPAN&gt; &lt;SPAN class=""&gt;__init__&lt;/SPAN&gt;(&lt;SPAN class=""&gt;self, demoData&lt;/SPAN&gt;):
        _suffix = demoData.suffix
        self.transform = udf(&lt;SPAN class=""&gt;lambda&lt;/SPAN&gt; _string: self.helper(_string, _suffix), StringType())

&lt;SPAN class=""&gt;    @staticmethod&lt;/SPAN&gt;
    &lt;SPAN class=""&gt;def&lt;/SPAN&gt; &lt;SPAN class=""&gt;helper&lt;/SPAN&gt;(&lt;SPAN class=""&gt;string, suffix&lt;/SPAN&gt;):
        &lt;SPAN class=""&gt;return&lt;/SPAN&gt; string + suffix&lt;BR /&gt;&lt;BR /&gt;#dlt&lt;/PRE&gt;&lt;P&gt;Can someone help me understand what is happening? I am thinking that the Spark Worker cannot see my module. Is this correct? How would I use UDFs with modular code? I understand that this might not be ideal, but I want to understand this technicality.&lt;/P&gt;&lt;/DIV&gt;</description>
      <pubDate>Fri, 09 Aug 2024 15:42:27 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/udfs-with-modular-code-invalid-argument/m-p/82578#M36692</guid>
      <dc:creator>Zeruno</dc:creator>
      <dc:date>2024-08-09T15:42:27Z</dc:date>
    </item>
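    <!-- Editor's note: the questioner's diagnosis is likely right. The UDF lambda is bound to a Helper instance, so Spark's cloudpickle serializes it by reference to 'mymodule', which the worker processes cannot import. One workaround (a sketch only; make_transform is a hypothetical name, not from the thread) is to define the UDF factory in the driver notebook itself and capture only plain data in a closure, so the function is serialized by value with no module reference:

    ```python
    # Defined in the pipeline notebook, not in mymodule: a closure that
    # captures only the plain string 'suffix', reproducing the logic of
    # Helper.helper from the thread's example.
    def make_transform(suffix):
        def transform(string):
            return string + suffix
        return transform

    # In the notebook one would then wrap it, e.g.:
    #   from pyspark.sql.functions import udf
    #   from pyspark.sql.types import StringType
    #   transform_udf = udf(make_transform("EX"), StringType())
    ```

    Because the closure lives in the notebook's __main__ scope and captures no class instances, it can be pickled and shipped to workers without 'mymodule' being installed there. -->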
    <item>
      <title>Re: UDFs with modular code - INVALID_ARGUMENT</title>
      <link>https://community.databricks.com/t5/data-engineering/udfs-with-modular-code-invalid-argument/m-p/119258#M45823</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/115254"&gt;@Zeruno&lt;/a&gt;. What you can do is package up your code and pip install it in your pipeline. I had the same situation: I developed some code that ran fine in a notebook, but when it was used in a DLT pipeline, the dependencies were not found. Packaging them up and then installing them via pip at the beginning of my notebook allowed me to pull in my code and custom UDFs.&lt;/P&gt;</description>
      <pubDate>Wed, 14 May 2025 23:12:04 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/udfs-with-modular-code-invalid-argument/m-p/119258#M45823</guid>
      <dc:creator>briceg</dc:creator>
      <dc:date>2025-05-14T23:12:04Z</dc:date>
    </item>
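    <!-- Editor's note: the packaging workflow briceg describes can be sketched as follows. The project layout, version number, and install path are assumptions for illustration, not details from the thread.

    ```shell
    # Hypothetical project layout:
    #   mymodule_pkg/
    #     pyproject.toml        (declares the 'mymodule' package)
    #     mymodule/__init__.py
    # Build a wheel locally (requires the 'build' package):
    python -m build mymodule_pkg
    # Upload the wheel to a workspace or volume path, then at the top of
    # the pipeline notebook install it so every worker can import it:
    #   %pip install /Workspace/path/to/mymodule-0.1.0-py3-none-any.whl
    ```

    Installing the wheel via %pip at the start of the pipeline notebook makes 'mymodule' importable on the workers, which resolves the INVALID_ARGUMENT / No module named error above. -->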
  </channel>
</rss>

