Hi @Databricks team,
I'm exploring ways to dynamically reuse common user-defined functions (UDFs) across multiple notebooks in a DLT (Delta Live Tables) pipeline. The goal is to avoid duplicating code and to maintain a centralized location for commonly used UDFs that can be imported or referenced dynamically.
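For reference, the shared module we want to centralize looks roughly like this (a simplified sketch on our side; the file name Functions.py matches the imports below, but the function bodies are illustrative placeholders, not our real code):

# Functions.py -- shared helpers we want to reuse across notebooks
import yaml
from pyspark.sql.types import DataType, StringType, StructType

def load_config(path: str) -> dict:
    # Load a YAML pipeline config from a file path
    with open(path) as f:
        return yaml.safe_load(f)

def map_yaml_to_pyspark_type(type_name: str) -> DataType:
    # Map a type name from the YAML config to a PySpark type (placeholder mapping)
    mapping = {"string": StringType()}
    return mapping[type_name]

def get_schema_for_action(action_config: dict) -> StructType:
    # Build the PySpark schema for one action entry in the config
    schema = StructType()
    for name, type_name in action_config["fields"].items():
        schema = schema.add(name, map_yaml_to_pyspark_type(type_name))
    return schema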
We tried using "import sys" with sys.path.append (code shown below), but that approach isn't working either.
Can you please suggest a better way to solve this issue? How can we implement a similar approach, or an effective pattern, to manage and reuse UDFs across notebooks within DLT workflows?
import sys

# sys.path entries must be directories, not .py files, so we append the folder
# that contains Functions.py (the path below is a placeholder):
sys.path.append("/path/to/folder_containing_Functions")

from Functions import get_schema_for_action, map_yaml_to_pyspark_type, load_config
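For context, one alternative we're evaluating is to keep the UDFs in a regular Python module inside the same Databricks repo as the DLT pipeline source, so a plain package import works without touching sys.path. Here's a minimal sketch under our assumptions: the repo layout, the module path shared/functions.py, and the config keys are all hypothetical, and we're relying on Databricks adding the repo root to sys.path for source files in Repos:

# Assumed repo layout (hypothetical):
#   <repo root>/
#     shared/
#       __init__.py
#       functions.py        # the helpers shown above
#     pipelines/
#       bronze_actions.py   # DLT pipeline source file

# pipelines/bronze_actions.py
import dlt

# Plain package import; no sys.path.append needed if the repo root is on sys.path
from shared.functions import get_schema_for_action, load_config

config = load_config("/Workspace/Repos/<user>/<repo>/configs/actions.yaml")  # path assumed

@dlt.table(name="bronze_actions")
def bronze_actions():
    # Build the schema with the shared helper and ingest with Auto Loader
    schema = get_schema_for_action(config["action"])
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .schema(schema)
        .load(config["source_path"])
    )

Would this import-from-repo pattern be the recommended approach for DLT, or is there a better supported mechanism?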