08-01-2022 12:37 AM
I set up a workflow using 2 tasks. Just for demo purposes, I'm using an interactive cluster for running the workflow.
{
    "task_key": "prepare",
    "spark_python_task": {
        "python_file": "file:/Workspace/Repos/devops/mlhub-mlops-dev/src/src/prepare_train.py",
        "parameters": [
            "/dbfs/raw",
            "/dbfs/train",
            "/dbfs/train"
        ]
    },
    "existing_cluster_id": "XXXX-XXXXXX-XXXXXXXXX",
    "timeout_seconds": 0,
    "email_notifications": {}
}
As stated in the documentation, I set the environment variables on the cluster ... this is an excerpt of the JSON definition of the cluster:
"spark_env_vars": {
    "PYSPARK_PYTHON": "/databricks/python3/bin/python3",
    "PYTHONPATH": "/Workspace/Repos/devops/mlhub-mlops-dev/src"
}
Then, when I execute the Python task and log the contents of sys.path, I can't find the path configured on the cluster. If I log the contents of os.getenv('PYTHONPATH'), I get nothing. It looks like the environment variables set at cluster level are not being propagated to the Python task.
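For reference, this is roughly what my check inside prepare_train.py looks like (simplified here for illustration):
import os
import sys

# Inspect what the task process actually sees
print("sys.path:", sys.path)                   # the repo path from spark_env_vars is not there
print("PYTHONPATH:", os.getenv("PYTHONPATH"))  # prints None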
08-03-2022 12:56 PM
What documentation are you following here?
You shouldn't need to specify PYTHONPATH or PYSPARK_PYTHON here, as this section is for Spark-specific environment variables such as "SPARK_WORKER_MEMORY".
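For example, a typical entry for that section would look something like this (the value is just illustrative):
"spark_env_vars": {
    "SPARK_WORKER_MEMORY": "8g"
}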
08-03-2022 10:39 PM
I'm following the standard Python documentation ... Databricks is compatible with Python, AFAIK.
This approach works when using "traditional" jobs, but not when using tasks in workflows.
08-03-2022 10:48 PM
Could you please try this instead?
import sys
sys.path.append("/Workspace/Repos/devops/mlhub-mlops-dev/src")
You need to do sys.path.append inside the UDF if the library needs to be available on the workers.
from pyspark.sql.functions import udf

def move_libs_to_executors():
    # Runs on the executors: append the repo path so workers can import from it
    import sys
    sys.path.append("/Workspace/Repos/devops/mlhub-mlops-dev/src")

lib_udf = udf(move_libs_to_executors)

# Force the UDF to execute on the workers
df = spark.range(100)
df.withColumn("lib", lib_udf()).show()
08-03-2022 11:25 PM
I'm already using this "fix", but it goes against good development practices because you are hardcoding a file path in your code. That path should be provided via a parameter; this is exactly why most solutions use environment variables for it, since the path might change at deployment time.
And as I mentioned before, following the Databricks documentation, you should be able to set environment variables using the spark_env_vars section. Is there anything wrong with my initial approach?
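What I'd like to end up with is something along these lines (just a sketch, assuming the source root were appended as an extra entry in the task's parameters list instead of being hardcoded):
import sys

# Hypothetical: take the repo source root from the task parameters
# (spark_python_task parameters arrive as command-line arguments)
src_root = sys.argv[-1]
sys.path.append(src_root)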
08-05-2022 09:25 AM
@Fran Pérez I did a little research on this and found that PYTHONPATH is currently overwritten at cluster startup and there is no way to redefine it at this time. For now we would recommend placing your libraries under the already defined PYTHONPATH directories, or simply using user libraries for this.
To see the PYTHONPATH that's set by default you can run:
%sh echo $PYTHONPATH
as a separate cell in a notebook that's attached to your cluster.
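If you prefer to inspect it from Python instead, something like this may also work in a notebook cell (a sketch; depending on how the Python process was launched, the variable may or may not appear in os.environ):
import os

# List the entries of the PYTHONPATH set at cluster startup, if visible here
for entry in (os.environ.get("PYTHONPATH") or "").split(":"):
    if entry:
        print(entry)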
12-25-2022 05:52 PM
This won't work for an editable library, because an editable library appends its path through the site package's easy-install.pth rather than through PYTHONPATH.
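If the goal is to pick up an editable install, one possible sketch is to let the site module process the .pth files instead of a plain sys.path.append (the directory below is a hypothetical placeholder for wherever the easy-install.pth lives):
import site

# Process any .pth files (e.g. easy-install.pth) found in this directory
site.addsitedir("/path/to/dir-containing-pth-files")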
08-30-2022 10:07 AM
Hi @Fran Pérez,
Just a friendly follow-up. Did any of the responses help you resolve your question? If so, please mark it as the best answer. Otherwise, please let us know if you still need help.