
set PYTHONPATH when executing workflows

FranPérez
New Contributor III

I set up a workflow using 2 tasks. Just for demo purposes, I'm using an interactive cluster for running the workflow.

            {
                "task_key": "prepare",
                "spark_python_task": {
                    "python_file": "file:/Workspace/Repos/devops/mlhub-mlops-dev/src/src/prepare_train.py",
                    "parameters": [
                        "/dbfs/raw",
                        "/dbfs/train",
                        "/dbfs/train"
                    ]
                },
                "existing_cluster_id": "XXXX-XXXXXX-XXXXXXXXX",
                "timeout_seconds": 0,
                "email_notifications": {}
            }

As stated in the documentation, I set the environment variable on the cluster ... this is an excerpt of the cluster's JSON definition:

  "spark_env_vars": {
    "PYSPARK_PYTHON": "/databricks/python3/bin/python3",
    "PYTHONPATH": "/Workspace/Repos/devops/mlhub-mlops-dev/src"
  }

Then, when I execute the Python task and log the contents of sys.path, I can't find the path configured in the cluster. If I log the contents of os.getenv('PYTHONPATH'), I get nothing. It looks like the environment variables set at cluster level are not being propagated to the Python task.
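For reference, this is roughly the check I'm running at the top of prepare_train.py (a minimal sketch; the exact logging calls are illustrative):

import os
import sys

# Log the interpreter's module search path and the PYTHONPATH variable
# as seen by the spark_python_task at run time.
print("sys.path:", sys.path)
print("PYTHONPATH:", os.getenv("PYTHONPATH"))

Neither output contains /Workspace/Repos/devops/mlhub-mlops-dev/src.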


tomasz
Databricks Employee

What documentation are you following here?

You shouldn't need to specify PYTHONPATH or PYSPARK_PYTHON here, as this section is for Spark-specific environment variables such as "SPARK_WORKER_MEMORY".

FranPérez
New Contributor III

I'm following the standard Python documentation ... Databricks is compatible with Python, AFAIK.

This approach works when using "traditional" jobs, but not when using tasks in workflows

User16764241763
Honored Contributor

Could you please try this instead?

import sys

sys.path.append("/Workspace/Repos/devops/mlhub-mlops-dev/src")

You need to call sys.path.append inside the UDF if the library needs to be available on the workers.

from pyspark.sql.functions import udf

def move_libs_to_executors():
    # Runs on the executors: make the repo's modules importable there too.
    import sys
    sys.path.append("/Workspace/Repos/devops/mlhub-mlops-dev/src")

lib_udf = udf(move_libs_to_executors)

df = spark.range(100)
df.withColumn("lib", lib_udf()).show()

FranPérez
New Contributor III

I'm already using this "fix", but it goes against good development practices because you are hardcoding a file path in your code. That path should be provided via a parameter; this is exactly what environment variables are used for in most solutions, since the path might change at deployment time.

And as I mentioned before, following the Databricks documentation, you should be able to set environment variables using the spark_env_vars section. Is there anything wrong with my initial approach?
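To illustrate what I mean, here is a minimal sketch of the parameterized alternative (the extra parameter and its position are hypothetical, not part of my current task definition):

import sys

if __name__ == "__main__":
    # Append the library path as one more entry in the task's "parameters"
    # list and read it back here, instead of hardcoding it in the source.
    lib_path = sys.argv[-1]
    sys.path.append(lib_path)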

tomasz
Databricks Employee

@Fran Pérez I did a little research on this and found that PYTHONPATH is currently overwritten at cluster startup and there is no way to redefine it at this time. For now we would recommend placing your libraries in one of the already defined PYTHONPATH directories, or just using user libraries for this.

To see the PYTHONPATH that's set by default you can run:

%sh echo $PYTHONPATH

as a separate cell in a notebook that's attached to your cluster.
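If you want the same check from a plain Python task rather than a notebook cell, something like this should show the value the cluster exported (sketch):

import os

# Print the PYTHONPATH that the cluster actually set for this process.
print(os.environ.get("PYTHONPATH", "<not set>"))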

Cintendo
New Contributor III

This won't work for an editable library, since an editable library's path is added via a site-packages .pth file (easy-install.pth) rather than PYTHONPATH.
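To make that concrete: an editable install (pip install -e .) drops a .pth file (e.g. easy-install.pth) into site-packages, which the site module reads at interpreter startup to extend sys.path. A quick way to see where those .pth files live (sketch):

import site

# Directories whose .pth files are processed at interpreter startup.
print(site.getsitepackages())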

jose_gonzalez
Databricks Employee

Hi @Fran Pérez,

Just a friendly follow-up. Did any of the responses help you resolve your question? If so, please mark that answer as best. Otherwise, please let us know if you still need help.
