Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Using shared python wheels for job compute clusters

Mr__E
Contributor II

We have a GitHub workflow that generates a Python wheel and uploads it to a shared S3 bucket available to our Databricks workspaces. When I install the wheel on a normal compute cluster using the path approach, it installs correctly and I can use the library. However, when I install it on a job compute cluster, I receive the following error:

Run result unavailable: job failed with error message Library installation failed for library due to user error for whl: "s3://shared-python-packages/mywheel-0.0.latest-py3-none-any.whl" . Error messages: java.lang.RuntimeException: ManagedLibraryInstallFailed: java.util.concurrent.ExecutionException: java.nio.file.AccessDeniedException: s3a://shared-python-packages/mywheel-0.0.latest-py3-none-any.whl: getFileStatus on s3a://shared-python-packages/mywheel-0.0.latest-py3-none-any.whl: com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden; request: HEAD https://shared-python-packages.s3-us-west-2.amazonaws.com nanads-0.0.latest-py3-none-any.whl

How do I give the job clusters the correct access?

1 ACCEPTED SOLUTION


Mr__E
Contributor II

Yeah, it was an authentication issue. It turns out the interactive compute clusters were set up with instance profiles, but the job clusters never were, so the wheel installation failed for jobs.

TL;DR: you need to attach an instance profile to the job cluster to grant it access to shared resources.
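For anyone hitting the same wall, a minimal sketch of what that fix looks like in a job's cluster spec (the ARN, runtime version, and node type below are placeholders, not values from this thread):

```python
# Sketch: Jobs API "new_cluster" spec with an instance profile attached.
# Without "instance_profile_arn" under "aws_attributes", the job cluster has no
# IAM role, so the S3 HEAD/GET on the wheel fails with 403 Forbidden.
job_cluster_spec = {
    "new_cluster": {
        "spark_version": "11.3.x-scala2.12",  # placeholder runtime
        "node_type_id": "i3.xlarge",          # placeholder instance type
        "num_workers": 2,
        "aws_attributes": {
            # Same instance profile the interactive clusters already use
            "instance_profile_arn": "arn:aws:iam::123456789012:instance-profile/shared-packages-reader",
        },
    },
    "libraries": [
        {"whl": "s3://shared-python-packages/mywheel-0.0.latest-py3-none-any.whl"}
    ],
}
```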


5 REPLIES

Hubert-Dudek
Esteemed Contributor III

You can mount the S3 bucket as a DBFS folder, then install the library under "Cluster" -> "Libraries" tab -> "Install new" -> "DBFS".
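As a sketch of what that install looks like once the bucket is mounted (the cluster ID and mount path are placeholders), the Libraries API payload would point at the DBFS path rather than s3://:

```python
# Sketch: payload shape for POST /api/2.0/libraries/install, pointing at the
# wheel through the DBFS mount instead of the raw s3:// path.
install_payload = {
    "cluster_id": "0123-456789-abcde000",  # placeholder cluster ID
    "libraries": [
        {"whl": "dbfs:/mnt/shared-python-packages/mywheel-0.0.latest-py3-none-any.whl"}
    ],
}
```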


Mr__E
Contributor II

Thanks! This is what I'm already doing. It works fine for normal compute clusters, but it doesn't work for job clusters and gives the error mentioned above.

Hubert-Dudek
Esteemed Contributor III

Just to confirm: in the file path you put "s3://shared-python-packages..." and not "/your_mount/shared-python-packages.."? (and not an s3 path that includes an access token)

It looks like an authentication problem. Doing a permanent mount could solve the issue.

dbutils.fs.mount("s3a://%s:%s@%s" % (access_key, encoded_secret_key, aws_bucket_name), "/mnt/%s" % mount_name)

More here: https://docs.databricks.com/data/data-sources/aws/...
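One gotcha with the mount snippet above: the secret key has to be URL-encoded before it is embedded in the s3a URI, since AWS secret keys can contain "/". A minimal sketch of that encoding step (the keys below are made up):

```python
import urllib.parse

access_key = "AKIAEXAMPLEKEY"   # placeholder access key
secret_key = "abc/def+ghi"      # placeholder; a raw "/" would break the URI
encoded_secret_key = urllib.parse.quote(secret_key, safe="")

aws_bucket_name = "shared-python-packages"
mount_name = "shared-python-packages"

# dbutils is only available inside a Databricks notebook:
# dbutils.fs.mount("s3a://%s:%s@%s" % (access_key, encoded_secret_key, aws_bucket_name),
#                  "/mnt/%s" % mount_name)
```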


Prabakar
Esteemed Contributor III

@Erik Louie, we are glad that the issue is resolved. Could you please mark the best answer so that the thread can be closed and will be helpful for others to refer to.
