04-02-2022 04:02 AM
We have a GitHub workflow that builds a Python wheel and uploads it to a shared S3 bucket available to our Databricks workspaces. When I install the wheel on a normal (interactive) compute cluster using the S3 path, it installs correctly and I can use the library. However, when I install it on a job compute cluster, I get the following error:
Run result unavailable: job failed with error message Library installation failed for library due to user error for whl: "s3://shared-python-packages/mywheel-0.0.latest-py3-none-any.whl" . Error messages: java.lang.RuntimeException: ManagedLibraryInstallFailed: java.util.concurrent.ExecutionException: java.nio.file.AccessDeniedException: s3a://shared-python-packages/mywheel-0.0.latest-py3-none-any.whl: getFileStatus on s3a://shared-python-packages/mywheel-0.0.latest-py3-none-any.whl: com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden; request: HEAD https://shared-python-packages.s3-us-west-2.amazonaws.com nanads-0.0.latest-py3-none-any.whl
How do I give the job clusters the correct access?
04-02-2022 05:34 AM
It looks like an authentication problem. Doing a permanent mount could solve the issue:
dbutils.fs.mount("s3a://%s:%s@%s" % (access_key, encoded_secret_key, aws_bucket_name), "/mnt/%s" % mount_name)
More here: https://docs.databricks.com/data/data-sources/aws/...
04-02-2022 08:07 AM
Thanks! This is what I'm already doing. It works fine for normal compute clusters, but it doesn't work for job clusters and gives the error mentioned above.
04-02-2022 08:50 AM
Yes, but did you put "s3://shared-python-packages..." in the File path, rather than "/your_mount/shared-python-packages..."? (And not an s3 path that includes an access token?)
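For readers trying the mount approach: the secret key has to be URL-encoded before it goes into the s3a URL, since AWS secret keys can contain "/". Here is a minimal sketch of building that source URL; the bucket and credential values are placeholders, and the actual `dbutils.fs.mount` call only works on a Databricks cluster, so it is shown commented out.

```python
from urllib.parse import quote

def build_s3_mount_source(access_key: str, secret_key: str, bucket: str) -> str:
    """Build the s3a source URL for dbutils.fs.mount.

    The secret key is URL-encoded because AWS secret keys may contain
    '/', which would otherwise break the URL.
    """
    return "s3a://%s:%s@%s" % (access_key, quote(secret_key, safe=""), bucket)

# On a Databricks cluster (hypothetical key/bucket names):
# dbutils.fs.mount(
#     build_s3_mount_source(access_key, secret_key, "shared-python-packages"),
#     "/mnt/shared-python-packages",
# )
```

Note that embedding keys in the mount URL exposes them; an instance profile (as discussed below in this thread) or a secret scope is generally preferable.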
04-02-2022 09:42 AM
Yeah, it was an authentication issue. It turns out the interactive compute clusters were set up with instance profiles, but the job clusters never were, so when the wheel installation ran on a job it failed.
TL;DR: you need to apply the instance profile to the job cluster as well, so it can access shared resources.
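For anyone hitting the same error: the fix amounts to setting the same instance profile on the job cluster that the interactive clusters use. A rough sketch of the relevant part of a Jobs API job-cluster spec follows; the ARN, node type, and Spark version are placeholders, not values from this thread.

```python
# Sketch of the cluster portion of a Databricks Jobs API payload.
# All concrete values below (ARN, node type, Spark version) are placeholders.
job_cluster_spec = {
    "new_cluster": {
        "spark_version": "10.4.x-scala2.12",
        "node_type_id": "i3.xlarge",
        "num_workers": 2,
        # The same instance profile the interactive clusters use.
        # Without this, installing a library from s3:// fails with
        # AccessDenied / Forbidden, as in the error above.
        "aws_attributes": {
            "instance_profile_arn": "arn:aws:iam::123456789012:instance-profile/shared-packages-read"
        },
    },
    "libraries": [
        {"whl": "s3://shared-python-packages/mywheel-0.0.latest-py3-none-any.whl"}
    ],
}
```

The instance profile must also be registered in the workspace's admin settings before a job cluster can assume it.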
04-05-2022 12:09 AM
@Erik Louie, we are glad the issue is resolved. Could you please mark the best answer, so that the thread can be closed and will be helpful for others to refer to?