"AWS S3 resource has been disabled" error on job, not appearing on notebook

jvk
New Contributor III

I am getting an "INTERNAL_ERROR" on a Databricks job submitted through the API. The error message says:

"Run result unavailable: run failed with error message All access to AWS S3 resource has been disabled"

However, when I open the notebook created by the job and run it on the same cluster, there are no errors and I am able to access the data.

Why would this be the case? Thanks
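For reference, the submission looks roughly like the sketch below (hypothetical workspace URL, token, notebook path, and cluster id, not my actual values):

    import requests

    # Hypothetical values for illustration only.
    HOST = "https://<workspace>.cloud.databricks.com"
    TOKEN = "<personal-access-token>"

    payload = {
        "run_name": "s3-read-run",
        "tasks": [
            {
                "task_key": "main",
                "notebook_task": {"notebook_path": "/Users/me/my_notebook"},
                # The run targets an existing cluster; this is also the cluster
                # I use when re-running the generated notebook interactively.
                "existing_cluster_id": "<cluster-id>",
            }
        ],
    }

    resp = requests.post(
        f"{HOST}/api/2.1/jobs/runs/submit",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json=payload,
    )
    resp.raise_for_status()
    print(resp.json())  # contains the run_id of the submitted run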

2 REPLIES

Kaniz
Community Manager

Hi @jvk, the “INTERNAL_ERROR” you’re encountering in your Databricks job, along with the message “Run result unavailable: run failed with error message All access to AWS S3 resource has been disabled”, indicates an issue with accessing AWS S3 resources. A few things to check:

  1. AWS Credentials Configuration:

    • Ensure that the AWS credentials (access key and secret key) are correctly configured for your Databricks job. Missing or incorrect credentials can lead to the “access disabled” error.
    • Verify that the same credentials are in effect when you run the notebook manually. Different credentials can be set in different contexts, causing discrepancies; one way to confirm which identity each context resolves to is sketched at the end of this reply.

  2. IAM Roles vs. AWS Keys:

    • Check whether S3 access relies on an IAM role (instance profile) or on explicit AWS keys, and make sure the job run and the interactive notebook run use the same mechanism.

  3. Instance Profiles and IAM Role Passthrough:

    • If the cluster uses an instance profile or IAM credential passthrough, confirm that the job run actually inherits it; a job run as a different principal, or with a different cluster configuration, may not.

  4. Native Parquet Reader:

    • Sometimes, issues with the native Parquet reader can cause S3-related errors. You can work around this by disabling the native Parquet reader in your Spark configuration:

      # Disable the native Parquet reader for this session
      spark.conf.set("spark.databricks.io.parquet.nativeReader.enabled", "false")

      This may help resolve compatibility issues with S3 data access.

Please review your AWS setup, credentials, and configurations.
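As a quick way to compare the two contexts, the sketch below (assuming the boto3 library is available on the cluster; the bucket name is a placeholder) prints the AWS identity the current context resolves to and then attempts a lightweight S3 call. Running it both as a job task and interactively in the notebook should show whether the two runs end up with different credentials or roles:

    import boto3
    from botocore.exceptions import ClientError

    # Show which AWS principal this context actually resolves to.
    sts = boto3.client("sts")
    print(sts.get_caller_identity())  # prints Account, Arn, UserId

    # Placeholder bucket name -- replace with the bucket the job reads from.
    bucket = "my-data-bucket"

    s3 = boto3.client("s3")
    try:
        # A lightweight call that fails fast if access to the bucket is disabled or denied.
        s3.head_bucket(Bucket=bucket)
        print(f"Access to s3://{bucket} looks OK from this context")
    except ClientError as e:
        print(f"Access check failed from this context: {e}")

If the job run and the interactive run report different ARNs, the error is almost certainly a credentials/role mismatch rather than anything in the notebook code itself.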

jvk
New Contributor III

Hi @Kaniz, thanks for your help. I will look into these issues.