Get Started Discussions
Start your journey with Databricks by joining discussions on getting started guides, tutorials, and introductory topics. Connect with beginners and experts alike to kickstart your Databricks experience.

"AWS S3 resource has been disabled" error on job, not appearing on notebook

jvk
New Contributor III

I am getting an "INTERNAL_ERROR" on a Databricks job submitted through the API. It says:

"Run result unavailable: run failed with error message All access to AWS S3 resource has been disabled"

However, when I click on the notebook created by the job and run this through the same cluster there are no errors and I am able to access the data.

Why would this be the case? Thanks
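For context, a run like the one described can be kicked off with a single call to the Jobs API `runs/submit` endpoint. Below is a minimal sketch of such a payload; the cluster ID, notebook path, and run name are placeholders, not values from this thread:

```python
import json

# Hypothetical Jobs API 2.1 runs/submit payload for a one-off notebook run
# on an existing cluster. All identifiers below are placeholders.
payload = {
    "run_name": "s3-access-check",
    "tasks": [
        {
            "task_key": "notebook_task",
            "existing_cluster_id": "1234-567890-abcde123",
            "notebook_task": {
                "notebook_path": "/Users/someone@example.com/my_notebook"
            },
        }
    ],
}

body = json.dumps(payload)
# The actual POST to <workspace-url>/api/2.1/jobs/runs/submit (with a
# bearer token in the Authorization header) is omitted here.
print(body)
```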

4 REPLIES

Kaniz_Fatma
Community Manager

Hi @jvk, the "INTERNAL_ERROR" you're encountering in your Databricks job, along with the message "Run result unavailable: run failed with error message All access to AWS S3 resource has been disabled", indicates an issue with accessing AWS S3 resources. Here are a few things to check:

  1. AWS Credentials Configuration:

    • Ensure that the AWS credentials (access key and secret key) are correctly configured for your Databricks job. Missing or incorrect credentials can lead to the "access disabled" error.
    • Verify that the same credentials are used when you run the notebook manually. Different credentials can be set in different contexts, causing discrepancies.
  2. IAM Roles vs. AWS Keys:

    • Prefer IAM roles (attached via instance profiles) over embedding AWS access keys in code or Spark configuration. If the job run and the interactive run pick up different credential sources, their S3 permissions can differ.
  3. Instance Profiles and IAM Role Passthrough:

    • Confirm that the cluster the job runs on has the same instance profile attached as the cluster you use interactively, and note that per-user credential passthrough may not apply to automated job runs in the same way it does to interactive sessions.
  4. Native Parquet Reader:

    Sometimes issues with the native Parquet reader can cause S3-related errors. You can work around this by disabling it in your Spark configuration:

    spark.conf.set("spark.databricks.io.parquet.nativeReader.enabled", False)

    This might help resolve compatibility issues with S3 data access.

Please review your AWS setup, credentials, and configurations.
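Points 3 and 4 above can be sketched together as the relevant pieces of a job cluster spec. This is a hedged example: the instance profile ARN, runtime version, and node type are placeholders, and you would substitute the role your working interactive cluster actually uses.

```python
# Hypothetical fragment of a job cluster spec addressing points 3 and 4.
# All concrete values are placeholders, not taken from this thread.
new_cluster = {
    "spark_version": "14.3.x-scala2.12",
    "node_type_id": "i3.xlarge",
    "num_workers": 1,
    # Point 3: attach the same instance profile the working interactive
    # cluster has, so the job cluster gets identical IAM permissions on S3.
    "aws_attributes": {
        "instance_profile_arn": (
            "arn:aws:iam::123456789012:instance-profile/my-profile"
        )
    },
    # Point 4: disable the native Parquet reader, per the workaround above.
    "spark_conf": {
        "spark.databricks.io.parquet.nativeReader.enabled": "false"
    },
}

print(new_cluster["aws_attributes"]["instance_profile_arn"])
```

Setting the option in `spark_conf` at cluster creation has the same effect as calling `spark.conf.set(...)` inside the notebook, but it also applies to any job task that runs before your notebook code does.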

jvk
New Contributor III

Hi @Kaniz_Fatma , thanks for your help. I will look into these issues. 

jvk
New Contributor III

Hi @Kaniz_Fatma , I have tried your suggestions but no luck.

It works fine when I run it via the notebook, but when I put that same notebook into a job run, I get the error:

"Run result unavailable: run failed with error message Cannot access AWS bucket"

jvk
New Contributor III

@Kaniz_Fatma 

In the S3 logs of the run, I am seeing this:

24/05/29 06:39:30 WARN FileSystem: Failed to initialize fileystem dbfs:///: java.io.FileNotFoundException: Bucket user-workspace-s3-bucket does not exist
24/05/29 06:39:30 ERROR DbfsHadoop3: Failed to close FileSystem
java.lang.NullPointerException
