Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Unable to Access S3 from Serverless but Works on Cluster

HarryRichard08
New Contributor II

Hi everyone,

I am trying to access data in S3 using an access key and secret. When I run the code on a Databricks cluster, it works fine. However, when I run the same code on serverless compute, I am unable to access the data.

I have already checked:

  • The credentials (access key and secret) are correct.
  • The Databricks secret scope is set up correctly.

Could someone guide me on what additional configurations or steps I need to follow to make this work from serverless compute?

Any help is greatly appreciated!

Thanks!

3 REPLIES

KaranamS
Contributor III

Hi @HarryRichard08, can you please share the sample code you are using, and the error you get on serverless?

Hi Karanam,

Please find the syntax below:

spark.conf.set("fs.s3a.access.key", "xxxxxxxxxxxxxxxxxxxx")
spark.conf.set("fs.s3a.secret.key", "xxxxxxxxxxxxxxxxxxxxxx")
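A minimal sketch of what this pattern usually looks like when the keys come from a Databricks secret scope instead of being hard-coded. The scope and key names ("my-scope", "aws-access-key", "aws-secret-key") are placeholders, and the dbutils/spark calls are shown as comments because they only exist inside a Databricks runtime:

```python
def s3a_conf(access_key: str, secret_key: str) -> dict:
    """Build the fs.s3a settings to apply with spark.conf.set on a classic cluster."""
    return {
        "fs.s3a.access.key": access_key,
        "fs.s3a.secret.key": secret_key,
    }

# Inside a Databricks notebook on a classic cluster (assumed environment):
# access = dbutils.secrets.get(scope="my-scope", key="aws-access-key")
# secret = dbutils.secrets.get(scope="my-scope", key="aws-secret-key")
# for k, v in s3a_conf(access, secret).items():
#     spark.conf.set(k, v)
```

Note that this spark.conf.set pattern is what works on classic clusters; serverless compute restricts which Spark configurations a notebook can set, which is consistent with the behavior described in the question.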

KaranamS
Contributor III

Hi @HarryRichard08, Databricks recommends using instance profiles (IAM roles) to connect to AWS S3 as they provide a secure and scalable method without embedding credentials in a notebook. Have you tried this approach?

https://docs.databricks.com/aws/en/connect/storage/tutorial-s3-instance-profile
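With an instance profile (or a Unity Catalog external location) granting access, no fs.s3a.* keys need to be set in the notebook; you read directly by path. A minimal sketch, where the bucket and prefix are placeholders and the spark.read call is commented out because it only runs inside Databricks:

```python
def s3_uri(bucket: str, prefix: str) -> str:
    """Build an s3:// URI from a bucket name and an object prefix."""
    return f"s3://{bucket}/{prefix.lstrip('/')}"

# On serverless or classic compute, once IAM-based access is configured:
# df = spark.read.parquet(s3_uri("my-bucket", "raw/events"))
```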
