Hi Team,
We are trying to connect to an Amazon S3 bucket from Databricks (running on both AWS and Azure) using IAM access keys set directly in Scala code in a notebook, and we are getting com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden with status code 403. Using the same credentials, we are able to access the bucket from the AWS CLI.
Sample code that we ran in the notebook:
import org.apache.spark.sql.SparkSession
val spark = SparkSession.builder
.appName("S3 Access")
.getOrCreate()
// Set AWS access key and secret key
spark.conf.set("spark.hadoop.fs.s3a.access.key", "***********")
spark.conf.set("spark.hadoop.fs.s3a.secret.key", "**********************")
// Set the S3 endpoint URL if needed
spark.conf.set("spark.hadoop.fs.s3a.endpoint", "s3.us-east-2.amazonaws.com")
// Read a Parquet file from the S3 bucket into a DataFrame
// (s3a:// scheme, matching the fs.s3a.* properties set above)
val df = spark.read.parquet("s3a://<parquet_file_path>")
df.show()
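For completeness, here is a variant we could also try: setting the same properties directly on the SparkContext's Hadoop configuration instead of through spark.conf.set, since runtime spark.conf values do not always propagate to the Hadoop filesystem layer. This is only a sketch with the same masked keys and placeholder path as above, not something we have verified fixes the 403:

```scala
// Set the S3A credentials on the Hadoop configuration directly,
// so the filesystem layer is guaranteed to see them
val hadoopConf = spark.sparkContext.hadoopConfiguration
hadoopConf.set("fs.s3a.access.key", "***********")          // masked, as above
hadoopConf.set("fs.s3a.secret.key", "**********************") // masked, as above
hadoopConf.set("fs.s3a.endpoint", "s3.us-east-2.amazonaws.com")

// Same placeholder path as in the sample above
val df2 = spark.read.parquet("s3a://<parquet_file_path>")
df2.show()
```

If this variant still returns 403 with keys that work from the CLI, that would point away from a configuration-propagation issue and toward the bucket policy or the IAM permissions themselves.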
Thanks,
Obul.