Hi Isi,
Thank you for your response. I really appreciate it :)
Apologies, I didn't explain my concern clearly.
What I'm trying to confirm is whether the instance profile overrides the spark.conf settings defined in a notebook.
For example, I want to read a CSV file from S3 using the following code:
```python
# Global-level S3A settings
spark.conf.set("spark.hadoop.fs.s3a.endpoint", "s3.amazonaws.com")
spark.conf.set("spark.hadoop.fs.s3a.aws.credentials.provider", "org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider")
spark.conf.set("spark.hadoop.fs.s3a.server-side-encryption-algorithm", "SSE-KMS")

# Per-bucket overrides; source_bucket, source_access_key, source_secret_key,
# source_region, and source_path are set earlier from Databricks secrets
# (after the SparkSession is created)
spark.conf.set(f"spark.hadoop.fs.s3a.bucket.{source_bucket}.aws.credentials.provider", "org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider")
spark.conf.set(f"spark.hadoop.fs.s3a.bucket.{source_bucket}.endpoint", "s3.ap-northeast-1.amazonaws.com")
spark.conf.set(f"spark.hadoop.fs.s3a.bucket.{source_bucket}.access.key", source_access_key)
spark.conf.set(f"spark.hadoop.fs.s3a.bucket.{source_bucket}.secret.key", source_secret_key)
spark.conf.set(f"spark.hadoop.fs.s3a.bucket.{source_bucket}.region", source_region)
df = spark.read.option("header", True).csv(source_path)
```
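As a sanity check, reading the keys back confirms they are stored at the session level. A minimal sketch (this only echoes the session conf; whether the S3A connector actually honors these values at read time is the open question):

```python
# Sanity check: read back the session-level settings that were just set.
# Note: this only confirms the Spark session conf, not what the S3A
# connector ultimately uses.
print(spark.conf.get("spark.hadoop.fs.s3a.aws.credentials.provider"))
print(spark.conf.get(f"spark.hadoop.fs.s3a.bucket.{source_bucket}.endpoint"))
```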
I can access the same object on S3 via boto3 with these credentials, but the Spark read above fails.
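For reference, the boto3 access that succeeds looks roughly like this (a sketch using the same variables as above; `foo.csv` is the object from the error below):

```python
import boto3

# The same credentials work through boto3: a HEAD on the object
# (the same operation that fails through S3A below) returns metadata.
s3 = boto3.client(
    "s3",
    aws_access_key_id=source_access_key,
    aws_secret_access_key=source_secret_key,
    region_name=source_region,
)
s3.head_object(Bucket=source_bucket, Key="foo.csv")
```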
The Spark error message is as follows:
`: java.nio.file.AccessDeniedException: s3a://<source_bucket>/foo.csv: getFileStatus on s3a://<source_bucket>/foo.csv: com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden; request: HEAD`
I suspect that the issue is caused by the instance profile overriding the credentials set in the notebook. I apologize if my hypothesis caused any confusion about the current status.
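If it helps, this is the kind of check I have in mind to test that hypothesis; a sketch that inspects the JVM-side Hadoop configuration the S3A connector reads (`_jsc` is an internal PySpark handle, so this assumes the notebook has direct SparkContext access):

```python
# Inspect the effective Hadoop configuration on the JVM side.
# If an instance-profile provider shows up here instead of the
# SimpleAWSCredentialsProvider set above, that would support the
# override hypothesis.
hadoop_conf = spark.sparkContext._jsc.hadoopConfiguration()
print(hadoop_conf.get("fs.s3a.aws.credentials.provider"))
print(hadoop_conf.get(f"fs.s3a.bucket.{source_bucket}.aws.credentials.provider"))
```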
Finally, my cluster is now running in Dedicated access mode; thanks again for your advice on that.
Thank you.