Hi Team,
I am currently working on a project to read CSV files from an AWS S3 bucket in an Azure Databricks notebook. My ultimate goal is to set up Auto Loader in Azure Databricks so that it picks up new files from S3 and loads the data incrementally. However, I am having trouble accessing the S3 bucket from the notebook. Even though I created a new IAM user in AWS and granted it full permissions on the S3 bucket, I still get the following message:

Here is the code I have built so far for the Auto Loader setup:
from pyspark.sql import SparkSession

# Initialize the Spark session (on Databricks, `spark` already exists, so this simply returns it)
spark = SparkSession.builder.appName("S3Access").getOrCreate()

# AWS credentials (assumed to be defined earlier, e.g. loaded from a Databricks secret scope)
access_key = AWS_ACCESS_KEY_ID
secret_key = AWS_SECRET_ACCESS_KEY

# Configure the Hadoop S3A connector to use the key pair
hadoop_conf = spark._jsc.hadoopConfiguration()
hadoop_conf.set("fs.s3a.access.key", access_key)
hadoop_conf.set("fs.s3a.secret.key", secret_key)
# hadoop_conf.set("fs.s3a.session.token", aws_session_token)  # only needed for temporary credentials
hadoop_conf.set("fs.s3a.endpoint", "s3.amazonaws.com")

# S3 path to the CSV file
s3_path = "s3a://taxcom-autoloader/files/file1.csv"

# Read the CSV file from S3
df = spark.read.csv(s3_path)

# Show the data
df.show()
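
To rule out a problem with the keys themselves, I was also thinking of a quick sanity check with boto3 outside Spark. This is just a minimal sketch; the bucket name and prefix are taken from the s3a path above, and it reuses the same access_key / secret_key variables:

import boto3

# Sanity check outside Spark: list a few objects with the same key pair.
s3 = boto3.client(
    "s3",
    aws_access_key_id=access_key,
    aws_secret_access_key=secret_key,
)
resp = s3.list_objects_v2(Bucket="taxcom-autoloader", Prefix="files/", MaxKeys=5)
for obj in resp.get("Contents", []):
    print(obj["Key"])

If this listing works but the Spark read still fails, the issue is likely in the S3A configuration rather than the IAM permissions.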
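
For context, here is a minimal sketch of the Auto Loader stream I am aiming for once the access issue is resolved. It assumes the files land under s3a://taxcom-autoloader/files/, and the DBFS locations and output path are just placeholder names:

# Placeholder locations -- adjust to your workspace
source_path = "s3a://taxcom-autoloader/files/"
schema_path = "dbfs:/tmp/taxcom_autoloader/_schema"          # assumed path
checkpoint_path = "dbfs:/tmp/taxcom_autoloader/_checkpoint"  # assumed path
output_path = "dbfs:/tmp/taxcom_autoloader/bronze"           # assumed path

# Incrementally discover and read new CSV files with Auto Loader (cloudFiles)
df = (spark.readStream
      .format("cloudFiles")
      .option("cloudFiles.format", "csv")
      .option("header", "true")
      .option("cloudFiles.schemaLocation", schema_path)
      .load(source_path))

# Write the new records to a Delta location; availableNow processes all pending files and stops
# (needs a recent Databricks Runtime -- drop the trigger for a continuously running stream)
query = (df.writeStream
         .format("delta")
         .option("checkpointLocation", checkpoint_path)
         .trigger(availableNow=True)
         .start(output_path))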
If anyone could provide insights on this process, it would be greatly appreciated. Thank you for your help!
Thanks,
Dinesh Kumar