I have connected my S3 bucket to Databricks using the following command:
import urllib.parse
ACCESS_KEY = "Test"
SECRET_KEY = "Test"
ENCODED_SECRET_KEY = urllib.parse.quote(SECRET_KEY, "")
AWS_BUCKET_NAME = "Test"
MOUNT_NAME = "S3_Connection_details"
dbutils.fs.mount("s3n://%s:%s@%s" % (ACCESS_KEY, ENCODED_SECRET_KEY, AWS_BUCKET_NAME), "/mnt/%s" % MOUNT_NAME)
Now when I run the command below, I get the list of CSV files present in the bucket.
display(dbutils.fs.ls("/mnt/S3_Connection_details"))
If there are 10 files, I want to create 10 different tables in PostgreSQL after reading the CSV files. I don't need any transformation. Is this feasible?
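To make the idea concrete, this is the rough shape of what I am hoping is possible. The JDBC URL, database name, and credentials below are placeholders, not my real connection details, and I am not certain the write call is correct:

# Placeholder connection details -- not my real host, database, or credentials.
jdbc_url = "jdbc:postgresql://my-host:5432/my_database"
connection_properties = {
    "user": "my_user",
    "password": "my_password",
    "driver": "org.postgresql.Driver",
}

# List every CSV in the mounted bucket and write each one
# to PostgreSQL as a table named after the file.
for file_info in dbutils.fs.ls("/mnt/S3_Connection_details"):
    if file_info.name.endswith(".csv"):
        table_name = file_info.name.replace(".csv", "")
        df = spark.read.csv(file_info.path, header=True, inferSchema=True)
        df.write.jdbc(url=jdbc_url, table=table_name, mode="overwrite", properties=connection_properties)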
First of all, how do I create a DataFrame from one of the CSV files? If anyone can help me with the syntax, that would be great.
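My guess for that first step is something like the following, where sample.csv is just an example file name, but I am not sure about the options:

# Read a single CSV from the mount into a DataFrame (sample.csv is a made-up name).
df = spark.read.csv("/mnt/S3_Connection_details/sample.csv", header=True, inferSchema=True)
display(df)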
Regards,
Akash