I am pulling data from an S3 bucket using spark.read.json like this:
s3_uri = "s3://snowflake-genesys/v2.outbound.campaigns.{id}/2025-01-22/00/"
df = spark.read.json(s3_uri)
My S3 URI contains a literal {id} segment in the file path. I have tried both a raw string, r"s3://snowflake-genesys/v2.outbound.campaigns.{id}/2025-01-22/00/", and an f-string with escaped braces, f"s3://snowflake-genesys/v2.outbound.campaigns.{{id}}/2025-01-22/00/".
I can get dbutils.fs.ls(f"s3://snowflake-genesys/v2.outbound.campaigns.{{id}}/2025-01-22/00/") to return the files for me, so the path itself is valid, but spark.read.json on the same string does not work.
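To clarify what each of the string variants above actually produces (using a hypothetical campaign id, since the real ids aren't shown here):

```python
# Hypothetical campaign id, only to illustrate what each literal evaluates to.
campaign_id = "1234"

# Raw string: the r"" prefix only affects backslash escapes, so the braces stay literal.
raw = r"s3://snowflake-genesys/v2.outbound.campaigns.{id}/2025-01-22/00/"

# f-string with doubled braces: {{id}} is an escape that also yields a literal {id}.
escaped = f"s3://snowflake-genesys/v2.outbound.campaigns.{{id}}/2025-01-22/00/"

# f-string with single braces: the only form that substitutes the variable.
interpolated = f"s3://snowflake-genesys/v2.outbound.campaigns.{campaign_id}/2025-01-22/00/"

print(raw)           # ...campaigns.{id}/...
print(escaped)       # ...campaigns.{id}/...  (identical to raw)
print(interpolated)  # ...campaigns.1234/...
```

So the raw-string and double-brace f-string attempts are the exact same path with a literal {id} in it.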
I can get it to work with a wildcard, but that's not optimal: there are other large folders alongside the campaign folders, such as v2.outbound.campaigns.{id}.progress/2025-01-22/, so "s3://snowflake-genesys/v2.outbound.campaigns.*/2025-01-22/00/" returns way too much data.
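To show why the wildcard over-matches, here is a quick sketch with Python's fnmatch (illustrative only; Hadoop's path globber behaves similarly for a single path segment) against hypothetical folder names:

```python
import fnmatch

# Hypothetical sibling folders at the same level in the bucket.
folders = [
    "v2.outbound.campaigns.1234",
    "v2.outbound.campaigns.1234.progress",
]

# The * wildcard matches any characters within the segment, so it
# picks up the .progress folders as well as the campaign folders.
matches = [f for f in folders if fnmatch.fnmatch(f, "v2.outbound.campaigns.*")]
print(matches)  # both folders match, including the .progress one
```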
How can I get Spark to read the files from the exact path? What am I doing wrong?