After Spark finishes writing a DataFrame to S3, it appears to verify the files it just wrote by calling `getFileStatus`, which translates to a `HeadObject` request behind the scenes.
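
For context, the write itself is nothing special. A minimal sketch of what I'm running (bucket and path are placeholders, and I'm writing Parquet, though the format shouldn't matter):

```python
from pyspark.sql import SparkSession

# On Databricks the session already exists; shown here for completeness
spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])

# The write succeeds, but Spark then issues HeadObject calls
# against the written objects, which fail without GetObject permission
df.write.mode("overwrite").parquet("s3://my-bucket/output/")
```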
What if I'm only granted write and list-objects permissions, but not `GetObject`? Is there any way to instruct PySpark on Databricks to skip this validity check after a successful write?