Environment details:
Databricks on Azure, DBR 13.3 LTS, Unity Catalog, Shared access mode cluster.
In my current environment, we run imports from S3 with code like:
spark.read.option('inferSchema', 'true').json(s3_path)
When I run this on a Shared access mode cluster with Unity Catalog enabled, I get this error:
"Import for <table> failed with error: An error occurred while calling o453.json. : org.apache.spark.SparkSecurityException: [INSUFFICIENT_PERMISSIONS] Insufficient privileges: User does not have permission SELECT on any file."
There's a proposed workaround, but it isn't an option for me: I don't have admin access, and the admins don't want to bypass the security controls that Unity Catalog provides. The same code runs with no issues on a Single User mode cluster, but standing up a bunch of Single User clusters to support my team isn't a feasible solution.
The basic question is: what mechanisms, if any, can be used to import S3 data in a Unity Catalog-enabled Shared access mode environment without resorting to cluster admin privileges?