Get Started Discussions
Start your journey with Databricks by joining discussions on getting started guides, tutorials, and introductory topics. Connect with beginners and experts alike to kickstart your Databricks experience.

sparklyr::spark_read_csv forbidden 403 error

thethirtyfour
New Contributor III

Hi,

I am trying to read a CSV file into a Spark DataFrame using sparklyr::spark_read_csv, and I am receiving a 403 access denied error.

I have stored my AWS credentials as environment variables, and can successfully read the file as an R data frame using arrow::read_csv_arrow. However, spark_read_csv is failing.

I have confirmed that I am connected to Spark, and can read Parquet files stored elsewhere.

Any advice?

Thanks,

my_file <- glue::glue("s3://my-bucket/my-folder/my-file-name.csv")

## This works
mydata <- arrow::read_csv_arrow(
  file = my_file
)
## This doesn't (note: spark_read_csv takes a `path` argument, not `file`,
## and the original call was also missing a comma after `name`)
mydata <- sparklyr::spark_read_csv(
  sc,
  name = "mydata",
  path = my_file
)

# Error message
Error : java.nio.file.AccessDeniedException

Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden; request
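A sketch of one common fix, assuming the same AWS_* environment variables that arrow::read_csv_arrow picks up are set in the R session: Spark's S3A connector reads credentials from its own Hadoop configuration on the driver and executors, not automatically from the R process environment, so a 403 like the one above often means the cluster never saw the keys. Forwarding them via `spark.hadoop.fs.s3a.*` properties before connecting is one way to do that (the bucket path and `master` value here are placeholders):

```r
library(sparklyr)

## Sketch: forward the R session's AWS credentials into Spark's
## Hadoop S3A configuration so executors can authenticate to S3.
conf <- spark_config()
conf$spark.hadoop.fs.s3a.access.key <- Sys.getenv("AWS_ACCESS_KEY_ID")
conf$spark.hadoop.fs.s3a.secret.key <- Sys.getenv("AWS_SECRET_ACCESS_KEY")
## Only needed if you are using temporary (STS) credentials:
conf$spark.hadoop.fs.s3a.session.token <- Sys.getenv("AWS_SESSION_TOKEN")

sc <- spark_connect(master = "local", config = conf)

## Note the s3a:// scheme, which is what the Hadoop S3A connector handles.
mydata <- sparklyr::spark_read_csv(
  sc,
  name = "mydata",
  path = "s3a://my-bucket/my-folder/my-file-name.csv"
)
```

On a Databricks cluster the usual alternative is to attach an instance profile (IAM role) with s3:GetObject and s3:ListBucket permissions on the bucket, in which case no keys need to be set at all; a 403 there would point at the cluster's IAM role rather than the R session.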
