You shouldn't need any extra packages. You can mount an S3 bucket to your Databricks cluster:
https://docs.databricks.com/spark/latest/data-sources/aws/amazon-s3.html#mount-aws-s3
or this
http://www.sparktutorials.net/Reading+and+Writing+S3+Data+with+Apache+Spark...
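For reference, a minimal mount sketch along the lines of that first doc might look like this (the bucket name, mount name, and secret scope/key names below are placeholders, not values from your workspace):

access_key = dbutils.secrets.get(scope="aws", key="access-key")    # placeholder scope/key
secret_key = dbutils.secrets.get(scope="aws", key="secret-key")    # placeholder scope/key
encoded_secret_key = secret_key.replace("/", "%2F")                # URL-encode any slashes in the secret
aws_bucket_name = "my-bucket"                                      # placeholder bucket name
mount_name = "my-mount"                                            # placeholder mount name

dbutils.fs.mount(
    source = "s3a://%s:%s@%s" % (access_key, encoded_secret_key, aws_bucket_name),
    mount_point = "/mnt/%s" % mount_name)

display(dbutils.fs.ls("/mnt/%s" % mount_name))                     # quick check that the mount is readable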
Hi there,
Did you try writing to your mount point location?
# .csv() already sets the CSV data source, so the explicit format() call isn't needed
dfGPS.write.mode("overwrite").option("header", "true").csv("/mnt/<mount-name>")
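If the write succeeds, you can sanity-check it by reading the directory back through the mount (the path below just mirrors the placeholder above):

# Read back the CSV part files to confirm they landed under the mount point
df_check = spark.read.option("header", "true").csv("/mnt/<mount-name>")
display(df_check.limit(5))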
There is a related post here about configuring the appropriate Hadoop properties...
This was recommended on Stack Overflow, though I haven't tested it with ADLS yet.
# Stop Spark from writing _SUCCESS marker files alongside the output
sc._jsc.hadoopConfiguration().set("mapreduce.fileoutputcommitter.marksuccessfuljobs", "false")
Note that this changes the Hadoop configuration for the whole cluster, not just your notebook.
You could also use dbutils.fs.rm step t...
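In case it helps, here is a rough sketch of what that cleanup step could look like, assuming the goal is to delete the commit-marker files Spark leaves next to the CSV output (the path and file-name prefixes are assumptions on my part):

# Hypothetical cleanup of Spark's marker files under the output directory
for f in dbutils.fs.ls("/mnt/<mount-name>"):
    if f.name.startswith("_SUCCESS") or f.name.startswith("_committed") or f.name.startswith("_started"):
        dbutils.fs.rm(f.path)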