Hey @MrDataMan,
I wasn't able to reproduce the exact error you got, but I did hit a similar one while running the example. To solve it, I tweaked the code a little bit:
%sh curl https://resources.lendingclub.com/LoanStats3a.csv.zip --output /dbfs/tmp/LoanStats3a.csv.zip
unzip /dbfs/tmp/LoanStats3a.csv.zip -d /dbfs/tmp/
As you can see, I changed the output location of the curl command and specified the destination directory for the unzip command, so that both point to DBFS (/dbfs/tmp/) instead of the driver's local /tmp/ directory. Files written to the local filesystem aren't visible to Spark, which is why the original example fails.
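If %sh isn't convenient in your environment, the unzip step can also be done from a Python cell with the standard library. This is just a sketch: the helper name and the /dbfs/tmp/ paths are illustrative, and it assumes the /dbfs FUSE mount is available on your cluster.

```python
import zipfile
from pathlib import Path

def extract_zip(zip_path: str, dest_dir: str) -> list:
    """Extract every member of the archive into dest_dir and
    return the paths of the extracted files."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(dest)
        return [str(dest / name) for name in zf.namelist()]

# Illustrative usage on Databricks (paths assume the /dbfs mount):
# extract_zip("/dbfs/tmp/LoanStats3a.csv.zip", "/dbfs/tmp/")
```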
Then we can read it using Spark:
df = (spark.read.format("csv")
      .option("skipRows", 1)
      .option("header", True)
      .load("dbfs:/tmp/LoanStats3a.csv"))
display(df)
Note: Access to DBFS is required for this example.
Thanks,
Gab