`Try(spark.read.format("parquet").load("s3://abcd/abcd/"))` should result in a Failure, but when executed in the notebook it returns a Success, as shown below. Isn't this a bug?

Try[DataFrame] = Success(...)
When running in the spark shell, a Failure is returned instead, as shown below. I wonder why the behavior differs between the two environments.

scala> Try(spark.read.format("parquet").load("/abcd/abcd/"))
res1: scala.util.Try[org.apache.spark.sql.DataFrame] = Failure(org.apache.spark.sql.An...
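One way to probe the difference is to force an action inside the `Try`, so that any path error that would otherwise be raised lazily still surfaces as a `Failure`. A minimal sketch, assuming a local SparkSession is available; the path `/no/such/path` is hypothetical:

```scala
import scala.util.Try
import org.apache.spark.sql.{Row, SparkSession}

object TryLoadSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("try-load-sketch")
      .master("local[*]")
      .getOrCreate()

    // load() alone may or may not touch the path, depending on the environment.
    // Calling an action such as head(1) forces evaluation, so a missing path
    // reliably ends up wrapped as a Failure.
    val eager: Try[Array[Row]] =
      Try(spark.read.format("parquet").load("/no/such/path").head(1))

    println(eager.isFailure)

    spark.stop()
  }
}
```

If the notebook still reports Success even with the action, that would suggest something other than lazy evaluation is involved.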