When running this in the Spark shell, a Failure is returned, as shown below. I wonder why it behaves differently.
scala> Try(spark.read.format("parquet").load("/abcd/abcd/"))
res1: scala.util.Try[org.apache.spark.sql.DataFrame] = Failure(org.apache.spark.sql.AnalysisException: [PATH_NOT_FOUND] Path does not exist: file:/abcd/abcd.)
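As far as I understand, `scala.util.Try` evaluates its by-name argument and captures any non-fatal exception as a `Failure` rather than letting it propagate, and `load` appears to resolve the path eagerly (during analysis), so the `AnalysisException` is thrown inside the `Try` and captured. A minimal sketch of that mechanism, using a hypothetical `loadPath` helper in place of Spark's reader (the names and behavior are assumptions for illustration, not Spark's API):

```scala
import scala.util.{Try, Success, Failure}

// Hypothetical stand-in for spark.read.format("parquet").load(path):
// throws when the path does not "exist", mimicking PATH_NOT_FOUND.
def loadPath(path: String): String =
  if (path.startsWith("/data")) s"DataFrame($path)"
  else throw new IllegalArgumentException(s"Path does not exist: $path")

// Try evaluates its by-name argument and captures any non-fatal
// exception as a Failure instead of letting it propagate.
val ok  = Try(loadPath("/data/events"))
val bad = Try(loadPath("/abcd/abcd/"))

println(ok)
println(bad)

// Pattern matching lets the caller handle both outcomes explicitly.
bad match {
  case Success(df) => println(s"loaded: $df")
  case Failure(e)  => println(s"failed: ${e.getMessage}")
}
```

If `load` were fully lazy, the exception would only surface later (e.g. on an action), and the `Try` around `load` would come back as a `Success`.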