Dear all,
A few questions, please -
1. Has anyone successfully used the approach below for error handling in PySpark notebooks (e.g. ones that work with DataFrames) as well as SQL-based notebooks?
from pyspark.errors import PySparkException

try:
    spark.sql("SELECT * FROM does_not_exist").show()
except PySparkException as ex:
    print("Error Class        : " + ex.getErrorClass())
    print("Message parameters : " + str(ex.getMessageParameters()))
    print("SQLSTATE           : " + ex.getSqlState())
    print(ex)
2. With this approach, is it advisable to log errors into tables? I'm thinking of an errors table with four columns capturing the date, error class, message parameters, and SQLSTATE - roughly along the lines of the sketch below.
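A minimal sketch of what I have in mind (the ops.error_log table name and the column names are just placeholders, and spark is the notebook's built-in session):

from datetime import datetime, timezone
from pyspark.errors import PySparkException

def log_error(ex, table_name="ops.error_log"):  # hypothetical table name
    # One row matching the four proposed columns.
    row = [(
        datetime.now(timezone.utc),        # when the error happened
        ex.getErrorClass(),                # e.g. TABLE_OR_VIEW_NOT_FOUND
        str(ex.getMessageParameters()),    # parameter dict serialised as text
        ex.getSqlState(),                  # e.g. 42P01
    )]
    schema = "event_ts timestamp, error_class string, message_parameters string, sqlstate string"
    spark.createDataFrame(row, schema).write.mode("append").saveAsTable(table_name)

try:
    spark.sql("SELECT * FROM does_not_exist").show()
except PySparkException as ex:
    log_error(ex)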
3. Currently, we log all errors as ".txt" files in an ADLS storage account. The idea is to build an operational dashboard on top of the errors. I think table-based error logging would be simpler to report on than periodically profiling the ADLS storage account/containers/folders and reporting from that - e.g. the dashboard could sit on a simple rollup query like the one sketched below.
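For example (again assuming the hypothetical ops.error_log table from the sketch above):

daily_errors = spark.sql("""
    SELECT date_trunc('DAY', event_ts) AS event_day,
           error_class,
           sqlstate,
           count(*) AS error_count
    FROM ops.error_log
    GROUP BY date_trunc('DAY', event_ts), error_class, sqlstate
    ORDER BY event_day DESC, error_count DESC
""")
daily_errors.show()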
4. Also, I noticed that when we capture and log errors as ".txt" files, the error message is sometimes very detailed, running to hundreds of lines; not sure if it is the same on your end. One option I'm considering is keeping the structured fields and truncating the free-text message before logging, as sketched below.
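A possible way to do that - the 500-character cap is arbitrary and would need tuning:

MAX_MESSAGE_CHARS = 500  # arbitrary cut-off for the free-text message

def short_message(ex):
    # Keep only the first MAX_MESSAGE_CHARS characters of the full error text,
    # relying on error class / parameters / SQLSTATE for the detail.
    full = str(ex)
    if len(full) <= MAX_MESSAGE_CHARS:
        return full
    return full[:MAX_MESSAGE_CHARS] + " ...[truncated]"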
Appreciate a fruitful discussion on this.