Hi!
We have a job that runs every hour. It extracts data from an API and saves it to a Databricks table.
Sometimes the job fails with an "org.apache.spark.SparkException" error. Here is the full error:
An error occurred while calling o7353.saveAsTable.
: org.apache.spark.SparkException: Job aborted.
...
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 177.0 failed 4 times, most recent failure: Lost task 1.3 in stage 177.0 (TID 240) (...): java.lang.NullPointerException
The failure happens at the moment the data is saved to the table.
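For context, the save step looks roughly like this (simplified; table and column names are changed, and fetch_from_api stands in for our API client):

from pyspark.sql import functions as F

# Simplified version of the hourly job: the API response is parsed into
# a list of dicts, converted to a DataFrame, and appended to a table.
records = fetch_from_api()  # placeholder for our API call, returns a list of dicts
df = spark.createDataFrame(records)

(df.withColumn("ingested_at", F.current_timestamp())
   .write
   .mode("append")
   .saveAsTable("my_schema.my_table"))  # this is the call that sometimes fails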
I want to understand why this is happening and whether it is possible to solve this problem.
Thank you!