06-12-2022 02:19 PM
Hello, I'm trying to read a table located in PostgreSQL that contains 28 million rows, and the job fails with the following error:
"SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3) (10.139.64.6 executor 3): ExecutorLostFailure (executor 3 exited caused by one of the running tasks) Reason: Executor heartbeat timed out after 161734 ms"
Could you help me please?
Thanks
07-05-2022 07:50 AM
This could be due to one of two causes: scalability or timeout.
For scalability - consider moving to a larger node type.
For timeout - you can set the following in the cluster's Spark config (spark.network.timeout must stay larger than spark.executor.heartbeatInterval):
spark.executor.heartbeatInterval 300s
spark.network.timeout 320s
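One more thing worth checking: if the table is read over JDBC without partitioning options, Spark pulls all 28 million rows through a single task, which is exactly the kind of long-running task that misses heartbeats. Setting partitionColumn / lowerBound / upperBound / numPartitions splits the scan into parallel slices. The sketch below roughly mirrors Spark's stride logic in pure Python so you can see what predicates it generates; the column name "id" and the bounds are assumptions, not something from your table.

```python
# Rough illustration of how Spark slices a JDBC read into partitions
# when partitionColumn/lowerBound/upperBound/numPartitions are set.
# (Approximation of Spark's JDBCRelation stride logic, not the exact code.)

def jdbc_partition_predicates(column, lower, upper, num_partitions):
    """Return one WHERE-clause predicate per partition, each covering an
    equal-width slice of [lower, upper). The first slice also picks up NULLs."""
    stride = (upper - lower) // num_partitions
    preds = []
    current = lower
    for i in range(num_partitions):
        if i == 0:
            preds.append(f"{column} < {current + stride} OR {column} IS NULL")
        elif i == num_partitions - 1:
            preds.append(f"{column} >= {current}")
        else:
            preds.append(f"{column} >= {current} AND {column} < {current + stride}")
        current += stride
    return preds

# Hypothetical bounds for a 28M-row table keyed by a numeric "id" column:
for p in jdbc_partition_predicates("id", 1, 28_000_000, 4):
    print(p)
```

In the actual read, you would pass the same knobs to the JDBC source, e.g. spark.read.format("jdbc").option("partitionColumn", "id").option("lowerBound", "1").option("upperBound", "28000000").option("numPartitions", "16"), plus a fetchsize option (e.g. 10000) so the Postgres driver streams rows in batches instead of buffering them.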
07-07-2022 05:26 PM
Hi @Boumaza nadia ,
Did you check the executor 3 logs while the cluster was active? If you see this error again, I would highly recommend checking the executor's logs to confirm the root cause of the issue.