Could you please suggest the best cluster configuration for the use case stated below, along with tips to resolve the errors shown?
Use case:
There could be 4 or 5 Spark jobs running concurrently.
Each job reads 40 input files and writes 120 output files (three times the input count) to S3 in CSV format.
All concurrent jobs read the same 39 input files; only one input file varies per job. A rough sketch of a single job follows below.
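Roughly, each job has this shape (a minimal sketch; the bucket names, paths, header option, and the union step are placeholders for the real logic):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("concurrent-job").getOrCreate()

// 39 shared input files plus 1 job-specific file (bucket and paths are placeholders)
val shared = spark.read.option("header", "true").csv("s3a://my-bucket/shared/*.csv")
val jobSpecific = spark.read.option("header", "true").csv("s3a://my-bucket/job-1/input.csv")

// job-specific transformation goes here; the union is just a stand-in
val result = shared.unionByName(jobSpecific)

// roughly three output files per input file, written as CSV to S3
result.repartition(120).write.mode("overwrite").csv("s3a://my-bucket/job-1/output/")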
Often the jobs fail with the following errors:
Job aborted due to stage failure: Task 0 in stage 3084.0 failed 4 times, most recent failure: Lost task 0.3 in stage 3084.0 (TID...., ip..., executor 0): org.apache.spark.SparkException: Task failed while writing rows
Job aborted due to stage failure: Task 0 in stage 3078.0 failed 4 times, most recent failure: Lost task 0.3 in stage 3078.0 (TID...., ip..., executor 0): java.io.InterruptedIOException: getFileStatus on s3:<file path> : com.amazonaws.SdkClientException: Unable to execute HTTP request. Timeout waiting for connection from pool
Given below is my SparkConf:
import org.apache.spark.SparkConf
import org.apache.spark.serializer.KryoSerializer

new SparkConf()
  .set("spark.serializer", classOf[KryoSerializer].getName)
  .set("spark.hadoop.fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
  .set("spark.hadoop.fs.s3a.connection.maximum", "400")
  .set("fs.s3a.threads.max", "200")
  .set("spark.hadoop.fs.s3a.fast.upload", "true")
The Spark UI Environment section shows:
spark.hadoop.fs.s3a.connection.maximum = 200
fs.s3a.threads.max = 136
which do not align with my settings.
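In case it helps, the effective values the S3A connector sees can also be read back at runtime (a minimal sketch, assuming an active SparkSession named spark):

// spark.hadoop.* entries from SparkConf are copied into the Hadoop configuration
// with the "spark.hadoop." prefix stripped, so this shows what S3A actually uses.
val hadoopConf = spark.sparkContext.hadoopConfiguration
println(hadoopConf.get("fs.s3a.connection.maximum"))
println(hadoopConf.get("fs.s3a.threads.max"))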
Questions:
(1) What needs to be done to cache the input files so that subsequent concurrent jobs can reuse them? Would a storage-optimized cluster configuration with Delta cache do this? (See the sketch after these questions for the in-application caching I have in mind.)
(2) Why don't the numbers in the Spark UI Environment section match my SparkConf settings?
(3) How can I resolve these job errors?
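For reference, this is the kind of explicit in-application caching I mean in question (1) (a minimal sketch; bucket and paths are placeholders, and it assumes the concurrent jobs run inside one Spark application rather than as separate applications):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().getOrCreate()

// Cache the 39 shared input files once so concurrent jobs reuse them
// instead of re-reading from S3 (only effective within a single application).
val shared = spark.read.option("header", "true").csv("s3a://my-bucket/shared/*.csv").cache()
shared.count() // materialize the cache before the concurrent jobs start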
Thanks,
Vee