by nadia • New Contributor II
- 15005 Views
- 2 replies
- 2 kudos
Hello, I'm trying to read a table that is located on PostgreSQL and contains 28 million rows. I get the following error: "SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in sta...
Latest Reply
Hi @Boumaza nadia​, did you check the logs for executor 3 while the cluster was active? If you get this error message again, I highly recommend checking the executor logs to confirm the cause of the issue.
1 More Replies
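Beyond checking the executor logs, a common cause of this failure when reading a large Postgres table is an unpartitioned JDBC read that funnels all 28 million rows through a single task. The sketch below is a hedged illustration of a partitioned read; the URL, table, column name, and bounds are all hypothetical placeholders, and it assumes the table has a numeric key column.

```python
# A minimal sketch of a partitioned JDBC read, assuming the table has a numeric
# primary-key column named "id". Splitting the scan across tasks avoids pulling
# everything through one executor. All names and values here are hypothetical.
jdbc_options = {
    "url": "jdbc:postgresql://example-host:5432/mydb",
    "dbtable": "big_table",
    "user": "user",
    "password": "password",
    "partitionColumn": "id",    # numeric column Spark splits the read on
    "lowerBound": "1",          # roughly min(id) in the table
    "upperBound": "28000000",   # roughly max(id) in the table
    "numPartitions": "64",      # 64 parallel read tasks instead of 1
}
# On a live cluster:
# df = spark.read.format("jdbc").options(**jdbc_options).load()
```

With partitioning in place, each task fetches only a slice of the id range, which keeps individual tasks small enough to avoid the lost-task failures quoted above.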
- 4340 Views
- 7 replies
- 2 kudos
I tried to read a file from S3 but am facing the error below: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 53.0 failed 4 times, most recent failure: Lost task 0.3 in stage 53.0 (TID 82, xx.xx.xx.xx, executor 0): com...
Latest Reply
Which DBR version are you using? Could you please test it with a different DBR version, for example DBR 9.x?
6 More Replies
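For reference, a basic S3 read on Databricks usually goes through the s3a:// scheme (or a DBFS mount). The bucket and path below are hypothetical; this is just the shape of the call, not the poster's actual code.

```python
# Hypothetical bucket and path; Spark's S3 connector uses the s3a:// scheme.
s3_path = "s3a://my-bucket/raw/events.parquet"

# On a live cluster with credentials configured (instance profile, keys, or mount):
# df = spark.read.parquet(s3_path)
# df.printSchema()
```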
by Mayank • New Contributor III
- 4651 Views
- 8 replies
- 4 kudos
I am trying to load parquet files using Auto Loader. Below is the code:
def autoload_to_table(data_source, source_format, table_name, checkpoint_path):
    query = (spark.readStream
             .format('cloudFiles')
             .option('cl...
Latest Reply
Hi again @Mayank Srivastava​, thank you so much for getting back to us and marking the answer as best. We really appreciate your time. Wish you a great Databricks journey ahead!
7 More Replies
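The truncated helper in the question can be fleshed out along the following lines. This is a hedged reconstruction, not the poster's actual code: the cloudFiles option names are the documented Auto Loader ones, and availableNow triggers require a recent runtime (Spark 3.3+ / DBR 10.4+).

```python
def autoload_to_table(data_source, source_format, table_name, checkpoint_path):
    # Incrementally ingest files from data_source into a table using Auto Loader.
    # Note: `spark` is the session provided by the Databricks notebook runtime.
    query = (spark.readStream
             .format("cloudFiles")
             .option("cloudFiles.format", source_format)            # e.g. "parquet"
             .option("cloudFiles.schemaLocation", checkpoint_path)  # schema tracking
             .load(data_source)
             .writeStream
             .option("checkpointLocation", checkpoint_path)
             .trigger(availableNow=True)   # process available files, then stop
             .toTable(table_name))
    return query
```

A typical call would be `autoload_to_table("/mnt/raw/events", "parquet", "bronze.events", "/mnt/checkpoints/events")`, with all of those paths and names being whatever your workspace actually uses.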
by nadia • New Contributor II
- 859 Views
- 1 replies
- 0 kudos
I use Databricks and I am trying to connect to PostgreSQL via the following code: "jdbcHostname = "xxxxxxx"jdbcDatabase = "xxxxxxxxxxxx"jdbcPort = "5432"username = "xxxxxxx"password = "xxxxxxxx"jdbcUrl = "jdbc:postgresql://{0}:{1}/{2}".format(jdbcHostname, jd...
Latest Reply
Hi @Boumaza nadia​, please check the Ganglia metrics for the cluster. This could be a scalability issue where the cluster is overloaded. It can happen when a large partition does not fit into the given executor's memory. To fix this, we recommend bump...
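For reference, the URL-building step in the question follows the standard PostgreSQL JDBC pattern. The values below are hypothetical stand-ins for the masked ones in the post:

```python
# Hypothetical connection details; the PostgreSQL JDBC driver expects a URL of
# the form jdbc:postgresql://<host>:<port>/<database>.
jdbcHostname = "example-host"
jdbcPort = 5432
jdbcDatabase = "mydb"

jdbcUrl = "jdbc:postgresql://{0}:{1}/{2}".format(jdbcHostname, jdbcPort, jdbcDatabase)
```

If the connection itself works but reads overload the cluster, pairing this URL with the JDBC partitioning options (partitionColumn, lowerBound, upperBound, numPartitions) spreads the load across executors.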
- 3301 Views
- 2 replies
- 1 kudos
Could you please suggest the best cluster configuration for the use case stated below, and tips to resolve the errors shown? Use case: there could be 4 or 5 Spark jobs that run concurrently. Each job reads 40 input files and spits out 120 output files ...
Latest Reply
Hi @Vetrivel Senthil​ , Just a friendly follow-up. Do you still need help? Please let us know.
1 More Replies
- 2266 Views
- 3 replies
- 0 kudos
Job aborted due to stage failure: Task 0 in stage 3084.0 failed 4 times, most recent failure: Lost task 0.3 in stage 3084.0 (TID...., ip..., executor 0): org.apache.spark.SparkException: Task failed while writing rows. Job aborted due to stage failure:...
Latest Reply
Hi @Vetrivel Senthil​ , Are you still facing the problem? Were you able to resolve it by yourself, or do you still need help? Please let us know.
2 More Replies
- 675 Views
- 1 replies
- 0 kudos
Job aborted due to stage failure: Task 12 in stage 1446.0 failed 4 times, most recent failure: Lost task 12.3 in stage 1446.0 (TID 2922) (10.24.175.143 executor 41): ExecutorLostFailure (executor 41 exited caused by one of the running tasks) Reason: ...
Latest Reply
Hi @Shahul Hameed​ , Can you please share the command used while getting this error?
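ExecutorLostFailure messages like the one quoted above often trace back to memory pressure on the executor. The settings below are real Spark configuration keys with purely illustrative values, sketched as one direction to explore rather than a fix for this specific job:

```python
# Illustrative values only; what actually helps depends on the workload.
tuning = {
    # More, smaller shuffle partitions means less data per task.
    "spark.sql.shuffle.partitions": "400",
    # Smaller input splits reduce per-task memory when reading large files.
    "spark.sql.files.maxPartitionBytes": "64MB",
}

# On a live cluster:
# for key, value in tuning.items():
#     spark.conf.set(key, value)
```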
- 2473 Views
- 5 replies
- 3 kudos
I loaded a CSV file with five columns into a DataFrame, then added around 15+ columns using the DataFrame.withColumn method. After adding these columns, running the query df.rdd.isEmpty() throws the error below. org.apache.spark.SparkExc...
Latest Reply
@Thushar R​ - Thank you for your patience. We are looking for the best person to help you.
4 More Replies
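As a hedged aside on the question itself: df.rdd.isEmpty() converts the DataFrame to an RDD, which forces every withColumn expression to be evaluated row by row and is often where such failures surface. Staying in the DataFrame API checks at most one row. A minimal sketch, where df is any Spark DataFrame:

```python
def df_is_empty(df):
    # head(1) fetches at most one row, so the check stays cheap even on wide
    # frames with many derived columns. On Spark 3.3+ you can call
    # df.isEmpty() directly instead.
    return len(df.head(1)) == 0
```

This avoids the DataFrame-to-RDD conversion entirely, which sidesteps the serialization work that rdd.isEmpty() triggers.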