I use Databricks and I am trying to connect to PostgreSQL with the following code:
"jdbcHostname = "xxxxxxx"
jdbcDatabase = "xxxxxxxxxxxx"
jdbcPort = "5432"
username = "xxxxxxx"
password = "xxxxxxxx"
jdbcUrl = "jdbc:postgresql://{0}:{1}/{2}".format(jdbcHostname, jdbcPort, jdbcDatabase)
connectionProperties = {
"user" : username,
"password" : password,
"driver" : "org.postgresql.Driver"
}
df = spark.read.jdbc(url=jdbcUrl, table="xxxxxxxxx", properties=connectionProperties)
I am trying to read a table that has about 28 million rows, and the job fails with the following error:
"SparkException: Job aborted due to stage failure: Task 0 in stage 3.0 failed 4 times, most recent failure: Lost task 0.3 in stage 3.0 (TID 6) (10.139.64.5 executor 4): ExecutorLostFailure (executor 4 exited caused by one of the running tasks) Reason: Executor heartbeat timed out after 150527 ms"
Could you please help me?
Thanks.