Efficiently Delete/Update/Insert Large Datasets of Records in PostgreSQL from Spark DataFrame

skohade1
New Contributor II

I am migrating my ETL process from Pentaho to Databricks, using PySpark.

I have posted all the details here: Staging Ground: How to Insert,Update,Delete data using databricks for large records in PostgreSQL fr...
Can anyone please suggest an efficient way to perform these insert, update, and delete operations on large volumes of records?
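
For context, this is a minimal sketch of the staging-table pattern I have been experimenting with. The table names (public.orders, public.orders_staging), the key column (order_id), and all connection details are placeholders, and psycopg2 is assumed to be installed on the cluster. The idea is to bulk-load the batch into a staging table with Spark's JDBC writer, then let PostgreSQL apply the upsert and delete as set-based statements:

```python
# Placeholder connection details -- replace with your own (ideally via secrets).
import psycopg2

jdbc_url = "jdbc:postgresql://<host>:5432/<db>"

# 1) Bulk-load the incoming batch (DataFrame `df`) into a staging table.
(df.write.format("jdbc")
    .option("url", jdbc_url)
    .option("dbtable", "public.orders_staging")   # staging table, overwritten each run
    .option("user", "<user>")
    .option("password", "<password>")
    .option("driver", "org.postgresql.Driver")
    .option("batchsize", "10000")                 # larger batches reduce round trips
    .mode("overwrite")
    .save())

# 2) Apply the changes set-based on the PostgreSQL side.
upsert_sql = """
    INSERT INTO public.orders AS t (order_id, status, amount)
    SELECT order_id, status, amount FROM public.orders_staging
    ON CONFLICT (order_id)
    DO UPDATE SET status = EXCLUDED.status, amount = EXCLUDED.amount;
"""
delete_sql = """
    DELETE FROM public.orders t
    WHERE NOT EXISTS (
        SELECT 1 FROM public.orders_staging s WHERE s.order_id = t.order_id
    );
"""

with psycopg2.connect(host="<host>", dbname="<db>",
                      user="<user>", password="<password>") as conn:
    with conn.cursor() as cur:
        cur.execute(upsert_sql)
        # Only run the delete if the batch represents the full desired state of the table.
        cur.execute(delete_sql)
```

This avoids row-by-row updates/deletes from the driver, but I am not sure it is the best approach for very large tables, so any advice on a better pattern would be appreciated.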