I have a Databricks notebook that writes data from a Parquet file with 4 million records into a new Delta table. It's a simple script. It works fine when I run it from the Databricks notebook on the cluster with the config shown below. But when I run it through an ADF pipeline, which spins up a dynamic cluster with the second config below, it fails with the error below. Can you please suggest what might be wrong? Thanks in advance.
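
For context, the write is essentially of this shape (a minimal sketch; the path and table name here are placeholders, not the real job's values):

```python
# Minimal sketch of the kind of write involved. The source path and
# target table name are placeholders. `spark` is the session that
# Databricks notebooks provide by default.
df = spark.read.parquet("/mnt/source/my_data/")  # ~4 million records

(
    df.write
      .format("delta")
      .mode("overwrite")
      .saveAsTable("my_schema.my_new_table")
)
```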

ADF dynamic PySpark cluster:
ClusterNode: Standard_D16ads_v5
ClusterDriver: Standard_D32ads_v5
ClusterVersion: 15.4.x-scala2.12
ClusterWorkers: 2:20
Executor memory here: 19g
Off-heap memory: 500 MB
Databricks notebook (interactive) PySpark cluster:
Executor memory here: 12g
Off-heap memory: 36 GB
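
To compare the two environments directly, a quick diagnostic cell like the sketch below can be run on both clusters to print the effective memory settings (it assumes the built-in `spark` session of a Databricks notebook):

```python
# Diagnostic sketch: print the effective memory configs on the current
# cluster, to compare the ADF job cluster against the interactive one.
for key in (
    "spark.executor.memory",
    "spark.memory.offHeap.enabled",
    "spark.memory.offHeap.size",
    "spark.driver.memory",
):
    # RuntimeConfig.get accepts a default value for keys that are not set.
    print(key, "=", spark.conf.get(key, "<not set>"))
```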