Hi @rakesh saini, since you note this happens when loading into Delta, can you share more detail about the source data, such as its format (JSON, CSV, etc.)? Typically, a hanging job is caused by the read and transform stages, not the write stages.
Other useful information that would help us assist:
- A screenshot of the Explain Plan, and/or the DAG in the Spark UI
- A screenshot of the cluster metrics, e.g. from the Ganglia UI in Databricks — perhaps there is a memory or CPU bottleneck.
- The specs of your Spark cluster: node types, number of workers, etc.