@Retired_mod, thanks for the detailed suggestions.
I believe the first reference relates to the issue; however, after setting spark.driver.maxResultSize to several values (e.g., 10g, 20g, 30g), a new error occurs (see below).
The operation is a collect() on a Delta table with 380 MM rows and 5 columns (3.2 GB on disk, partitioned across 55 files). At the ~48-byte average row size reported in the initial error, that works out to roughly 380 MM × 48 B ≈ 18.2 GB, so shouldn't 20g be sufficient?
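For reference, here is a minimal sketch of the failing operation. The table path and session setup are hypothetical; on Databricks the spark session already exists, and spark.driver.maxResultSize is a driver property normally set in the cluster's Spark config rather than at runtime in the notebook.

```python
from pyspark.sql import SparkSession

# Hypothetical standalone setup; on Databricks this config is applied
# via the cluster's Spark config before the driver starts.
spark = (
    SparkSession.builder
    .config("spark.driver.maxResultSize", "20g")  # one of the values tried: 10g / 20g / 30g
    .getOrCreate()
)

# collect() materializes all ~380 MM rows as driver-side objects,
# so driver memory usage can exceed the 3.2 GB on-disk size.
rows = spark.read.format("delta").load("/path/to/delta_table").collect()  # path is hypothetical
```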
New Error
The spark driver has stopped unexpectedly and is restarting. Your notebook will be automatically reattached.
at com.databricks.spark.chauffeur.Chauffeur.onDriverStateChange(Chauffeur.scala:1367)