Delta Table with 130 columns taking time
04-12-2022 11:25 PM
Hi All,
We are facing an unusual issue while loading data into a Delta table using Spark SQL. We have a Delta table with around 135 columns that is also PARTITIONED BY a column. We are trying to load about 15 million rows into it, but the data is still not loaded even though the command has been running for the last 5 hours. Another table with around 15 columns and roughly 25 million rows processes fine, and that command completes within 5-10 minutes. Can anyone please help me understand the issue?
Thanks.
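For reference, the load is something along these lines; this is only a minimal sketch of the scenario described above, and the table name, source path, and view name are placeholders, not the actual code.

```scala
// Sketch of a Spark SQL load into a wide, partitioned Delta table.
// `wide_delta_table`, the source path, and `source_vw` are assumed names.
import org.apache.spark.sql.SparkSession

object WideTableLoadSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("wide-delta-load-sketch")
      .getOrCreate()

    // Register the ~135-column source data as a temporary view.
    spark.read.format("parquet").load("/path/to/source")
      .createOrReplaceTempView("source_vw")

    // Insert into the already-created, partitioned Delta table via Spark SQL.
    spark.sql("INSERT INTO wide_delta_table SELECT * FROM source_vw")

    spark.stop()
  }
}
```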
04-13-2022 09:19 AM
@rakesh saini, PARTITIONED BY works best with medium-cardinality columns and tables larger than roughly 100 GB; anything that doesn't fit those two criteria isn't a great candidate for partitioning. Instead, you should run OPTIMIZE, which compacts small files and, combined with ZORDER BY, co-locates related data to speed up your queries. I'd also recommend checking out the documentation on optimizing performance using file management.
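As a minimal sketch of what that looks like from Scala (the table name `my_delta_table` and the column `event_date` are just placeholders, not from your setup):

```scala
// Run OPTIMIZE (and optionally ZORDER BY) on a Delta table via Spark SQL.
import org.apache.spark.sql.SparkSession

object OptimizeSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("delta-optimize-sketch")
      .getOrCreate()

    // Compact small files in the table.
    spark.sql("OPTIMIZE my_delta_table")

    // Co-locate data on a frequently filtered column instead of partitioning on it.
    spark.sql("OPTIMIZE my_delta_table ZORDER BY (event_date)")

    spark.stop()
  }
}
```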
04-14-2022 02:52 AM
Thanks @George Chirapurath for the reply.
We are facing this issue on the very first load of data into the Delta table.
04-15-2022 04:10 PM
Hi @rakesh saini, since you note this is a problem when you're loading into Delta, can you provide more detail on the source data you are trying to load, such as the data format (JSON, CSV, etc.)? Typically, a hanging job is due to the read and transform stages, not the write stage.
Other useful information that would help us assist:
- A screenshot of the Explain Plan and/or the DAG in the Spark UI (a quick way to print the plan is sketched after this list)
- A screenshot of the cluster metrics, e.g. from the Ganglia UI in Databricks. Perhaps there is a memory or CPU bottleneck.
- The specs of your Spark cluster. Node types, # of workers, etc.
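To capture the Explain Plan from the driver, something like the following works; this is only a sketch, and `sourceDf` plus the source path stand in for however your 135-column DataFrame is actually built.

```scala
// Print the physical plan of the DataFrame before the write is triggered.
import org.apache.spark.sql.SparkSession

object ExplainPlanSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("explain-plan-sketch")
      .getOrCreate()

    // Placeholder for the real read/transform pipeline.
    val sourceDf = spark.read.format("parquet").load("/path/to/source")

    // "formatted" mode (Spark 3.x) prints a readable physical plan to the driver log.
    sourceDf.explain("formatted")

    spark.stop()
  }
}
```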
04-26-2022 06:16 AM
Hi @Kaniz Fatma, thanks for the follow-up.
Yes, I am still facing the same issue. Since @Parker Temple mentioned the cluster configuration (memory, number of worker nodes, etc.), I will try to upgrade my ADB cluster first and then re-load the data. Currently I am using a cluster with 16 GB of memory and 3 worker nodes.
04-27-2022 08:27 AM
@Kaniz Fatma @Parker Temple I found the root cause: it is a serialization problem. We are using a UDF to derive a column on the DataFrame, and when we try to load the data into the Delta table or write it to a Parquet file we hit the serialization issue. Can you please suggest the best way to create UDFs in Scala with an explicit return type, or an alternative to UDFs (with some example)?
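To illustrate what I mean, something like the sketch below is the pattern I'm asking about; the column names `first_name`/`last_name` and the derived column are made up, not our actual logic. Task-not-serializable errors often come from a UDF closing over state of a non-serializable outer class, so keeping the UDF as a plain function value, or replacing it with built-in column functions, is the kind of fix I'd like confirmed.

```scala
// Sketch: a self-contained Scala UDF vs. the same logic with built-in functions.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, concat_ws, udf, upper}

object UdfSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("udf-sketch")
      .getOrCreate()
    import spark.implicits._

    val df = Seq(("john", "doe"), ("jane", "roe")).toDF("first_name", "last_name")

    // Option 1: a UDF built from a plain function value. The String return type
    // comes from the lambda's signature, and because it captures no outer class
    // state, only the lambda itself is serialized to the executors.
    val fullNameUdf = udf((first: String, last: String) =>
      s"${first.toUpperCase} ${last.toUpperCase}")
    val withUdf = df.withColumn("full_name",
      fullNameUdf(col("first_name"), col("last_name")))

    // Option 2 (preferred when possible): express the same logic with built-in
    // column functions, which avoids UDF serialization and lets Catalyst optimize it.
    val withBuiltins = df.withColumn("full_name",
      concat_ws(" ", upper(col("first_name")), upper(col("last_name"))))

    withUdf.show()
    withBuiltins.show()
    spark.stop()
  }
}
```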

