04-12-2022 11:25 PM
Hi All,
We are facing an unusual issue while loading data into a Delta table using Spark SQL. We have one Delta table with around 135 columns that is also PARTITIONED BY. We are trying to load about 15 million rows into it, but the data still has not loaded even though the command has been running for the last 5 hours. Another table with around 15 columns and roughly 25 million rows processes fine, with the command completing in 5-10 minutes. Can anyone please help me understand the issue?
Thanks.
04-13-2022 09:19 AM
@rakesh saini , Partition By works best with medium-cardinality data and tables larger than ~100 GB; anything that doesn't fit those two categories won't be a great candidate for partitioning. Instead, you should run OPTIMIZE, which compacts small files and can co-locate related data via Z-ordering to speed up your queries. I'd also recommend that you check out the documentation on optimizing performance using file management.
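For example, something like this (the table name and Z-order column below are just placeholders, substitute your own):

// Compact small files in the Delta table; ZORDER BY co-locates rows that share
// values in the given column, which speeds up reads that filter on it.
spark.sql("OPTIMIZE my_delta_table ZORDER BY (event_date)")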
04-14-2022 02:52 AM
Thanks @George Chirapurath for the reply.
We are facing this issue on the very first load of the data into Delta.
04-15-2022 04:10 PM
Hi @rakesh saini , since you note this is a problem when you're loading into Delta, can you provide more detail on the source data you are trying to load, such as the data format (JSON, CSV, etc.)? Typically, a hanging job is due to the read and transform stages, not the write stages.
Other useful information that would help us better assist: your cluster configuration (memory, number of worker nodes, etc.).
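For example, you could force the read and transformations to execute before the write, to see where the time is actually going (the format, path, and variable names below are placeholders):

// Run the source read and transformations without the Delta write.
// If this alone hangs, the bottleneck is upstream of the write stage.
val source = spark.read
  .format("csv")              // substitute your actual source format
  .option("header", "true")
  .load("/path/to/source")    // placeholder path

val transformed = source      // ...apply your transformations here...
transformed.cache()
println(transformed.count())  // materializes the plan end-to-end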
04-26-2022 06:16 AM
Hi @Kaniz Fatma, thanks for the follow-up.
Yes, I am still facing the same issue. As @Parker Temple mentioned cluster configuration (memory, number of worker nodes, etc.), I will try to upgrade my ADB cluster first and then re-load the data. Currently I am using a cluster with 16 GB of memory and 3 worker nodes.
04-27-2022 08:27 AM
@Kaniz Fatma @Parker Temple I found the root cause: it is serialization. We are using a UDF to derive a column on the DataFrame, and when we try to load the data into the Delta table or write it to a Parquet file we hit a serialization error. Can you please suggest the best way to create UDFs in Scala with an explicit return type, or an alternative to UDFs (with some example)?
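For context, a simplified sketch of the pattern we are using (the DataFrame, column, and function names below are placeholders, not our actual code):

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.{udf, when}

// Typed Scala UDF: the String return type is inferred from the lambda's signature.
// Anything the lambda closes over must be serializable, or Spark fails when the
// write stage ships the closure to the executors.
val deriveCategory = udf((amount: Double) => if (amount > 100.0) "high" else "low")

def addCategory(df: DataFrame): DataFrame =
  df.withColumn("category", deriveCategory(df("amount")))

// Built-in alternative that avoids UDF serialization entirely:
def addCategoryBuiltin(df: DataFrame): DataFrame =
  df.withColumn("category", when(df("amount") > 100.0, "high").otherwise("low"))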