12-12-2022 10:14 PM
I am trying to save a dataframe to a Delta table after a series of data manipulations using UDFs. I tried this code:
(
df
.write
.format('delta')
.mode('overwrite')
.option('overwriteSchema', 'true')
.saveAsTable('output_table')
)
but this takes more than 2 hours. So I converted the dataframe into a local SQL temp view and saved the df as a Delta table from that view. This worked for one of the notebooks (14 minutes), but for the other notebooks it also takes around 2 hours to write to the Delta table. I am not sure why this is happening for such a small dataset. Any solution is appreciated.
code:
df.createOrReplaceTempView("sql_temp_view")
%sql
DROP TABLE IF EXISTS default.output_version_2;
CREATE TABLE default.output_version_2 AS
SELECT * FROM sql_temp_view;
12-12-2022 10:43 PM
What is the cluster config you are using? Also, what sort of transformations are done before the final dataframe is created?
12-12-2022 11:23 PM
Can you also share the number of partitions the df has?
you can use df.rdd.getNumPartitions()
12-12-2022 11:45 PM
96 partitions
12-13-2022 02:23 AM
Since the data volume is so low, try repartitioning the data before you write, using repartition() or coalesce().
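A minimal sketch of that suggestion, assuming the dataframe and table name from the original post (the target of 1 partition is illustrative for a ~35 MB dataset, not a prescription):

(
    df
    .coalesce(1)  # merge the 96 partitions into one; coalesce avoids a full shuffle
    .write
    .format('delta')
    .mode('overwrite')
    .option('overwriteSchema', 'true')
    .saveAsTable('output_table')
)

If you need an even redistribution of rows rather than a plain merge, repartition(n) does a full shuffle instead.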
01-04-2023 01:40 AM
I have a similar issue: the number of partitions is 1 at the table level, and the only transformations are casts such as date and decimal(20, 2) applied with withColumn. The cluster has 5 worker nodes, and 180,890 records take 10 minutes. How can I improve the performance, and what are the possible ways to find where the time is being spent?
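A hedged sketch for this single-partition case: with 1 partition the whole job runs as a single task, so only one core on one of the 5 workers does any work. Increasing the partition count before the transformations lets the cluster parallelize (the count of 8 below is purely illustrative):

# Spread the data across more partitions so all 5 workers get tasks;
# with one partition, everything runs serially on one core.
df = df.repartition(8)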
09-10-2024 07:46 AM
I am having the same issue: a read/write takes around 10 hours for about 21 GB of data.
12-12-2022 11:31 PM
Hi @Suresh Kakarlapudi, what is your file size?
12-13-2022 01:00 AM
35 MB
01-04-2023 09:15 PM
Is your Databricks workspace set up with VNet injection, by any chance?
08-23-2023 01:35 PM
@Jfoxyyc I am having a similar problem and came across this post. Does VNet injection cause this? My workspace is set up like that.
08-24-2023 07:57 AM
You should also look at the SQL plan to check whether the write phase is really the part taking the time. Since Spark uses lazy evaluation, some other phase might be the one that is slow.
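A minimal sketch of how to check that, assuming the same dataframe df as above and a PySpark 3.x runtime (for the mode argument):

# Print the physical plan; with lazy evaluation, every upstream
# transformation (including the UDFs) runs only when the write fires.
df.explain(mode='formatted')

# Materialize the transformations before writing, so the Spark UI's
# SQL/Jobs tabs show whether the time goes to the transformations
# or to the Delta write itself.
df.cache()
df.count()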