09-23-2024 08:08 AM
I have huge datasets. Transformation, display, print, and show all work well on this data when it is read into a pandas DataFrame. But when the same DataFrame is converted to a Spark DataFrame, it takes minutes to display even a single row and hours to write the data to a Delta table.
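Roughly, the flow looks like this (paths and names are just placeholders):

```python
# Illustrative sketch of the flow described above; names are hypothetical.
import pandas as pd

pdf = pd.read_csv("/dbfs/tmp/huge_dataset.csv")   # pandas handles this fine

# Enabling Arrow generally speeds up the pandas -> Spark conversion.
spark.conf.set("spark.sql.execution.arrow.pyspark.enabled", "true")

df = spark.createDataFrame(pdf)                   # conversion to Spark

display(df.limit(1))                              # minutes on the slow cluster
df.write.format("delta").saveAsTable("my_table")  # hours on the slow cluster
```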
09-23-2024 08:39 AM
Can you please share the code snippet?
09-23-2024 09:02 AM
[code snippet shared as an attachment; not captured here]
09-23-2024 10:53 AM
3 minutes to write 5 rows is not good.
Are you running this on a shared cluster with many other jobs? Would it be possible to test this on a personal cluster to isolate the issue?
Try displaying the DataFrame in one cell with display(df), and save it in another cell.
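Something like this, in two separate notebook cells (the table name is just an example):

```python
# Cell 1: display on its own, so you can time this action in isolation.
display(df)

# Cell 2 (separate cell): write to a Delta table; the name is illustrative.
df.write.format("delta").mode("overwrite").saveAsTable("my_schema.my_table")
```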
09-24-2024 04:16 AM
The cluster I was using for this was not performing any other tasks, although the Azure vCPU quota for its cluster family was at 83% at the time. I created a new cluster from a family that had all of its cores available, and Spark works well there. But even at 83% quota utilization, should the earlier cluster (the high-memory one) perform so poorly?
09-24-2024 05:10 AM
It's good to hear it worked on the new cluster family.
If the quota is already at 83%, the number of nodes your cluster requests matters. If Azure cannot provision that many resources, the cluster runs under-provisioned, which can result in very poor performance.
To verify this, reduce the number of nodes so your cluster can start within the remaining quota and complete the job.
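For what it's worth, here is a rough sketch of checking regional vCPU quota usage with the Azure SDK (assuming azure-identity and azure-mgmt-compute are installed; the region and environment variable are placeholders):

```python
# Minimal sketch for inspecting per-family vCPU quota vs. usage in a region.
import os
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

client = ComputeManagementClient(DefaultAzureCredential(),
                                 os.environ["AZURE_SUBSCRIPTION_ID"])

# Print current usage against the limit for each VM family in the region,
# to see whether your cluster's family is close to its quota.
for usage in client.usage.list(location="eastus"):
    if usage.current_value > 0:
        print(f"{usage.name.localized_value}: {usage.current_value}/{usage.limit}")
```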
09-26-2024 02:40 AM
Earlier I was using the EA family of clusters, which are memory optimized. Now that I have shifted to general purpose compute, the same data is written in seconds. Is it that the memory-optimized EA family of clusters is not very performant for Spark operations?
09-26-2024 03:31 AM
For processing 5 rows, EA vs. Non-EA doesn't matter.
As you mentioned before, it could be that the quota had no capacity available for the cluster.
09-26-2024 08:05 AM
But even with General Purpose compute (256 GB memory, 64 cores, 8 max worker nodes, working solely on one task, i.e., one notebook), I am not able to write one DataFrame as a Delta table. It contains geospatial data and must have rows in the lakhs (hundreds of thousands).
09-26-2024 08:39 AM
It could be skew, your partitioning, anything.
Without looking at the script and knowing the schema, the number of rows, and the output of the Spark UI, it's hard to say what is wrong.
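If you want a quick first look before digging into the Spark UI, something along these lines can surface an obvious partitioning problem (the table name and repartition count are only examples; tune them to your data and cluster):

```python
# Quick diagnostics sketch; the Spark UI remains the authoritative source.
print("Partitions:", df.rdd.getNumPartitions())

# Row counts per partition expose skew: one huge partition among mostly
# empty ones means a single task does all the work. Note this scans the data.
print(df.rdd.glom().map(len).collect())

# If the data is skewed or sits in one partition, redistributing it before
# the write can help; 64 is just an example, match it to your core count.
df.repartition(64).write.format("delta").mode("overwrite") \
  .saveAsTable("my_schema.my_table")
```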
09-26-2024 08:43 AM
[post content not captured here]
09-26-2024 08:47 AM
🙂 count() is just the action.
What transformations are you applying to the DataFrame? How many columns does it have, and approximately how many rows are you anticipating?
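To illustrate the point about lazy evaluation (the path and transformations here are hypothetical):

```python
# Transformations are lazy: these lines build a plan but execute nothing.
df = (spark.read.format("delta").load("/tmp/geo_data")
           .filter("region = 'APAC'")
           .withColumnRenamed("lat", "latitude"))

# count() is the action: only here does Spark run the whole plan, so a
# "stuck" count() usually points at an expensive upstream transformation.
print(df.count())
```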
09-26-2024 08:52 AM
I want to write that data to a table, but it always gets stuck. It has 12 columns. Since the write task kept getting stuck, I wanted to see the count of the data.
09-26-2024 08:55 AM
One last time: please share the entire script for the DataFrame so I can see how I can help.