
Confused by Databricks Tips and Tricks - Optimizations webinar regarding partitioning

eimis_pacheco
Contributor

Hello Community,

Today I attended the Tips and Tricks - Optimizations webinar and I came away confused. They said:

"Don't partition tables <1TB in size and plan carefully when partitioning

• Partitions should be >=1GB"

 

My confusion is this: does the recommendation apply to how the data is stored on disk when it is written at the end of the Spark job? Or does it also apply while a job is running, when we repartition the data across more executors during transformations so it runs faster? In other words, if I have a table close to 1 TB and I want to speed the job up by splitting it across, say, 10 executors, should I not do that? To make the question concrete, here is a minimal PySpark sketch of the two things I might mean (the table and column names are made up for illustration):
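```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical table and column names, just for illustration.
df = spark.read.table("events")

# (1) Partitioning ON DISK: partitionBy() controls the folder layout
#     written out at the end of the job. This is what the "<1 TB"
#     recommendation is about.
df.write.partitionBy("event_date").format("delta").save("/tmp/events_by_date")

# (2) Partitioning IN MEMORY: repartition() controls how many tasks
#     process the data while the job runs, independent of the disk layout.
df.repartition(10).write.format("delta").save("/tmp/events_10_parts")
```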

 

Thank you in advance for the clarification.

 

Thanks

#dataengineering

3 REPLIES

Hi @Retired_mod,

Sorry for asking again, but does the recommendation "don't partition tables <1TB" apply only when the data is written to disk, or also while the job is actually running in memory? (Sorry if the answer is obvious.)

Because in clusters we have GBs of memory to allocate to our datasets, not TBs.


Thank you once more.

 

Hi @Retired_mod, thanks for the tips.
I have a table where roughly 1.7 TB of data is ingested daily.
I have partitioned it on date; in S3 I can see that the files for one day in Parquet format total 1.7 TB.
Should I change how the table is partitioned? Is the partition size too big?

 

-werners-
Esteemed Contributor III

That refers to partitions on disk.


Defining the correct number of partitions is not that easy. One would think that more partitions is better because you can process more data in parallel.
That is true if you only have to do local (narrow) transformations, where no shuffle is needed.
But that is almost never the case.
When a shuffle is applied, having more partitions means more overhead. That is why the recommendation is not to partition tables below 1 TB (it is only a recommendation though; you might have cases where partitioning makes sense with smaller data). As a rough PySpark sketch of the difference (the table and column names are made up):
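```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.read.table("events")  # hypothetical table

# Narrow/local transformation: each partition is processed independently,
# so more partitions simply means more parallelism.
filtered = df.filter(df.amount > 100)

# Wide transformation: a shuffle moves rows between partitions; with many
# small partitions the shuffle and small-file overhead starts to dominate.
aggregated = df.groupBy("customer_id").sum("amount")

# How many in-memory partitions a DataFrame currently has:
print(df.rdd.getNumPartitions())
```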

More recently there is liquid clustering, which makes things easier (no explicit partitioning necessary), but it only works with Delta Lake and recent Databricks runtime versions. For example (a hypothetical sketch; the table and columns are illustrative):
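```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Liquid clustering replaces hive-style partitioning on Delta Lake
# (recent Databricks runtimes): CLUSTER BY picks the clustering columns.
spark.sql("""
    CREATE TABLE events_clustered (
        event_id   BIGINT,
        event_date DATE,
        payload    STRING
    )
    CLUSTER BY (event_date)
""")

# OPTIMIZE incrementally (re)clusters the data as the table grows.
spark.sql("OPTIMIZE events_clustered")
```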
