
[SOLVED] maxPartitionBytes ignored?

pantelis_mare
Contributor III

Hello all!

I'm running a simple read noop query where I read a specific partition of a delta table that looks like this:

[image: file sizes of the partition]

With the default configuration, I read the data in 12 partitions, which makes sense as the files larger than 128 MB are split.

When I set "spark.sql.files.maxPartitionBytes" (or "spark.files.maxPartitionBytes") to 64 MB, I do read with 20 partitions as expected, though the extra partitions are empty (or only a few kilobytes).

I have tested with "spark.sql.adaptive.enabled" set to true and false without any change in the behaviour.

Any thoughts on why this is happening and how to force Spark to read in smaller partitions?
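Roughly what the query looks like (the path, partition column, and value below are placeholders, not the real table):

# 64 MB target for read partitions
spark.conf.set("spark.sql.files.maxPartitionBytes", str(64 * 1024 * 1024))

# placeholder path and partition filter
df = (
    spark.read.format("delta")
    .load("/mnt/data/my_table")
    .where("partition_col = 'some_value'")
)

# noop sink: forces the full read without writing anything out
df.write.format("noop").mode("overwrite").save()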

Thank you in advance for your help!


7 REPLIES

-werners-
Esteemed Contributor III

How did you determine the number of partitions read and the size of these partitions?

The reason I ask is that if you first read the data and then immediately write it to another delta table, auto optimize on Delta Lake also comes into play, which tries to write 128 MB files.

(spark.databricks.delta.autoCompact.maxFileSize)
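For example, something along these lines shows both the read partition count and the configs in play (the path below is just a placeholder):

df = spark.read.format("delta").load("/mnt/data/my_table")  # placeholder path
print(df.rdd.getNumPartitions())  # number of partitions created for the read

# current values of the relevant settings
print(spark.conf.get("spark.sql.files.maxPartitionBytes"))
print(spark.conf.get("spark.databricks.delta.autoCompact.maxFileSize", "not set"))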

Hello Werners,


I'm looking at the input size of each partition on the stage page of the Spark UI. As I said, I did a noop operation, so there is no actual writing. My goal is to have control over the partition size at read time, which is what the conf I'm playing with is supposed to do.

ashish1
New Contributor III

AQE doesn't affect the read-time partitioning, only the shuffle-time partitioning. It would be better to run OPTIMIZE on the delta table, which will compact the files to approximately 1 GB each; that would give better read-time performance.
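For example (placeholder table name; it can also be limited to the partition being read):

spark.sql("OPTIMIZE my_db.my_table")  # placeholder table name
# or, restricted to one partition:
spark.sql("OPTIMIZE my_db.my_table WHERE partition_col = 'some_value'")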

Hello Ashish,


I was just checking because AQE might change the expected behaviour. As stated before, my issue here is to control the partition size at read time, not to optimise my read time.

Why does it correctly break the 180 MB file in two when 128 MB is the limit, but not the 108 MB files when the limit is 64 MB?

-werners-
Esteemed Contributor III

AQE will only kick in when you are actually doing transformations (shuffle/broadcast), and it will try to optimize the partition size:

https://docs.microsoft.com/en-us/azure/databricks/spark/latest/spark-sql/aqe#dynamically-coalesce-pa...

AFAIK the read partition size is indeed defined by maxPartitionBytes.

Now, I do recall a topic on stackoverflow where someone asks a similar question.

And there they mention that the compression codec also matters.

Chances are you use snappy compression. If that is the case, the partition size might be defined by the row group size of the parquet files.

https://stackoverflow.com/questions/32382352/is-snappy-splittable-or-not-splittable

http://boristyukin.com/is-snappy-compressed-parquet-file-splittable/

David Vrba also mentions the compression used:

https://stackoverflow.com/questions/62648621/spark-sql-files-maxpartitionbytes-not-limiting-max-size...
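If you want to verify that, the row group layout of one of the underlying parquet files can be inspected, for example with pyarrow (the file path below is a placeholder):

import pyarrow.parquet as pq

# placeholder path to one of the table's data files
meta = pq.ParquetFile("/dbfs/mnt/data/my_table/part-00000.snappy.parquet").metadata

print("row groups:", meta.num_row_groups)
for i in range(meta.num_row_groups):
    rg = meta.row_group(i)
    # total_byte_size is the uncompressed size of the row group
    print(f"row group {i}: {rg.total_byte_size / (1024 * 1024):.1f} MB, {rg.num_rows} rows")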

Thanks

You have a point there regarding parquet. I just checked: it does read a separate row group in each task. The thing is that the row groups are unbalanced, so the second task gets just some KB of data.
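If I understand it correctly, the row group size is fixed at write time via the parquet.block.size setting, so rewriting with something like this (plain parquet write, placeholder path, untested on my side) should give more even row groups:

# df: the dataframe to rewrite; parquet.block.size is the target row group size
(
    df.write
    .option("parquet.block.size", str(64 * 1024 * 1024))  # ~64 MB row groups
    .mode("overwrite")
    .parquet("/mnt/data/my_table_rewritten")  # placeholder path
)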

Case closed, kudos @Werner Stinckens

Hi @Pantelis Maroudis,

Thanks for your reply. If you think @Werner Stinckens' reply helped you solve this issue, then please mark it as the best answer to move it to the top of the thread.
