Data Engineering
Parquet partitionBy - date column to nested folders

1stcommander
New Contributor II

Hi,

When writing a DataFrame to Parquet using partitionBy(<date column>), the resulting folder structure looks like this:

root

|----------------- day1

|----------------- day2

|----------------- day3

Is it possible to create a structure like the following without explicitly creating the partitioning columns:

root

|----- year1

       |----- month1

              |----- day1

              |----- ....

|----- year2

       |----- month

I know that I could achieve it with something like

df.withColumn("year", year(col("date_col")))
  .withColumn("month", month(col("date_col")))
  .withColumn("day", dayofmonth(col("date_col")))
  .withColumn("hour", hour(col("date_col")))
  .write
  .partitionBy("year", "month", "day", "hour")

taken from https://stackoverflow.com/questions/52527888/spark-partition-data-writing-by-timestamp,

but when you do it like this, you also have to filter on the "virtual" columns when querying the files in Spark SQL afterwards in order to benefit from partition pruning. (In the example, you have to use "WHERE year = 2017 AND month = 2"; if you use "WHERE date_col >= to_date('2017-02-01') AND date_col <= to_date('2017-03-01')" instead, partition pruning is not applied.)

I'm wondering if there is some functionality that I currently just don't know about that can

a) automatically create the nested folder structure

b) also use this for partition pruning while querying

Thank you

2 REPLIES

1stcommander
New Contributor II

Unfortunately the format has been broken on saving 😞

Here is the structure as-is example:

[attachment 0693f000007OrnrAAC]

Here is the desired structure example:

[attachment 0693f000007OrnqAAC]

Saphira
New Contributor II

Hey @1stcommander

You'll have to create those columns yourself. If it's something you'll have to do often, you could always write a function. In any case, imho it's not that much work.

I'm not sure what your problem is with the partition pruning. It's almost as if you're saying you want the exact thing you said you don't want.

Good luck
