Data Engineering

Parquet partitionBy - date column to nested folders

1stcommander
New Contributor II

Hi,

When writing a DataFrame to Parquet using partitionBy(&lt;date column&gt;), the resulting folder structure looks like this:

root
|-- day1
|-- day2
|-- day3
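For reference, a minimal sketch of the write that produces this layout (the path is hypothetical; the actual folder names are of the form date_col=&lt;value&gt;):

df.write.partitionBy("date_col").parquet("/path/to/root")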

Is it possible to create a structure like the following without explicitly creating the partitioning columns?

root
|-- year1
|   |-- month1
|   |   |-- day1
|   |   |-- ...
|-- year2
|   |-- month1
|   |-- ...

I know that I could achieve it with something like this:

df.withColumn("year", year(col("date_col")))
  .withColumn("month", month(col("date_col")))
  .withColumn("day", dayofmonth(col("date_col")))
  .withColumn("hour", hour(col("date_col")))
  .write
  .partitionBy("year", "month", "day", "hour")
  .parquet("/path/to/root")

taken from https://stackoverflow.com/questions/52527888/spark-partition-data-writing-by-timestamp.

But when you do it like this, you also have to filter on the "virtual" columns when querying the files in Spark SQL afterwards in order to benefit from partition pruning. (In the example, you have to use "WHERE year = 2017 AND month = 2"; if you use "WHERE date_col >= to_date('2017-02-01') AND date_col <= to_date('2017-03-01')", partition pruning is not applied.)
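To illustrate, a minimal sketch of the two query styles, assuming the data was written with the year/month/day/hour layout above under a hypothetical path /path/to/root:

val events = spark.read.parquet("/path/to/root")

// Prunes: the filter references the physical partition columns.
events.where("year = 2017 AND month = 2")

// Scans all partitions: date_col is an ordinary data column here,
// so Spark cannot map the predicate onto the directory layout.
events.where("date_col >= to_date('2017-02-01') AND date_col < to_date('2017-03-01')")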

I'm wondering if there is some functionality that I currently just do not know about that can

a) automatically create the nested folder structure

b) also use this for partition pruning while querying

Thank you

2 REPLIES

1stcommander
New Contributor II

Unfortunately the formatting of the folder trees was broken on saving 😞

Here is the as-is structure: [image attachment]

Here is the desired structure: [image attachment]

Saphira
New Contributor II

Hey @1stcommander

You'll have to create those columns yourself. If it's something you will have to do often, you could always write a function (see the sketch below). In any case, imho it's not that much work.

I'm not sure what your problem is with the partition pruning. It's almost as if you're saying you want the exact thing you said you don't want.
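A minimal sketch of such a helper, assuming Scala; the function name, the dateCol parameter, and the output path are all hypothetical choices:

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.{col, year, month, dayofmonth}

// Adds year/month/day columns derived from dateCol and writes
// the DataFrame to Parquet partitioned by them.
def writePartitionedByDate(df: DataFrame, dateCol: String, path: String): Unit = {
  df.withColumn("year", year(col(dateCol)))
    .withColumn("month", month(col(dateCol)))
    .withColumn("day", dayofmonth(col(dateCol)))
    .write
    .partitionBy("year", "month", "day")
    .parquet(path)
}

// e.g. writePartitionedByDate(df, "date_col", "/path/to/root")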

Good luck
