Hi,
when writing a DataFrame to parquet using partitionBy(<date column>), the resulting folder structure looks like this:
root
|----------------- day1
|----------------- day2
|----------------- day3
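(For reference, the write I mean is just something like the following sketch; df, date_col and the output path are placeholders:)

    // one directory per distinct value of date_col, e.g. root/date_col=2018-10-01/
    df.write
      .partitionBy("date_col")
      .parquet("/path/to/root")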
Is it possible to create a structure like the following without explicitly creating the partitioning columns:
root
|----- year1
|        |----- month1
|        |        |----- day1
|        |        |----- ....
|----- year2
|        |----- month1
I know that I could achieve it with something like

    import org.apache.spark.sql.functions.{col, year, month, dayofmonth, hour}

    df.withColumn("year", year(col("date_col")))
      .withColumn("month", month(col("date_col")))
      .withColumn("day", dayofmonth(col("date_col")))
      .withColumn("hour", hour(col("date_col")))
      .write
      .partitionBy("year", "month", "day", "hour")
      .parquet("/path/to/root")   // placeholder output path
(taken from https://stackoverflow.com/questions/52527888/spark-partition-data-writing-by-timestamp),
but when you do it like this, you also have to filter on the "virtual" columns when querying the files in Spark SQL afterwards in order to benefit from partition pruning. (In the example, you have to use "WHERE year = 2017 AND month = 2"; if you use "WHERE date_col >= to_date('2017-02-01') AND date_col <= to_date('2017-03-01')" instead, partition pruning is not applied.)
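Just to make that concrete, a rough sketch of the difference (the table name "events" and the path are placeholders):

    val events = spark.read.parquet("/path/to/root")
    events.createOrReplaceTempView("events")

    // prunes partitions: the filter is on the partition columns from the directory layout
    spark.sql("SELECT * FROM events WHERE year = 2017 AND month = 2")

    // reads all partitions: date_col only exists inside the files, not in the directory names
    spark.sql("SELECT * FROM events WHERE date_col >= to_date('2017-02-01') AND date_col <= to_date('2017-03-01')")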
I'm wondering if there is some functionality that I currently just do not know about that can
a) automatically create the nested folder structure, and
b) still be used for partition pruning when querying.
Thank you