Imagine the following setup:
I have log files stored as JSON files partitioned by year, month, day and hour in physical folders:
"""
/logs
|-- year=2020
|-- year=2021
`-- year=2022
|-- month=01
`-- month=02
|-- day=01
|-- day=...
`-- day=13
|-- hour=0000
|-- hour=...
`-- hour=0900
|-- log000001.json
|-- <many files>
`-- log000133.json
""""
Spark supports partition discovery for folder structures like this ("All built-in file sources (including Text/CSV/JSON/ORC/Parquet) are able to discover and infer partitioning information automatically" https://spark.apache.org/docs/latest/sql-data-sources-parquet.html#partition-discovery).
However, in contrast to Parquet files, I found that Spark does not use this metadata for partition pruning / partition elimination when reading JSON files.
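As a sanity check: partition discovery itself does work for the JSON source, since the partition columns show up in the inferred schema. A minimal check, assuming the layout above (the inferred column types depend on spark.sql.sources.partitionColumnTypeInference.enabled):
"""
# Minimal check that partition discovery picks up the folder names.
# With type inference enabled (the default), values like hour=0900 are
# parsed as integers (900).
df = spark.read.format('json').load('/logs')
df.printSchema()
# Expect the discovered partition columns (year, month, day, hour) to
# appear alongside the fields parsed from the JSON documents.
"""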
In my use case I am only interested in logs from a specific time window (see the filter below):
"""
(spark
    .read
    .format('json')
    .load('/logs')
    .filter('year=2022 AND month=02 AND day=13 AND hour=0900')
)
"""
I'd expect Spark to be able to apply the filters on the partition columns "early" and only scan the folders matching the filters (e.g. Spark would not need to scan folders or read files under '/logs/year=2020').
However, in practice the execution of my query takes a long time. It looks to me as if Spark first scans the whole filesystem starting at '/logs', reads all the files, and only then applies the filters (to the already-loaded data). Due to the nested folder structure and the large number of folders/files, this is very expensive.
Apparently Spark does not push the filters down (i.e. it does not apply partition pruning / partition elimination).
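One way to verify this is to inspect the physical plan: for file-based sources, the FileScan node reports the predicates it uses for pruning under PartitionFilters. A rough diagnostic sketch (the exact plan output differs between Spark versions):
"""
# Print the extended plan; if pruning happens, the partition predicates
# should appear as PartitionFilters on the FileScan node rather than only
# in a Filter applied after the scan.
(spark
    .read
    .format('json')
    .load('/logs')
    .filter('year=2022 AND month=02 AND day=13 AND hour=0900')
    .explain(True)
)
"""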
It seems odd to me that the behavior for processing JSON files differs from Parquet.
Is this as-designed or a bug?
For now, I have implemented partition pruning myself in a pre-processing step: I use dbutils.fs.ls to scan only the "right" folders iteratively, assemble an explicit file list, and pass that list to the Spark read command.
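In rough terms the pre-processing step looks like the sketch below (Databricks-specific dbutils; the helper name, the hard-coded partition values and the basePath option are illustrative, not my exact code):
"""
# Sketch of the manual pruning workaround: list only the folders that match
# the wanted partition values instead of letting Spark list all of /logs.
def list_matching_files(root, year, month, day, hour):
    # A real implementation would loop over ranges of days/hours; a single
    # partition folder is enough to show the idea.
    folder = f"{root}/year={year}/month={month:02d}/day={day:02d}/hour={hour:04d}"
    return [f.path for f in dbutils.fs.ls(folder) if f.path.endswith('.json')]

files = list_matching_files('/logs', 2022, 2, 13, 900)

# Pass the explicit file list to the reader; with basePath set, Spark can
# still derive the partition columns from the file paths.
df = (spark
      .read
      .option('basePath', '/logs')
      .json(files))
"""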