02-13-2022 07:59 AM
Imagine the following setup:
I have log files stored as JSON files partitioned by year, month, day and hour in physical folders:
"""
/logs
|-- year=2020
|-- year=2021
`-- year=2022
|-- month=01
`-- month=02
|-- day=01
|-- day=...
`-- day=13
|-- hour=0000
|-- hour=...
`-- hour=0900
|-- log000001.json
|-- <many files>
`-- log000133.json
""""
Spark supports partition discovery for folder structures like this ("All built-in file sources (including Text/CSV/JSON/ORC/Parquet) are able to discover and infer partitioning information automatically" https://spark.apache.org/docs/latest/sql-data-sources-parquet.html#partition-discovery).
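Conceptually, partition discovery just means deriving column values from the `key=value` folder names in each file's path. A minimal, illustrative sketch of that idea (this is not Spark's actual implementation; the function name is my own):

```python
# Illustrative only -- NOT Spark's actual implementation.
# Derive partition column values from 'key=value' folder names in a path.
def parse_partitions(path):
    partitions = {}
    for segment in path.split('/'):
        if '=' in segment:
            key, value = segment.split('=', 1)
            partitions[key] = value
    return partitions

print(parse_partitions('/logs/year=2022/month=02/day=13/hour=0900/log000001.json'))
# {'year': '2022', 'month': '02', 'day': '13', 'hour': '0900'}
```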
However, in contrast to Parquet files, I found that Spark does not use this metadata for partition pruning / partition elimination when reading JSON files.
In my use case I am only interested in logs from a specific time window (see the filter below):
(spark
.read
.format('json')
.load('/logs')
.filter('year=2022 AND month=02 AND day=13 AND hour=0900')
)
I'd expect that Spark would be able to apply the filters on the partition columns "early" and only scan folders matching the filters (e.g. Spark would not need to scan folders and read files under '/logs/year=2020').
However, in practice the execution of my query takes a lot of time. It looks to me as if Spark first scans the whole filesystem starting at '/logs', reads all files, and only then applies the filters (on the already-read data). Due to the nested folder structure and the large number of folders/files, this is very expensive.
Apparently Spark does not push down the filter (i.e., does not apply partition pruning / partition elimination).
For me it is weird that the behavior for processing JSON files differs from Parquet.
Is this as-designed or a bug?
For now, I ended up implementing partition pruning myself in a pre-processing step, using dbutils.fs.ls to scan the "right" folders iteratively and assemble an explicit file list that I then pass to the Spark read command.
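A minimal sketch of that pre-processing step, assuming the folder layout above (the helper name is hypothetical, and `dbutils.fs.ls` is Databricks-specific, so plain string formatting stands in here for the iterative folder scan):

```python
# Hypothetical helper (name is my own, not from the original post):
# build the partition folder paths for a given time window, mirroring
# the 'key=value' directory layout shown above.
def partition_paths(base, years, months, days, hours):
    return [
        f"{base}/year={y}/month={m:02d}/day={d:02d}/hour={h:04d}"
        for y in years for m in months for d in days for h in hours
    ]

paths = partition_paths('/logs', [2022], [2], [13], [900])
print(paths)
# ['/logs/year=2022/month=02/day=13/hour=0900']
```

The resulting explicit paths can then be passed to `spark.read.json(paths)`, optionally together with `option('basePath', '/logs')` so that Spark still exposes the partition columns in the schema.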
03-23-2022 10:14 AM
03-23-2022 10:18 PM
Hi @Martin B., Thank you for sharing this. This will be taken care of for sure. Let me get back to you soon.
04-15-2022 03:42 AM
Hi @Kaniz Fatma,
Any updates on this one?
04-17-2022 09:47 PM
Hi @Martin B., Did you try again?
Please try to share your feedback here.
04-18-2022 05:49 AM
Hi @Kaniz Fatma,
Yes, I did. This time, the error from before is no longer displayed.
But following your link https://databricks.com/feedback I end up on the landing page in my community workspace; I had expected a feedback portal.
In my workspace, under "help" there is another "feedback" button:
But this is just a mailto- link for the address feedback@databricks.com.
Is this the intended way to make feature requests?
04-18-2022 08:25 AM
Hi @Martin B., The Ideas Portal lets you influence the Databricks product roadmap by providing feedback directly to the product team. Use the Ideas Portal to:
04-18-2022 01:02 PM
Hi @Kaniz Fatma,
When I try to access https://ideas.databricks.com/, the situation is just as I described a month ago: after the login, an error is displayed:
Last month, my understanding was that you were going to check why that is the case.
Are you positive that Databricks community edition (=free) users are allowed to access the ideas portal?
04-18-2022 11:24 PM
Hi @Martin B., You need a subscription to submit an idea.
Databricks community edition (=free) users are not allowed to access the ideas portal.
04-21-2022 10:47 AM
Hi @Kaniz Fatma,
That's unfortunate, but thanks for the answer.
04-21-2022 11:30 PM
Hi @Martin B., Databricks Community Edition users can get more capacity and gain production-grade functionality by upgrading to the complete Databricks platform. To upgrade, sign up for a 14-day free trial or contact us.
The complete Databricks platform offers production-grade functionality, such as an unlimited number of clusters that quickly scale up or down, a job launcher, collaboration, advanced security controls, and expert support. It helps users process data at scale or build Apache Spark™ applications in a team setting.
03-04-2022 07:51 AM
@Kaniz Fatma could you maybe involve a Databricks expert?
03-04-2022 07:56 AM
Hi @Martin B., Thank you for looping me into this conversation - I like the ideas I am seeing. I'll get back to you asap with an apt response after connecting with some of Databricks' experts on Spark.