Small JSON files issue: taking 2 hours to read 3,000 files
11-12-2024 11:39 PM
Hello, I am trying to read 3,000 JSON files, each of which contains only one record. It is taking 2 hours to read all the files. How can I perform this operation faster? Please suggest.
11-13-2024 12:05 AM
This is the code:
df1 = spark.read.format("json").options(inferSchema="true", multiLine="true").load(file1)
11-13-2024 03:30 AM
Hi @Subhasis
You can start by specifying the schema upfront instead of using the inferSchema option. But to be honest, this is the classic "small file problem". The best approach you can take is to compact those small files into larger ones.
Or you can read them all in a single load and save them as Parquet files with a proper partition size, as in the sketch below.
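Roughly what that can look like in a Databricks notebook (a minimal sketch, assuming `spark` is already defined by the notebook, and that the schema fields and paths are placeholders to replace with your own):

from pyspark.sql.types import StructType, StructField, StringType, LongType

# Hypothetical schema -- replace the fields with the ones your JSON records actually contain.
schema = StructType([
    StructField("id", LongType(), True),
    StructField("name", StringType(), True),
    StructField("payload", StringType(), True),
])

# Read all 3,000 files in a single load() call (a directory or glob path is assumed here).
# Supplying the schema lets Spark skip the inferSchema pass over every file.
df = (
    spark.read
         .format("json")
         .schema(schema)
         .option("multiLine", "true")
         .load("/mnt/raw/small_json_files/")   # hypothetical input path
)

# Compact the tiny inputs into a handful of larger Parquet files.
(
    df.coalesce(8)                             # pick a partition count that yields reasonably sized files (~128 MB)
      .write
      .mode("overwrite")
      .parquet("/mnt/curated/compacted/")      # hypothetical output path
)

Downstream jobs can then read the compacted Parquet directory instead of the 3,000 individual JSON files.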
Take a look at the thread below for inspiration:
Big data [Spark] and its small files problem – Garren's [Big] Data Blog

