11-21-2021 11:34 PM
How can we read files from Azure Blob Storage and process them in parallel in Databricks using PySpark?
As of now we are reading all 10 files one at a time into a DataFrame and flattening it.
Thanks & Regards,
Sujata
11-21-2021 11:49 PM
If you use the Spark JSON reader, it will happen in parallel automatically.
Depending on the cluster size, you will be able to read more files in parallel.
Mind that JSON files are usually small. Spark does not like lots of small files, so performance may suffer.
Depending on the use case, it can be a good idea to do an initial conversion to Parquet/Delta Lake (which will take some time because of the many small files), and then keep adding new files to that table.
Your data jobs can then read the Parquet/Delta Lake table, which will be a lot faster.
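For example, a minimal sketch of that approach (the mount points and paths below are hypothetical; adjust them to your own layout):

# Read every JSON file in the directory in one call; Spark spreads the files across the executors.
raw_df = spark.read.json("/mnt/raw/json_landing/*.json")

# One-time conversion to Delta Lake so later jobs hit a compacted, columnar table
# instead of many small JSON files.
raw_df.write.format("delta").mode("overwrite").save("/mnt/curated/events_delta")

# New files can be appended to the same table as they arrive.
new_df = spark.read.json("/mnt/raw/json_landing/new_batch/*.json")
new_df.write.format("delta").mode("append").save("/mnt/curated/events_delta")

# Downstream jobs read the Delta table, which is much faster than re-reading the raw JSON.
events_df = spark.read.format("delta").load("/mnt/curated/events_delta")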
11-22-2021 01:51 AM
Can you provide a sample to read JSON files in parallel from blob storage? We are reading all files one by one from the directory, and it is taking time to load them into a DataFrame.
Thank you
11-22-2021 01:54 AM
spark.read.json("/mnt/dbfs/<ENTER PATH OF JSON DIR HERE>/*.json
you first have to mount your blob storage to databricks, I assume that is already done.
https://spark.apache.org/docs/latest/sql-data-sources-json.html
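As a slightly fuller sketch (the mount path and schema are hypothetical, so substitute your own; supplying an explicit schema also skips the extra pass Spark makes over the files to infer one):

from pyspark.sql.types import StructType, StructField, StringType, LongType

# Hypothetical schema matching the JSON files; adjust the fields to your data.
schema = StructType([
    StructField("id", LongType(), True),
    StructField("name", StringType(), True),
    StructField("payload", StringType(), True),
])

# A single call reads every matching file; Spark parallelises the work across the cluster,
# so there is no need to loop over the files one by one.
df = spark.read.schema(schema).json("/mnt/<your-mount>/<json-dir>/*.json")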
11-22-2021 02:59 AM
Thank you. We are using a mount already.
11-22-2021 11:57 AM
Hi @Sailaja B ,
Check the number of stages and tasks when you are reading the JSON files. How many do you see? Are your JSON files nested? How long does it take to read a single JSON file?
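A rough sketch of those checks (the column names below are hypothetical):

from pyspark.sql.functions import col, explode

df = spark.read.json("/mnt/<your-mount>/<json-dir>/*.json")

# How many partitions (and therefore parallel tasks) did the read produce?
print(df.rdd.getNumPartitions())

# Inspect the schema to see whether the JSON is nested.
df.printSchema()

# If it is nested, flatten it by selecting struct fields and exploding arrays,
# e.g. for a hypothetical 'details' struct and 'items' array column:
flat_df = df.select(
    col("id"),
    col("details.city").alias("city"),
    explode(col("items")).alias("item"),
)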