Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Parallel processing of json files in databricks pyspark

AzureDatabricks
New Contributor III

How can we read files from Azure Blob Storage and process them in parallel in Databricks using PySpark?

Right now we are reading all 10 files into a dataframe one at a time and flattening them.

Thanks & Regards,

Sujata

5 REPLIES

-werners-
Esteemed Contributor III

If you use the Spark JSON reader, the read happens in parallel automatically.

Depending on the cluster size, you will be able to read more files in parallel.

Mind that JSON files are usually small. Spark does not like lots of small files, so performance may suffer.

Depending on the use case it can be a good idea to do an initial conversion to Parquet/Delta Lake (which will take some time because of the many small files), and then keep adding new files to that table.

Your data jobs can then read from the Parquet/Delta Lake table, which will be a lot faster.
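A rough sketch of that pattern (the paths below are placeholders, the blob container is assumed to already be mounted, and `spark` is the session Databricks provides in a notebook):

```python
# Minimal sketch, assuming /mnt/raw/events holds the small JSON files
# and /mnt/curated/events is where the Delta table should live.

# Read the whole directory in one call; Spark splits the files across tasks.
raw_df = spark.read.json("/mnt/raw/events/*.json")

# One-time conversion of the existing small files into a Delta table.
raw_df.write.format("delta").mode("overwrite").save("/mnt/curated/events")

# Later runs only append the newly arrived files to the same table.
new_df = spark.read.json("/mnt/raw/events/new/*.json")
new_df.write.format("delta").mode("append").save("/mnt/curated/events")
```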

AzureDatabricks
New Contributor III

Can you provide us a sample to read JSON files in parallel from blob storage? We are reading all files one by one from the directory, and it is taking a long time to load them into a dataframe.

Thank you

-werners-
Esteemed Contributor III

spark.read.json("/mnt/dbfs/<ENTER PATH OF JSON DIR HERE>/*.json")

You first have to mount your blob storage to Databricks; I assume that is already done.

https://spark.apache.org/docs/latest/sql-data-sources-json.html
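In case the mount is not in place yet, here is a hedged sketch of mounting the container and then reading the whole directory in a single call (the container, storage account, secret scope, and path names are placeholders to replace with your own):

```python
# Mount the blob container once per workspace; skip this if the mount already exists.
dbutils.fs.mount(
    source="wasbs://<container>@<storage-account>.blob.core.windows.net",
    mount_point="/mnt/dbfs",
    extra_configs={
        "fs.azure.account.key.<storage-account>.blob.core.windows.net":
            dbutils.secrets.get(scope="<scope>", key="<key-name>")
    }
)

# One read over the whole directory -- Spark parallelises across the files,
# instead of looping over them one by one in the driver.
df = spark.read.json("/mnt/dbfs/<json-dir>/*.json")
df.printSchema()
```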

SailajaB
Valued Contributor III

Thank you. We are using a mount already.

Hi @Sailaja B,

Check the number of stages and tasks when you are reading the JSON files. How many do you see? Are your JSON files nested? How long does it take to read a single JSON file?
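One way to answer those questions is to inspect the inferred schema and, if the files are nested, flatten them explicitly; the `payload`/`items` column names below are made up for illustration, and the task counts come from the Spark UI rather than code:

```python
from pyspark.sql.functions import col, explode

# Single read over the directory; the Spark UI (Jobs/Stages tabs) then shows
# how many tasks the read produced, i.e. how much parallelism you actually got.
df = spark.read.json("/mnt/dbfs/<json-dir>/*.json")

# Nested structs/arrays show up indented in the schema printout.
df.printSchema()

# Hypothetical nested layout: a struct column "payload" with an array field "items".
flat_df = (
    df.select(
        col("payload.id").alias("id"),
        explode(col("payload.items")).alias("item"),
    )
    .select("id", "item.*")  # expand the struct fields into top-level columns
)
flat_df.show(5)
```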