Parsing 5 GB json file is running long on cluster

Jana
New Contributor III

I was creating a Delta table from a JSON input file in ADLS, but the job ran for a long time while creating the Delta table from the JSON. Below is my cluster configuration. Is the issue related to the cluster config? Do I need to upgrade the cluster?

The cluster was created for a non-prod environment and we run complex batch ETL (joins, aggregations). Should I create a cluster with 400 GB memory and 50 cores instead? Please advise.

Input JSON file size: 5 GB

Driver node: Standard_D3_v2 (14 GB memory, 4 cores)

Worker nodes: min 2, max 8

Executor type: Standard_D3_v2 (14 GB memory, 4 cores)

Note: the cluster is an all-purpose cluster.

1 ACCEPTED SOLUTION


-werners-
Esteemed Contributor III

So the Databricks docs state the following:

You can read JSON files in single-line or multi-line mode. In single-line mode, a file can be split into many parts and read in parallel. In multi-line mode, a file is loaded as a whole entity and cannot be split.

What this means is that in multi-line mode you will not have any parallelism while reading the JSON.

So you have a few options:

  1. Do not use multiLine. This is only possible if your JSON file contains one JSON object per line; you can try it and see if it works.
  2. Use a larger cluster. The driver reads the JSON file, so the driver needs enough memory; the number of cores is less important.
  3. If you can: split up the file.
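As a rough sketch of options 1 and 3 combined: if the 5 GB file is a single top-level JSON array, it can be converted once, offline, into JSON Lines (one object per line), which Spark can then split across tasks with `multiLine` left at its default of `false`. The function name and paths below are illustrative, not from the thread.

```python
import json

def json_array_to_jsonl(src_path: str, dst_path: str) -> int:
    """Convert a file containing one top-level JSON array into JSON Lines.

    Note: json.load parses the whole array in memory, so treat this as a
    one-off preprocessing step, not something to run inside every job.
    Returns the number of records written.
    """
    with open(src_path, "r", encoding="utf-8") as src:
        records = json.load(src)  # expects a top-level JSON array

    with open(dst_path, "w", encoding="utf-8") as dst:
        for rec in records:
            # One compact JSON object per line -> splittable by Spark
            dst.write(json.dumps(rec) + "\n")

    return len(records)
```

After this step, a plain `spark.read.json(dst_path)` (without `.option("multiLine", "true")`) should be able to split the file and read it in parallel.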


8 REPLIES

Anonymous
Not applicable

Hello, @Jana A! It's nice to meet you! My name is Piper, and I'm a moderator for Databricks. Welcome to the community, and thanks for your question. We'll give your peers a chance to respond and then circle back if we need to.

Thanks in advance for your patience. 🙂

-werners-
Esteemed Contributor III

Have you checked this topic? There might be some ideas there.

Jana
New Contributor III

Note: the DataFrame was created with multiLine = true. The job was running long and slowing down the cluster's performance. Can you please help me with this issue?

Thanks

-werners-
Esteemed Contributor III

With multiLine = true, the JSON is read as a whole and processed as such.

I'd try a beefier cluster.
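To illustrate why multi-line mode forces a whole-file read: in a pretty-printed (multi-line) JSON document, individual lines are not valid JSON on their own, so a reader cannot process the file line by line; in line-delimited JSON it can. A minimal pure-Python illustration (not Spark itself, just the parsing principle):

```python
import json

# A pretty-printed document spanning several lines
multi_line = '{\n  "id": 1,\n  "name": "a"\n}'
# Line-delimited JSON: each line is a complete object
single_line = '{"id": 1, "name": "a"}\n{"id": 2, "name": "b"}'

def parseable_per_line(text: str) -> bool:
    """True if every line of `text` is independently valid JSON."""
    try:
        for line in text.splitlines():
            json.loads(line)
        return True
    except json.JSONDecodeError:
        return False

print(parseable_per_line(multi_line))   # False
print(parseable_per_line(single_line))  # True
```

Because each line of line-delimited JSON stands alone, Spark can hand different byte ranges of the file to different tasks; a multi-line document offers no such safe split points.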

Jana
New Contributor III

Yes, the issue was with multiline = true property. Spark is reading as whole. How to resolve the issue? ​


Jana
New Contributor III

Increase driver memory or executor memory? I changed my cluster's executor config from 14 GB to 28 GB. With that change, we were able to complete the job without issue.
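For reference, a hedged sketch of how a larger driver could be requested via a Databricks Clusters API payload, since the docs quoted above put the memory pressure on the driver in multi-line mode. The node type names and sizes here are illustrative, not the poster's actual settings:

```json
{
  "cluster_name": "json-ingest-nonprod",
  "node_type_id": "Standard_D3_v2",
  "driver_node_type_id": "Standard_D13_v2",
  "autoscale": { "min_workers": 2, "max_workers": 8 }
}
```

The point of `driver_node_type_id` is that the driver can be sized independently of the workers, so you can give only the driver more memory instead of upgrading every node.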

Hi @Jana A,

Did @Werner Stinckens' reply help you resolve your issue? If yes, could you please mark his response as "Best Response"?
