Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Parsing 5 GB json file is running long on cluster

Jana
New Contributor III

I was creating a Delta table from a JSON input file in ADLS, but the job ran long while writing the Delta table. My cluster configuration is below. Is the issue related to the cluster config? Do I need to upgrade it?

The cluster was created for a non-prod environment, and we run complex batch ETL (joins, aggregations). Should I create a cluster with 400 GB memory and 50 cores? Please advise.

Input JSON file size: 5 GB

Driver: Standard_D3_v2 (14 GB memory, 4 cores)

Workers: min 2, max 8

Executor type: Standard_D3_v2 (14 GB memory, 4 cores)

Note: the cluster was all-purpose.
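Whether Spark can parallelize the read depends on whether the file is JSON Lines (one JSON object per line) or a single multi-line document. As a quick sanity check, you can inspect a local sample of the file with the Python standard library (the path and function name here are hypothetical, not from the thread):

```python
import json

def is_json_lines(path, sample_lines=5):
    """Return True if the first few non-empty lines each parse as
    standalone JSON, i.e. the file looks like JSON Lines."""
    with open(path) as f:
        checked = 0
        for line in f:
            line = line.strip()
            if not line:
                continue
            try:
                json.loads(line)
            except json.JSONDecodeError:
                # A line that is only a fragment (e.g. "[") means the
                # document spans multiple lines and cannot be split.
                return False
            checked += 1
            if checked >= sample_lines:
                break
    return checked > 0
```

If this returns False, the file is a single multi-line document and Spark has to load it as one whole entity.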

1 ACCEPTED SOLUTION


-werners-
Esteemed Contributor III

The Databricks docs state the following:

You can read JSON files in single-line or multi-line mode. In single-line mode, a file can be split into many parts and read in parallel. In multi-line mode, a file is loaded as a whole entity and cannot be split.

What this means is that you will not have parallelism while reading the JSON.

So you have a few options:

  1. Do not use multi-line mode. This is only possible if your JSON file contains one JSON object per line; try it and see if it works.
  2. Use a larger cluster. The driver reads the JSON file, so the driver needs enough memory; the number of cores matters less.
  3. If you can, split up the file.
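If the file is a single top-level JSON array rather than one object per line, option 1 can still be made to work by rewriting it to JSON Lines once, on a machine with enough memory to hold the whole document. A minimal stdlib sketch (paths and the function name are placeholders, not from the thread):

```python
import json

def array_to_json_lines(src_path, dst_path):
    # json.load pulls the whole document into memory, so this one-off
    # conversion needs roughly the file's size in RAM.
    with open(src_path) as src:
        records = json.load(src)  # expects a top-level JSON array
    with open(dst_path, "w") as dst:
        for rec in records:
            # One compact JSON object per line: the JSON Lines format
            # that single-line readers can split and read in parallel.
            dst.write(json.dumps(rec) + "\n")
    return len(records)
```

After the conversion, the file can be read without the multiLine option, so Spark can split the read across tasks.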


9 REPLIES

Anonymous
Not applicable

Hello, @Jana A! It's nice to meet you! My name is Piper, and I'm a moderator for Databricks. Welcome to the community. Thanks for your question. We'll give your peers a chance to respond and then we'll circle back if we need to.

Thanks in advance for your patience. 🙂

-werners-
Esteemed Contributor III

Have you checked this topic? There might be some ideas there.

Jana
New Contributor III

Note: the DataFrame was created with multiLine = true. The job was running long and slowing down cluster performance. Can you please help me with this issue?

Thanks

-werners-
Esteemed Contributor III

With multiLine = true, the JSON is read as a whole and processed as such.

I'd try with a beefier cluster.

Jana
New Contributor III

Yes, the issue was the multiLine = true property; Spark is reading the file as a whole. How do I resolve this?


AlexG
New Contributor III

Splitting the file was the easiest solution for me. I was trying to load a 3 GB JSON file into a Delta table on a cluster with 128 GB memory, and the resulting error message did not help identify the issue. I split the file into three 1 GB files, and it worked like a charm.
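If the file is already in JSON Lines form, splitting it is a simple line-level operation that never needs to parse the JSON. A stdlib sketch (the part-file naming scheme is made up for illustration):

```python
import os

def split_lines(src_path, dst_dir, lines_per_part):
    """Split a JSON Lines file into numbered part files,
    each with at most lines_per_part lines."""
    os.makedirs(dst_dir, exist_ok=True)
    part, out, written, paths = 0, None, 0, []
    with open(src_path) as src:
        for line in src:
            if out is None or written >= lines_per_part:
                # Start a new part file once the current one is full.
                if out:
                    out.close()
                path = os.path.join(dst_dir, f"part-{part:05d}.json")
                out = open(path, "w")
                paths.append(path)
                part += 1
                written = 0
            out.write(line)
            written += 1
    if out:
        out.close()
    return paths
```

Note that a file in multi-line (pretty-printed) form cannot be split this way, because record boundaries do not fall on line boundaries.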

Jana
New Contributor III

Increase driver memory or executor memory? I changed my cluster's executor memory from 14 GB to 28 GB, and with that change we were able to complete the job without issue.
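For reference, a node size with more memory can be requested through the cluster spec. This is an illustrative fragment using Databricks Clusters API field names; the values are examples only (Standard_DS4_v2 is an Azure VM size with 28 GB RAM and 8 cores, matching the memory bump described above):

```json
{
  "cluster_name": "etl-nonprod",
  "node_type_id": "Standard_DS4_v2",
  "driver_node_type_id": "Standard_DS4_v2",
  "autoscale": { "min_workers": 2, "max_workers": 8 }
}
```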

jose_gonzalez
Databricks Employee

Hi @Jana A,

Did @Werner Stinckens' reply help you resolve your issue? If so, could you please mark his response as the "best response"?
