02-15-2022 09:26 AM
I was creating a Delta table from a JSON input file in ADLS, but the job ran for a long time while creating the Delta table from the JSON. Below is my cluster configuration. Is the issue related to the cluster config? Do I need to upgrade it?
The cluster was created for a non-prod environment, and we run complex batch ETL (joins, aggregations). Should I instead create a cluster with 400 GB of memory and 50 cores? Please advise.
Input JSON file size: 5 GB
Node type: Standard_D3_v2 (14 GB memory, 4 cores)
Worker nodes: min 2, max 8
Executor type: Standard_D3_v2 (14 GB memory, 4 cores)
Note: the cluster was All-Purpose.
Labels: Cluster, Delta table, File, JSON, Performance Issues
Accepted Solutions
03-03-2022 10:28 AM
So the Databricks docs state the following:

> You can read JSON files in single-line or multi-line mode. In single-line mode, a file can be split into many parts and read in parallel. In multi-line mode, a file is loaded as a whole entity and cannot be split.

What this means is that you will not have parallelism while reading the JSON.
So you have a few options:
- Do not use multiLine. This is only possible if your JSON file contains one JSON object per line; you can try it and see if it works.
- Use a larger cluster. The driver will read the JSON file, so the driver needs enough memory; the number of cores is less important.
- If you can: split up the file.
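The first option can be sketched in plain Python for the common case where the "multi-line" file is one big top-level JSON array: rewrite it once as JSON Lines, after which Spark can read it in single-line mode with no multiLine option. The function name and paths are assumptions for illustration; note `json.load` still reads the whole file once, so this is a one-off conversion step, not part of the recurring job.

```python
import json

def json_array_to_jsonl(src_path, dst_path):
    """Rewrite a file holding one top-level JSON array as JSON Lines
    (one object per line), so Spark can split and read it in parallel
    without multiLine=true."""
    with open(src_path) as src:
        records = json.load(src)  # expects a top-level JSON array
    with open(dst_path, "w") as dst:
        for record in records:
            dst.write(json.dumps(record) + "\n")

# hypothetical paths, for illustration only:
# json_array_to_jsonl("/dbfs/input/big.json", "/dbfs/input/big.jsonl")
```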
02-16-2022 08:29 AM
Hello, @Jana A! It's nice to meet you! My name is Piper, and I'm a moderator for Databricks. Welcome to the community, and thanks for your question. We'll give your peers a chance to respond and then we'll circle back if we need to.
Thanks in advance for your patience.
02-16-2022 11:23 PM
Have you checked this topic? There might be some ideas there.
03-01-2022 12:33 AM
Note: the DataFrame was created with multiLine = true. The job was running long and slowing down the cluster performance. Can you please help me with this issue?
Thanks
03-01-2022 12:48 AM
With multiLine = true, the JSON is read as a whole and processed as such.
I'd try a beefier cluster.
03-03-2022 09:55 AM
Yes, the issue was with the multiLine = true property; Spark is reading the file as a whole. How do I resolve the issue?
10-28-2024 10:47 AM
Splitting the file was the easiest solution for me. I was trying to load a 3 GB JSON file into a Delta table on a cluster with 128 GB of memory, and the resulting error message did not help identify the issue. I split the file into three 1 GB files, and it worked like a charm.
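For files that are already JSON Lines, the splitting step can be sketched as a streaming pass in plain Python (a single multi-line JSON document would need to be converted to one-object-per-line form first). The function name, template, and chunk size below are assumptions for illustration:

```python
import itertools

def split_jsonl(src_path, dst_template, lines_per_part=1_000_000):
    """Split a JSON Lines file into smaller files of at most
    lines_per_part lines each. dst_template receives the part index,
    e.g. "part-{}.jsonl". Streams the input, so memory use stays flat.
    Returns the number of part files written."""
    part = 0
    with open(src_path) as src:
        while True:
            chunk = list(itertools.islice(src, lines_per_part))
            if not chunk:
                break
            with open(dst_template.format(part), "w") as dst:
                dst.writelines(chunk)
            part += 1
    return part
```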
03-04-2022 09:40 AM
Increase driver memory or executor memory? I changed my cluster's executor configuration from 14 GB to 28 GB. With that change, we were able to complete the job without an issue.
03-07-2022 03:14 PM
Hi @Jana A,
Did @Werner Stinckens' reply help you resolve your issue? If yes, could you please mark his response as the "best response"?