Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Optimizing Delta Live Table Ingestion Performance for Large JSON Datasets

brian_zavareh
New Contributor III

I'm currently facing challenges with optimizing the performance of a Delta Live Table pipeline in Azure Databricks. The task involves ingesting over 10 TB of raw JSON log files from an Azure Data Lake Storage account into a bronze Delta Live Table layer. Notably, the number of JSON files exceeds 500,000. The table structure is quite wide, featuring more than 4000 columns (out of over 10,000 in the source files) and totaling over 12 billion rows.

In this process, I'm not performing any complex transformations, just appending a few columns for partitioning and log tracking. All data is ingested as strings, and Auto Loader runs in file notification mode.

However, the initial data load is projected to take an excessive amount of time, estimated at over 20 days for all files, which is far beyond acceptable limits. Below is a snippet of my current SQL notebook setup, and I'm open to transitioning to PySpark if it offers a better solution to this bottleneck.

Here are the code and pipeline settings I'm using:

CREATE OR REFRESH STREAMING LIVE TABLE `periodic_raw_poc`
PARTITIONED BY (device_id)
COMMENT "Ingest raw JSON data into a Delta Live Table with a predefined schema."
TBLPROPERTIES (
    'delta.minReaderVersion' = '2',
    'delta.minWriterVersion' = '5',
    'delta.columnMapping.mode' = 'name'
)
AS SELECT 
-- Derive log_name and device_id from the source file path.
regexp_extract(_metadata.file_path, '(\\w+_\\w+_\\w+_\\w+)', 1) AS log_name,
regexp_extract(_metadata.file_path, '(\\w+)_(\\w+)_(\\w+)_(\\w+)', 1) AS device_id,
*
FROM cloud_files(
  "abfss://<container-name>@<storage-account-name>.dfs.core.windows.net/RUNFILES/*/*.log.periodic.json",
  "json",
  map(
    "subscriptionId", "****",
    "tenantId", "****",
    "clientId", "****",
    "clientSecret", "****",
    "resourceGroup", "****",
    "useNotifications", "true",
    "fetchParallelism", "32",
    "schema", "`_timestamp` STRING, ..." -- remaining ~4000 columns, all declared as STRING
  )
)

{
    "id": "****",
    "pipeline_type": "WORKSPACE",
    "clusters": [
        {
            "label": "default",
            "node_type_id": "Standard_DS3_v2",
            "driver_node_type_id": "Standard_DS4_v2",
            "autoscale": {
                "min_workers": 1,
                "max_workers": 6,
                "mode": "ENHANCED"
            }
        }
    ],
    "development": true,
    "continuous": false,
    "channel": "PREVIEW",
    "photon": true,
    "libraries": [
        {
            "notebook": {
                "path": "SQL notebook path"
            }
        }
    ],
    "name": "****",
    "edition": "ADVANCED",
    "catalog": "****",
    "target": "****",
    "data_sampling": false
}


Despite these settings, the process is much slower than anticipated. I'm looking for insights or optimization strategies from those who have tackled similar challenges with Delta Live Tables, especially concerning large-scale data ingestion. 
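For reference, a rough PySpark/DLT sketch of the same bronze table is below, since moving to Python is on the table. It is untested; the schema string, storage path, and notification credentials are placeholders, and `spark`/`dlt` are the objects provided by the DLT runtime.

import dlt
from pyspark.sql import functions as F

# Placeholder: the full predefined schema (~4000 columns, all STRING) goes here,
# mirroring the schema string passed to cloud_files in the SQL version.
BRONZE_SCHEMA = "`_timestamp` STRING"  # ... extend with the remaining columns

SOURCE_PATH = (
    "abfss://<container-name>@<storage-account-name>"
    ".dfs.core.windows.net/RUNFILES/*/*.log.periodic.json"
)

@dlt.table(
    name="periodic_raw_poc",
    comment="Ingest raw JSON data into a Delta Live Table with a predefined schema.",
    partition_cols=["device_id"],
    table_properties={
        "delta.minReaderVersion": "2",
        "delta.minWriterVersion": "5",
        "delta.columnMapping.mode": "name",
    },
)
def periodic_raw_poc():
    # File-notification credentials (subscriptionId, tenantId, clientId,
    # clientSecret, resourceGroup) would be passed as additional cloudFiles
    # options here, as in the SQL map above.
    raw = (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .option("cloudFiles.useNotifications", "true")
        .option("cloudFiles.fetchParallelism", "32")
        .schema(BRONZE_SCHEMA)
        .load(SOURCE_PATH)
    )
    file_path = F.col("_metadata.file_path")
    return raw.select(
        F.regexp_extract(file_path, r"(\w+_\w+_\w+_\w+)", 1).alias("log_name"),
        F.regexp_extract(file_path, r"(\w+)_(\w+)_(\w+)_(\w+)", 1).alias("device_id"),
        "*",
    )

Switching languages alone is unlikely to change throughput much; the sketch is mainly a starting point for adding per-batch limits or other options that are awkward to express in the SQL options map.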

4 REPLIES

brian_zavareh
New Contributor III

Hi @Retired_mod,

Thank you for the insightful suggestions on optimizing our Delta Live Table pipeline. I'm excited to apply your recommendations, especially around partitioning, clustering, and Autoloader configurations. I'll implement these changes and reach out if I have any more questions.

Cheers
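For context, Auto Loader configuration tuning in a setup like this usually centres on how much data each micro-batch pulls in. A minimal PySpark sketch of the relevant reader options follows; the path, schema, and option values are illustrative placeholders, and `spark` is assumed to be the ambient session of a Databricks notebook.

# Illustrative Auto Loader batch-sizing options; tune the values against the
# cluster size and the ~500k small JSON files described above.
source_path = "abfss://<container-name>@<storage-account-name>.dfs.core.windows.net/RUNFILES/*/*.log.periodic.json"
schema_str = "`_timestamp` STRING"  # placeholder: the full predefined all-STRING schema

df = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.useNotifications", "true")
    # Caps on how many files / how many bytes each micro-batch ingests.
    .option("cloudFiles.maxFilesPerTrigger", "2000")
    .option("cloudFiles.maxBytesPerTrigger", "50g")
    # Threads used to fetch events from the notification queue.
    .option("cloudFiles.fetchParallelism", "32")
    .schema(schema_str)
    .load(source_path)
)

Larger per-batch caps mean fewer, bigger micro-batches; whether that helps depends on the cluster sizing discussed in the replies below.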

standup1
Contributor

In addition to Kanzi's suggestion: I noticed that you are using Standard_DS3_v2 workers, and that node type might be too small for this job. If you can afford a larger worker type, you should consider it.

 

brian_zavareh
New Contributor III

Thanks for the heads-up @standup1. I agree; I was using that node type for the POC and will pick larger clusters for the main job. Do you know of any good practices for choosing the number and size of workers and the driver?

standup1
Contributor

Hey @brian_zavareh, see this document. I hope it helps.

https://learn.microsoft.com/en-us/azure/databricks/compute/cluster-config-best-practices

Just keep in mind that there's some extra cost on the Azure VM side; check your Azure Cost Analysis for more details. Use tags when you create your pipeline so it's easy to drill down to that specific pipeline's cost.
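For illustration only, a larger cluster block in the same pipeline-settings JSON might look like the following; the node types, worker counts, and tag values are assumptions to check against the linked guidance and the actual workload, not a recommendation.

"clusters": [
    {
        "label": "default",
        "node_type_id": "Standard_E8ds_v4",
        "driver_node_type_id": "Standard_E8ds_v4",
        "custom_tags": {
            "cost_center": "bronze-ingest-poc"
        },
        "autoscale": {
            "min_workers": 4,
            "max_workers": 16,
            "mode": "ENHANCED"
        }
    }
]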
