Optimizing Delta Live Table Ingestion Performance for Large JSON Datasets

brian_zavareh
New Contributor III

I'm currently facing challenges with optimizing the performance of a Delta Live Table pipeline in Azure Databricks. The task involves ingesting over 10 TB of raw JSON log files from an Azure Data Lake Storage account into a bronze Delta Live Table layer. Notably, the number of JSON files exceeds 500,000. The table structure is quite wide, featuring more than 4000 columns (out of over 10,000 in the source files) and totaling over 12 billion rows.

In this process, I'm not performing any complex transformations, just appending a few columns for partitioning and log tracking. All data is ingested as strings. The Autoloader is configured with file notification mode enabled.

However, the initial data load is projected to take an excessive amount of time, estimated at over 20 days for all files, which is far beyond acceptable limits. Below is a snippet of my current SQL notebook setup, and I'm open to transitioning to PySpark if it offers a better solution to this bottleneck.

Here are the code and pipeline settings I'm using:

CREATE OR REFRESH STREAMING LIVE TABLE `periodic_raw_poc`
PARTITIONED BY (device_id)
COMMENT "Ingest raw JSON data into a Delta Live Table with a predefined schema."
TBLPROPERTIES (
    'delta.minReaderVersion' = '2',
    'delta.minWriterVersion' = '5',
    'delta.columnMapping.mode' = 'name'
)
AS SELECT
  -- Group 1 is the only capture group in this pattern: the full four-segment log name.
  regexp_extract(_metadata.file_path, '(\\w+_\\w+_\\w+_\\w+)', 1) AS log_name,
  -- Group 1 of this pattern is the first segment of the file name, used as the partition key.
  regexp_extract(_metadata.file_path, '(\\w+)_(\\w+)_(\\w+)_(\\w+)', 1) AS device_id,
  *
FROM cloud_files(
  "abfss://<container-name>@<storage-account-name>.dfs.core.windows.net/RUNFILES/*/*.log.periodic.json",
  "json",
  map(
    "subscriptionId", "****",
    "tenantId", "****",
    "clientId", "****",
    "clientSecret", "****",
    "resourceGroup", "****",
    "useNotifications", "true",
    "fetchParallelism", "32",
    "schema", "`_timestamp` STRING, ... (rest of the 4000 columns as STRING)"
  )
)

{
    "id": "****",
    "pipeline_type": "WORKSPACE",
    "clusters": [
        {
            "label": "default",
            "node_type_id": "Standard_DS3_v2",
            "driver_node_type_id": "Standard_DS4_v2",
            "autoscale": {
                "min_workers": 1,
                "max_workers": 6,
                "mode": "ENHANCED"
            }
        }
    ],
    "development": true,
    "continuous": false,
    "channel": "PREVIEW",
    "photon": true,
    "libraries": [
        {
            "notebook": {
                "path": "SQL notebook path"
            }
        }
    ],
    "name": "****",
    "edition": "ADVANCED",
    "catalog": "****",
    "target": "****",
    "data_sampling": false
}

Despite these settings, the process is much slower than anticipated. I'm looking for insights or optimization strategies from those who have tackled similar challenges with Delta Live Tables, especially concerning large-scale data ingestion. 

1 ACCEPTED SOLUTION

Kaniz
Community Manager

Hi @brian_zavareh! Optimizing the performance of a Delta Live Table pipeline in Azure Databricks for ingesting large volumes of raw JSON log files is crucial.

Let’s explore some strategies to improve the data load process:

  1. Partitioning and Clustering:

    • Ensure that your table is properly partitioned and clustered. Partitioning by relevant columns (such as device_id) can significantly speed up queries and reduce data shuffling during reads.
    • Clustering the data based on frequently accessed columns can further enhance performance. Consider using the CLUSTERED BY clause when creating the table.
  2. File Size and Coalescing:

  3. Change Data Capture (CDC):

  4. Autoloader Configuration:

    • Since you’re using the Autoloader, ensure that it’s configured optimally. Consider adjusting parameters like fetchParallelism based on your cluster resources and data volume (see the PySpark sketch at the end of this reply).
    • Verify that the file notification mode is correctly set to “true” for efficient file discovery.
  5. Streaming vs. Batch:

    • Evaluate whether streaming or batch processing is more suitable for your use case. Streaming tables are recommended for most ingestion scenarios.
    • If you choose batch processing, consider using triggered mode for better control over execution and cost management.
  6. Databricks Jobs:

    • Use Databricks Jobs to schedule recurring pipeline runs. The new ‘Schedule Pipeline’ button in the DLT UI simplifies this process.
    • Monitor job history and configure email notifications for better visibility.

Remember that Delta Live Tables abstracts away operational complexities, allowing you to focus on writing queries. By following these best practices, you can significantly improve the performance of your data ingestion pipeline. Good luck! 🚀
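
To make point 4 concrete, and since you mentioned being open to PySpark, here is a minimal sketch of the same bronze table written as a Python DLT definition with the Auto Loader options spelled out. The maxFilesPerTrigger and maxBytesPerTrigger values are illustrative assumptions to tune against your cluster, the raw_schema DDL string is elided exactly as in your SQL version, and the service-principal values stay masked.

import dlt
from pyspark.sql.functions import col, regexp_extract

# `spark` is provided by the DLT runtime in pipeline notebooks.
# Predefined all-STRING schema as a DDL string; the full 4000-column list is elided here.
raw_schema = "`_timestamp` STRING, ..."

@dlt.table(
    name="periodic_raw_poc",
    comment="Ingest raw JSON data into a Delta Live Table with a predefined schema.",
    partition_cols=["device_id"],
    table_properties={
        "delta.minReaderVersion": "2",
        "delta.minWriterVersion": "5",
        "delta.columnMapping.mode": "name",
    },
)
def periodic_raw_poc():
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        # File notification mode with the same service-principal details as the SQL map().
        .option("cloudFiles.useNotifications", "true")
        .option("cloudFiles.subscriptionId", "****")
        .option("cloudFiles.tenantId", "****")
        .option("cloudFiles.clientId", "****")
        .option("cloudFiles.clientSecret", "****")
        .option("cloudFiles.resourceGroup", "****")
        .option("cloudFiles.fetchParallelism", "32")
        # Illustrative per-batch caps (assumed values); raise or lower them based on cluster size.
        .option("cloudFiles.maxFilesPerTrigger", "10000")
        .option("cloudFiles.maxBytesPerTrigger", "50g")
        .schema(raw_schema)
        .load("abfss://<container-name>@<storage-account-name>.dfs.core.windows.net/RUNFILES/*/*.log.periodic.json")
        .withColumn("log_name",
                    regexp_extract(col("_metadata.file_path"), r"(\w+_\w+_\w+_\w+)", 1))
        .withColumn("device_id",
                    regexp_extract(col("_metadata.file_path"), r"(\w+)_(\w+)_(\w+)_(\w+)", 1))
    )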

 


5 REPLIES

brian_zavareh
New Contributor III

Hi @Kaniz,

Thank you for the insightful suggestions on optimizing our Delta Live Table pipeline. I'm excited to apply your recommendations, especially around partitioning, clustering, and Autoloader configurations. I'll implement these changes and reach out if I have any more questions.

Cheers

standup1
New Contributor III

In addition to Kaniz's suggestions: I noticed that you are using Standard_DS3_v2 workers, which might be too small for this job. If you can afford to move to a larger worker type, you should consider it.

brian_zavareh
New Contributor III

Thanks for the heads-up, @standup1. I agree; I was using it for the POC and will select bigger clusters for the main job. Do you know of any good practices for selecting the number and size of nodes for both the workers and the driver?

standup1
New Contributor III

Hey @brian_zavareh, see this document. I hope it helps.

https://learn.microsoft.com/en-us/azure/databricks/compute/cluster-config-best-practices

Just keep in mind that there's some extra cost on the Azure VM side; check your Azure Cost Analysis for more details. Use tags when you create your pipeline so it will be easy to drill down into that specific pipeline's cost.
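
For illustration, a larger cluster block in the pipeline settings could look something like the sketch below, with custom tags added so the pipeline's cost shows up cleanly in Cost Analysis. The node types, worker counts, and tag names are placeholders made up for this example; pick real values using the sizing guide above.

"clusters": [
    {
        "label": "default",
        "node_type_id": "Standard_DS5_v2",
        "driver_node_type_id": "Standard_DS5_v2",
        "autoscale": {
            "min_workers": 4,
            "max_workers": 16,
            "mode": "ENHANCED"
        },
        "custom_tags": {
            "project": "periodic_raw_poc",
            "env": "poc"
        }
    }
]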
