04-02-2024 02:48 PM
I'm currently facing challenges with optimizing the performance of a Delta Live Table pipeline in Azure Databricks. The task involves ingesting over 10 TB of raw JSON log files from an Azure Data Lake Storage account into a bronze Delta Live Table layer. Notably, the number of JSON files exceeds 500,000. The table structure is quite wide, featuring more than 4000 columns (out of over 10,000 in the source files) and totaling over 12 billion rows.
In this process, I'm not performing any complex transformations, just appending a few columns for partitioning and log tracking. All data is ingested as strings. The Autoloader is configured with file notification mode enabled.
However, the initial data load is projected to take an excessive amount of time, estimated at over 20 days for all files, which is far beyond acceptable limits. Below is a snippet of my current SQL notebook setup, and I'm open to transitioning to PySpark if it offers a better solution to this bottleneck.
Here are the code and pipeline settings that I'm using:
CREATE OR REFRESH STREAMING LIVE TABLE `periodic_raw_poc`
PARTITIONED BY (device_id)
COMMENT "Ingest raw JSON data into a Delta Live Table with a predefined schema."
TBLPROPERTIES (
'delta.minReaderVersion' = '2',
'delta.minWriterVersion' = '5',
'delta.columnMapping.mode' = 'name'
)
AS SELECT
regexp_extract(_metadata.file_path, '(\\w+_\\w+_\\w+_\\w+)', 2) AS log_name,
regexp_extract(_metadata.file_path, '(\\w+)_(\\w+)_(\\w+)_(\\w+)', 1) AS device_id,
*
FROM cloud_files(
"abfss://<container-name>@<storage-account-name>.dfs.core.windows.net/RUNFILES/*/*.log.periodic.json",
"json",
map(
"subscriptionId", "****",
"tenantId", "****",
"clientId", "****",
"clientSecret", "****",
"resourceGroup", "****",
"useNotifications", "true",
"fetchParallelism", "32",
"schema", "`_timestamp` STRING,... (rest of the 4000 columns as STRING Type)
)
{
"id": "****",
"pipeline_type": "WORKSPACE",
"clusters": [
{
"label": "default",
"node_type_id": "Standard_DS3_v2",
"driver_node_type_id": "Standard_DS4_v2",
"autoscale": {
"min_workers": 1,
"max_workers": 6,
"mode": "ENHANCED"
}
}
],
"development": true,
"continuous": false,
"channel": "PREVIEW",
"photon": true,
"libraries": [
{
"notebook": {
"path": "SQL notebook path"
}
}
],
"name": "****",
"edition": "ADVANCED",
"catalog": "****",
"target": "****",
"data_sampling": false
}
Despite these settings, the process is much slower than anticipated. I'm looking for insights or optimization strategies from those who have tackled similar challenges with Delta Live Tables, especially concerning large-scale data ingestion.
04-05-2024 03:16 AM
Hi @brian_zavareh, Optimizing the performance of a Delta Live Table pipeline in Azure Databricks for ingesting large volumes of raw JSON log files is crucial.
Let’s explore some strategies to improve the data load process:
- Partitioning and Clustering: Partitioning the table by a relevant column (e.g., device_id) can significantly speed up queries and reduce data shuffling during reads. You can also consider the CLUSTERED BY clause when creating the table.
- File Size and Coalescing: Run the OPTIMIZE command periodically to compact small files into larger ones, and set spark.databricks.delta.optimize.repartition.enabled=true to use repartitioning instead of coalesce (see the SQL sketch after this list).
- Change Data Capture (CDC): Use the APPLY CHANGES INTO API. This allows you to efficiently capture continually arriving data, whether in SQL or Python.
- Autoloader Configuration: Tune fetchParallelism based on your cluster resources and data volume.
- Streaming vs. Batch:
- Databricks Jobs:
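To make the file-size suggestion concrete, here is a minimal SQL sketch, assuming the bronze table is published as periodic_raw_poc in your target schema and that 'dev_001' stands in for a real device_id value (both are placeholders based on your snippet, not verified names):

-- Ask OPTIMIZE to repartition instead of coalesce when it rewrites files
SET spark.databricks.delta.optimize.repartition.enabled = true;

-- Compact small files across the whole table ...
OPTIMIZE periodic_raw_poc;

-- ... or limit the rewrite to a single partition to control cost
OPTIMIZE periodic_raw_poc WHERE device_id = 'dev_001';

Keep in mind that Delta Live Tables also schedules its own maintenance (including OPTIMIZE) on the tables it manages, so manual runs are mainly useful for ad-hoc compaction after the initial backfill.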
Remember that Delta Live Tables abstracts away operational complexities, allowing you to focus on writing queries. By following these best practices, you can significantly improve the performance of your data ingestion pipeline. Good luck! 🚀
04-06-2024 01:25 PM
Hi @Kaniz,
Thank you for the insightful suggestions on optimizing our Delta Live Table pipeline. I'm excited to apply your recommendations, especially around partitioning, clustering, and Autoloader configurations. I'll implement these changes and reach out if I have any more questions.
Cheers
04-06-2024 06:46 PM
In addition to Kaniz's suggestions: I noticed that you are using Standard_DS3_v2 workers, which might be too small for this job. If you can afford a larger worker type, you should consider it.
04-09-2024 09:00 AM
Thanks for the heads-up @standup1. I agree; I was only using it for the POC, and I'll pick bigger clusters for the main job. Do you know of any good practices for choosing the number of workers and the node sizes for both the workers and the driver?
04-09-2024 09:14 AM
Hey @brian_zavareh, see this document. I hope it helps.
https://learn.microsoft.com/en-us/azure/databricks/compute/cluster-config-best-practices
Just keep in mind that there is some extra cost on the Azure VM side; check your Azure Cost Analysis for more details. Use tags when you create your pipeline, so it will be easy for you to drill down and see that specific pipeline's cost.
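For example, a rough sketch of how the clusters block in your pipeline settings could look with larger nodes and cost-tracking tags (the node types, worker counts, and tag values are placeholders to illustrate the idea, not sizing advice for your workload):

"clusters": [
  {
    "label": "default",
    "node_type_id": "Standard_DS5_v2",
    "driver_node_type_id": "Standard_DS5_v2",
    "autoscale": {
      "min_workers": 4,
      "max_workers": 12,
      "mode": "ENHANCED"
    },
    "custom_tags": {
      "project": "periodic-raw-ingest",
      "cost_center": "data-engineering"
    }
  }
]

Cluster tags propagate to the underlying Azure VMs, so you can filter on them in Azure Cost Analysis.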