I'm currently facing challenges with optimizing the performance of a Delta Live Table pipeline in Azure Databricks. The task involves ingesting over 10 TB of raw JSON log files from an Azure Data Lake Storage account into a bronze Delta Live Table layer. Notably, the number of JSON files exceeds 500,000. The table structure is quite wide, featuring more than 4000 columns (out of over 10,000 in the source files) and totaling over 12 billion rows.
In this process, I'm not performing any complex transformations; I'm just adding a few columns for partitioning and log tracking. All data is ingested as strings, and Auto Loader is configured with file notification mode enabled.
However, the initial data load is projected to take an excessive amount of time, estimated at over 20 days for all files, which is far beyond acceptable limits. Below is a snippet of my current SQL notebook setup, and I'm open to transitioning to PySpark if that would help resolve this bottleneck (a rough sketch of a PySpark equivalent is included after the pipeline settings).
Here are the SQL code and the pipeline settings (JSON) that I'm using:
CREATE OR REFRESH STREAMING LIVE TABLE `periodic_raw_poc`
PARTITIONED BY (device_id)
COMMENT "Ingest raw JSON data into a Delta Live Table with a predefined schema."
TBLPROPERTIES (
'delta.minReaderVersion' = '2',
'delta.minWriterVersion' = '5',
'delta.columnMapping.mode' = 'name'
)
AS SELECT
  -- Derive log_name and device_id (used for partitioning and log tracking) from the source file path
  regexp_extract(_metadata.file_path, '(\\w+_\\w+_\\w+_\\w+)', 1) AS log_name,
  regexp_extract(_metadata.file_path, '(\\w+)_(\\w+)_(\\w+)_(\\w+)', 1) AS device_id,
  *
FROM cloud_files(
  "abfss://<container-name>@<storage-account-name>.dfs.core.windows.net/RUNFILES/*/*.log.periodic.json",
  "json",
  map(
    "cloudFiles.subscriptionId", "****",
    "cloudFiles.tenantId", "****",
    "cloudFiles.clientId", "****",
    "cloudFiles.clientSecret", "****",
    "cloudFiles.resourceGroup", "****",
    "cloudFiles.useNotifications", "true",
    "cloudFiles.fetchParallelism", "32",
    "schema", "`_timestamp` STRING, ... (rest of the 4000 columns as STRING)"
  )
)
{
  "id": "****",
  "pipeline_type": "WORKSPACE",
  "clusters": [
    {
      "label": "default",
      "node_type_id": "Standard_DS3_v2",
      "driver_node_type_id": "Standard_DS4_v2",
      "autoscale": {
        "min_workers": 1,
        "max_workers": 6,
        "mode": "ENHANCED"
      }
    }
  ],
  "development": true,
  "continuous": false,
  "channel": "PREVIEW",
  "photon": true,
  "libraries": [
    {
      "notebook": {
        "path": "SQL notebook path"
      }
    }
  ],
  "name": "****",
  "edition": "ADVANCED",
  "catalog": "****",
  "target": "****",
  "data_sampling": false
}
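Since I mentioned being open to PySpark, here is a rough sketch of what I believe the equivalent Python DLT definition would look like. It uses the same placeholder path and masked credentials as the SQL above, the full ~4000-column schema string is elided, and the Auto Loader options are written in the documented cloudFiles.* form:

import dlt
from pyspark.sql.functions import col, regexp_extract

# Placeholder: the real DDL string lists all ~4000 columns as STRING
RAW_SCHEMA = "`_timestamp` STRING, ..."

@dlt.table(
    name="periodic_raw_poc",
    comment="Ingest raw JSON data into a Delta Live Table with a predefined schema.",
    partition_cols=["device_id"],
    table_properties={
        "delta.minReaderVersion": "2",
        "delta.minWriterVersion": "5",
        "delta.columnMapping.mode": "name",
    },
)
def periodic_raw_poc():
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .option("cloudFiles.useNotifications", "true")
        .option("cloudFiles.subscriptionId", "****")
        .option("cloudFiles.tenantId", "****")
        .option("cloudFiles.clientId", "****")
        .option("cloudFiles.clientSecret", "****")
        .option("cloudFiles.resourceGroup", "****")
        .option("cloudFiles.fetchParallelism", "32")
        .schema(RAW_SCHEMA)
        .load("abfss://<container-name>@<storage-account-name>.dfs.core.windows.net/RUNFILES/*/*.log.periodic.json")
        # Same derived columns as in the SQL version
        .withColumn("log_name", regexp_extract(col("_metadata.file_path"), r"(\w+_\w+_\w+_\w+)", 1))
        .withColumn("device_id", regexp_extract(col("_metadata.file_path"), r"(\w+)_(\w+)_(\w+)_(\w+)", 1))
    )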
Despite these settings, the process is much slower than anticipated. I'm looking for insights or optimization strategies from those who have tackled similar challenges with Delta Live Tables, especially concerning large-scale data ingestion.