Databricks "Preferred" Approaches To Backfilling Single Column In Wide Tables
10-16-2024 06:24 AM
Hi there,
I've tried thinking through this and googling as well, but I'm not sure if there's a better approach that I might be missing. We have *wide* tables with hundreds of columns, and on a day-to-day basis these tables are incrementally filled in "as expected". HOWEVER, the number of columns may grow over time. If this happens for one of our tables, these new columns will have data on a "go-forward" basis, but will obviously not have historical data.
With our setup, we "could" request via API a backfill of ONLY the new column(s) with their respective historical data... But herein lies the problem, how do we handle backfilling ONLY the new column(s) without impacting existing columns' records?
One approach I've thought up requires architecting pivoting & unpivoting from the start. For example:
- Pull in raw json via API (the data is pulled in *wide* format)
- Unpivot the *wide* json data into a *long* table
- Upsert accordingly (no duplicates)
- Pivot BACK into a *wide* table
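The steps above can be sketched in miniature. This is a plain-Python stand-in for the Spark unpivot/upsert/pivot flow (column names like `c_new` are hypothetical, and dicts stand in for DataFrames), just to show why the round trip preserves existing cells:

```python
# Stand-in for the wide -> long -> upsert -> wide round trip.
# In Spark this would be melt/unpivot, a keyed MERGE, then pivot.

def unpivot(wide_rows, key="id"):
    """Wide rows -> long records: one (row_key, column) -> value per cell."""
    long_records = {}
    for row in wide_rows:
        for col, val in row.items():
            if col != key:
                long_records[(row[key], col)] = val
    return long_records

def upsert(target_long, source_long):
    """Insert or overwrite cells; untouched (row_key, column) pairs survive."""
    merged = dict(target_long)
    merged.update(source_long)
    return merged

def pivot(long_records, key="id"):
    """Long records back into wide rows."""
    wide = {}
    for (row_key, col), val in long_records.items():
        wide.setdefault(row_key, {key: row_key})[col] = val
    return list(wide.values())

# Existing wide data, plus a backfill payload for a new column "c_new".
existing = unpivot([{"id": 1, "a": 10}, {"id": 2, "a": 20}])
backfill = unpivot([{"id": 1, "c_new": 99}, {"id": 2, "c_new": 88}])
result = pivot(upsert(existing, backfill))
# Column "a" is untouched; only the "c_new" cells were added.
```

The key property is that the upsert happens per *cell* in long format, so a backfill payload containing only the new column can never collide with existing columns' records.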
This process is more convoluted than I would like, and I'm wondering if, via the magic of Delta tables or some such approach, we can backfill ONLY specific columns without affecting existing columns while minimizing the complexity of our process?
10-16-2024 12:19 PM
Hi @ChristianRRL ,
If I understand correctly, you have an API to get historical data for a column.
If so, you can use the MERGE statement.
You join the source to the target table by the key columns, and when there is a match you UPDATE SET target.column_to_backfill = source.column_from_api. Because MERGE only writes the columns named in the UPDATE SET clause, all other columns in the wide table are left untouched.
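As a runnable sketch of those MERGE semantics, here is the same column-targeted update done locally with sqlite3 as a stand-in (table and column names are hypothetical); the Delta SQL it mimics is shown in the comment:

```python
# On Databricks, the backfill the reply describes is a single Delta MERGE:
#
#   MERGE INTO target t
#   USING backfill_source s
#   ON t.id = s.id
#   WHEN MATCHED THEN
#     UPDATE SET t.column_to_backfill = s.column_from_api
#
# MERGE writes only the columns named in UPDATE SET, so every other
# column in the wide table is untouched. Below, a correlated subquery
# plays the role of the MERGE join in sqlite.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE target (id INTEGER PRIMARY KEY, a INT, new_col INT);
    CREATE TABLE backfill_source (id INTEGER PRIMARY KEY, new_col INT);
    INSERT INTO target VALUES (1, 10, NULL), (2, 20, NULL);
    INSERT INTO backfill_source VALUES (1, 99), (2, 88);
""")

# Column-targeted update: only new_col is written; column "a" is never touched.
conn.execute("""
    UPDATE target
    SET new_col = (SELECT s.new_col FROM backfill_source s
                   WHERE s.id = target.id)
    WHERE id IN (SELECT id FROM backfill_source)
""")
rows = conn.execute("SELECT id, a, new_col FROM target ORDER BY id").fetchall()
```

The WHERE clause matters: without it, rows with no match in the source would have `new_col` overwritten with NULL, whereas MERGE's WHEN MATCHED branch skips them automatically.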

