Thursday
Hi all,
I am new to Databricks and Delta tables. I have recently been researching data archiving, as we have a data retention policy within the company.
I have studied the documentation (Archival support in Databricks | Databricks on AWS) and I am exploring the feasibility of applying this feature to my case.
Let's say I have a requirement to archive data after 5 years and all of the tables are Delta tables. I would like to ask the following questions:
1. How should I configure this in Databricks?
2. Can I specify a datetime column in the table as the archive reference duration? Or does it simply count the duration from the time the record was inserted into the table?
3. Is there any difference in terms of archiving between managed tables and external tables?
4. Is there any difference in implementation if the environment is UC-enabled?
5. As per my understanding, if we enable archival support but do not configure any lifecycle management on the storage side (we are using an Azure storage account), there is no cost saving on storage, as Databricks simply ignores the data and the underlying storage remains unchanged. May I know how we can configure lifecycle management on the Azure storage account so it aligns with the archival support setting on the Databricks side?
I would highly appreciate it if someone could help me with the above questions. Thanks in advance.
Friday - last edited Friday
Hi @Brianben ,
Source: https://docs.databricks.com/en/optimizations/archive-delta.html
To enable archival support in Databricks, you need to set the archival interval through a table property. For example:
ALTER TABLE <table_name> SET TBLPROPERTIES (delta.timeUntilArchived = '1825 days');
In this case, delta.timeUntilArchived = '1825 days' indicates that the data should be archived after five years (assuming 365 days per year). Please note that Databricks archival support depends on corresponding cloud-side lifecycle management policies (such as those provided by Azure, AWS, or GCP).
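To confirm that the property has been applied, you can check it afterwards. This is only a small sketch using the same placeholder table name as above:
SHOW TBLPROPERTIES <table_name> ('delta.timeUntilArchived');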
Archiving in Databricks currently relies on the file creation time recorded in the Delta transaction log. There isn’t a built-in mechanism to use a custom datetime column as the trigger for archiving. In other words, Databricks will consider the age of data based on when it was written to the Delta log, not a separate timestamp column.
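If your retention rule must be driven by a business timestamp instead, a manual approach outside archival support is possible. The sketch below is only an illustration; the table names and the event_date column are assumptions, not part of any built-in feature:
-- Hypothetical manual archiving based on an assumed event_date column
CREATE TABLE IF NOT EXISTS my_table_archive AS SELECT * FROM my_table WHERE 1 = 0;  -- copy schema only
INSERT INTO my_table_archive SELECT * FROM my_table WHERE event_date < date_sub(current_date(), 1825);
DELETE FROM my_table WHERE event_date < date_sub(current_date(), 1825);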
Based on current documentation, there are no differences in how archival works for managed vs. external Delta tables. The archival process is driven by the same Delta transaction log mechanism and cloud lifecycle management policies, regardless of whether your table is managed or external.
The existing documentation does not indicate a difference in enabling or configuring archival support for UC-enabled environments. Archival support relies on the Delta transaction log and compatible compute environments, which should operate similarly in both UC and non-UC settings.
If you only enable Databricks archival support without configuring lifecycle management on Azure, the data is merely marked as archived within Databricks but remains in the same storage tier on Azure, resulting in no cost reduction.
To align Databricks archival with cost-saving measures, configure a lifecycle management policy on your Azure storage account whose interval matches the value of delta.timeUntilArchived (1825 days in this example), so that the affected files are actually moved to a cheaper storage tier. For detailed guidance on creating and managing lifecycle policies in Azure Storage, please refer to:
https://learn.microsoft.com/en-us/azure/storage/blobs/lifecycle-management-overview
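As an illustration only, a lifecycle management rule aligned with the 1825-day interval could look like the JSON below; the rule name and the prefixMatch path are placeholders you would need to adapt to your own container and table location:
{
  "rules": [
    {
      "enabled": true,
      "name": "archive-delta-after-5-years",
      "type": "Lifecycle",
      "definition": {
        "actions": {
          "baseBlob": {
            "tierToArchive": { "daysAfterCreationGreaterThan": 1825 }
          }
        },
        "filters": {
          "blobTypes": [ "blockBlob" ],
          "prefixMatch": [ "<container>/<path-to-delta-table>" ]
        }
      }
    }
  ]
}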
Please let us know if this answers everything or raises further questions.
yesterday
@VZLA Thank you very much for the detailed explanation. No further questions, thanks again.
yesterday
@VZLA Just one further question: do you know whether a single Parquet file under a Delta table can contain data written on two different dates? I am worried that if I configure an Azure lifecycle policy to move data files after 5 years, a file might also contain some data that has not yet passed the 5-year criterion, so that data would be moved away earlier than expected. Thanks.