@Ammar Ammar :
The error message you're seeing suggests that the Delta Lake transaction log for the common model's test table has been truncated or deleted, either manually or because of the retention policies set in your cluster. Delta periodically cleans up transaction log entries that are older than the configured log retention period, so this can happen when the log has been around longer than that retention window.
To fix this issue, you can try the following steps:
- Confirm that the Delta log has actually been truncated or deleted. You can check the cluster logs, or run a query against the common model's test table (see the example below) and see whether it fails with the same error message. If the log is gone, you will need to recreate the table and reload the data.
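A minimal way to do that check, assuming the table is reachable as common_model.test (a placeholder name, substitute your own), is a simple query in a notebook:
%sql
-- common_model.test is a placeholder; use the actual table name
SELECT * FROM common_model.test LIMIT 1
If this succeeds, the transaction log is still intact; if it fails with the same error, the log really has been cleaned up.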
- If the Delta log has not been deleted, you can try clearing the cached table state by running the following command in a Databricks notebook:
%sql
CLEAR CACHE
This will clear the cached state of all Delta tables in the current cluster. If you don't have access to the cluster, you may need to ask your Databricks administrator to run this command for you.
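If you would rather not clear the cache for everything on the cluster, a narrower option (again using common_model.test as a placeholder name) is to refresh only the affected table:
%sql
REFRESH TABLE common_model.test
REFRESH TABLE invalidates and reloads the cached metadata and data for that one table only.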
- If clearing the cache doesn't work, you can set longer retention periods for the Delta log and checkpoint files, so that they aren't cleaned up before your pipelines have a chance to run. In your Databricks cluster's Spark config, the relevant settings look like this:
spark.databricks.delta.retentionDurationCheck.enabled = true
spark.databricks.delta.properties.defaults.logRetentionDuration = "interval 30 days"
spark.databricks.delta.properties.defaults.checkpointRetentionDuration = "interval 2 days"
The retention duration check stops VACUUM from being run with an unsafely short retention interval, while the log and checkpoint retention settings control how long Delta keeps transaction log entries and checkpoint files before cleaning them up. Adjust those durations as necessary so the files are retained long enough for your pipelines.
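One caveat: the properties.defaults settings above generally apply only to tables created after they are set. For a table that already exists, a more direct option is to set the equivalent Delta table properties on the table itself; common_model.test is again a placeholder, and the durations are just the same example values:
%sql
-- common_model.test is a placeholder for the actual table
ALTER TABLE common_model.test SET TBLPROPERTIES (
  'delta.logRetentionDuration' = 'interval 30 days',
  'delta.checkpointRetentionDuration' = 'interval 2 days'
)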
I hope this helps you resolve your issue!