How do you access a streaming live table's snapshots?

logan0015
Contributor

I have read that Delta Live Tables keeps a history for 7 days. However, after creating a streaming live table and using the dlt.apply_changes function with this code:

import dlt
from pyspark.sql.functions import col

def run_pipeline(table_name, keys, sequence_by):
    lower_table_name = table_name.lower()

    # View used only to infer the schema from a sample of the parquet files
    @dlt.view(name=f"{lower_table_name}_schema",
              comment="Test")
    def create_raw_schema():
        return (spark.read.format("parquet")
                .option("inferSchema", True)
                .load(f"s3://mybucket/test/dbo/{table_name}/")
                .limit(10)
                )

    # Creating the history table as an Auto Loader (cloudFiles) stream
    @dlt.table(name=f"{lower_table_name}_hist",
               comment="test")
    def create_hist_table():
        return (
            spark.readStream.format("cloudFiles")
                .option("cloudFiles.format", "parquet")
                .schema(dlt.read(f"{lower_table_name}_schema").schema)
                .load(f"s3://mybucket/test/dbo/{table_name}/")
        )

    # Creating the current table as the target for apply_changes
    dlt.create_streaming_live_table(
        name=f"{lower_table_name}",
        path=f"s3://mybucket/test/cdc/{table_name}__ct/")

    dlt.apply_changes(
        target=f"{lower_table_name}",
        source=f"{lower_table_name}_hist",
        keys=keys,
        sequence_by=col(sequence_by)
    )

When I attempt to access any version history using

SELECT * FROM dlt.my_table TIMESTAMP AS OF "2022-10-10"

I get this message: "Cannot time travel views."
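One avenue worth checking (a sketch, not a confirmed fix): a DLT apply_changes target is published as a view over a hidden backing Delta table, which is why time travel on the published name fails. Time travel and DESCRIBE HISTORY should instead be run against the backing table. The schema and table names below are placeholders; the `__apply_changes_storage_` prefix is the naming convention DLT typically uses for the backing table, but verify the actual name in your target schema first.

```sql
-- List the tables in the target schema to find the backing table name
SHOW TABLES IN my_schema;

-- Inspect the version history of the backing Delta table
DESCRIBE HISTORY my_schema.__apply_changes_storage_my_table;

-- Time travel against the backing table, not the published view
SELECT * FROM my_schema.__apply_changes_storage_my_table TIMESTAMP AS OF "2022-10-10";
```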

3 REPLIES

Hubert-Dudek
Esteemed Contributor III

Which table from your example do you want to query? None of them is named my_table.

logan0015
Contributor

I've changed some of the code to remove any personal information. The table name is passed into the pipeline's function from another section of code; my_table was just an example name.

Anonymous
Not applicable

Hi @Logan Nicol​ 

Hope all is well! Just wanted to check in to see if you were able to resolve your issue. If so, would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help.

We'd love to hear from you.

Thanks!
