11-30-2022 02:29 AM
Hello,
We have a business request to compare changes between versions of a certain Delta table.
We would like to compare the latest version of the table with the previous one using Delta time travel.
The main issue we are facing is retrieving the N-1 version programmatically using SQL and passing it as a parameter to our SQL query.
The idea is to get something like this:
%sql
select * from schedule@vN
except all
select * from schedule@vN-1
We know that the latest version can be read by simply using the Delta table name (it resolves to the latest version by default), but how can we retrieve the previous Delta version?
We tried to use a 'hivevar' variable (like below), but we are getting an error since the hivevar is not evaluated:
SET hive.variable.substitute=true;
SET hivevar:ver=SELECT VERSION FROM (DESCRIBE HISTORY schema.my_table) LIMIT 1;
SELECT * FROM schema.my_table VERSION AS OF ${hivevar:ver};
The error is below (the variable is substituted as raw text rather than evaluated, so the parser finds SELECT where it expects an integer literal):
- Error in SQL statement: ParseException:
missing INTEGER_VALUE at 'SELECT' (line 1, pos 84)
Any idea how to achieve this?
11-30-2022 03:27 AM
Hi @Nabil AMEZIANE ,
What you were doing actually works.
You just need to fetch the latest version into a variable, like below:
x = spark.sql("""select max(version) from (describe history <table-name>)""").collect()[0][0]
Be sure to also check the operation type in the history output, in case there are DELETE operations; a quick way to eyeball it is sketched below.
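A minimal sketch of that check, assuming a Databricks notebook and the same <table-name> placeholder as above:

# Hypothetical check: look at the most recent entries in the table history
# so you know whether the versions being diffed include DELETEs or other rewrites.
history = spark.sql("describe history <table-name>")
history.select("version", "timestamp", "operation").orderBy(history.version.desc()).show(5, truncate=False)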
Now you have an integer x with the latest version number available. For the previous version, all you need to do is x - 1. Then you have both version numbers.
Now pass these variables into your SQL query reading those versions. I would do it as below:
spark.sql("""
select * from table @v{0}
minus
select * from table @v{1}
""".format(x, x-1))
Hope this helps...Cheers
11-30-2022 03:31 AM
@Uma Maheswara Rao Desula thank you for your quick answer, but I have the constraint of working only in SQL (I can't use Scala/Spark or PySpark code).
11-30-2022 06:27 AM
In the docs it says that "Neither timestamp_expression nor version can be subqueries."
So it does sound challenging. I also tried playing with widgets to see if one could be populated using SQL, but didn't succeed. With Python it's really easy to do; see the sketch below.
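A minimal sketch of that Python route, assuming a Databricks notebook (the widget name ver is illustrative):

# Compute the previous version in Python, then expose it to %sql cells via a widget.
latest = spark.sql("select max(version) from (describe history schema.my_table)").collect()[0][0]
dbutils.widgets.text("ver", str(latest - 1))

A %sql cell should then be able to reference it as ${ver}:

%sql
SELECT * FROM schema.my_table VERSION AS OF ${ver}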
11-30-2022 11:33 AM
Yes, exactly: with simple code it is very easy to do, but not currently possible with SQL alone (I tried using Hive/Spark variables and the result is the same, because the variable is not evaluated). I think we would need something like "hive cmd" on Databricks: first collect the result of the first query into a variable, then apply it to the second query. But that would mean using the shell, which is not the goal.