I have the SQL command below, where I am doing a dry run with VACUUM:
%sql
VACUUM <table_name> RETAIN 500 HOURS DRY RUN;
I wanted to check whether there is a way to achieve this through the Python API. I tried the following, but I am not sure whether there is a parameter we can pass for a dry run:
from delta.tables import DeltaTable

path = "path/to/delta_table"
dt = DeltaTable.forPath(spark, path)  # load the Delta table at the given path
vacuum_out = dt.vacuum(retentionHours=500)
Just as there is a `retentionHours` parameter, is there one for a dry run? If it's not possible through the Python API, is there a PySpark SQL equivalent of this?
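To make it clearer what I'm after, this is the kind of workaround I was considering: building the same `VACUUM ... DRY RUN` statement as a string and submitting it with `spark.sql()`. This is only a sketch; `my_delta_table` and the retention value are placeholders, and the actual `spark.sql()` call is commented out since it needs a live Spark session.

```python
# Sketch of a possible workaround: construct the VACUUM ... DRY RUN
# statement as a string and run it through spark.sql().
# The table name and retention period below are placeholders.
table_name = "my_delta_table"
retention_hours = 500

query = f"VACUUM {table_name} RETAIN {retention_hours} HOURS DRY RUN"
print(query)  # VACUUM my_delta_table RETAIN 500 HOURS DRY RUN

# With an active SparkSession, this would list the files eligible
# for deletion without actually removing them:
# files_df = spark.sql(query)
# files_df.show(truncate=False)
```

Is this the recommended approach, or is there a native dry-run option I'm missing?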
Thank you