You are almost there. From the help page:
> Delta Lake has a safety check to prevent you from running a dangerous `VACUUM` command. If you are certain that there are no operations being performed on this table that take longer than the retention interval you plan to specify, you can turn off this safety check by setting the Spark configuration property `spark.databricks.delta.retentionDurationCheck.enabled` to `false`.
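A minimal sketch of what that looks like in Spark SQL, assuming a table named `my_table` (the table name and the 48-hour retention are placeholders for your own values):

```sql
-- Disable the safety check for this session only
SET spark.databricks.delta.retentionDurationCheck.enabled = false;

-- Now a retention shorter than the 7-day default is allowed
VACUUM my_table RETAIN 48 HOURS;
```

Note that `SET` only affects the current session; re-enable the check (or let the session end) once the short-retention vacuum is done.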
Also:
> It is recommended that you set a retention interval to be at least 7 days, because old snapshots and uncommitted files can still be in use by concurrent readers or writers to the table. If `VACUUM` cleans up active files, concurrent readers can fail or, worse, tables can be corrupted when `VACUUM` deletes files that have not yet been committed. You must choose an interval that is longer than the longest-running concurrent transaction and the longest period that any stream can lag behind the most recent update to the table.
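That last rule can be turned into a quick sanity check before you pick a retention value. This is a hypothetical helper (the function name, the 24-hour margin, and the inputs are my own assumptions, not part of Delta Lake), encoding the logic: take the larger of your longest transaction and worst stream lag, add a safety margin, and never go below the 7-day (168-hour) default:

```python
def safe_retention_hours(longest_txn_hours: float,
                         max_stream_lag_hours: float,
                         floor_hours: float = 168.0,
                         margin_hours: float = 24.0) -> float:
    """Pick a VACUUM retention interval (in hours) that is longer than
    every concurrent transaction and every stream's lag, and never
    shorter than the recommended 7-day floor."""
    # The interval must exceed both the longest transaction and the
    # furthest-behind stream; add a margin so "longer than" holds.
    needed = max(longest_txn_hours, max_stream_lag_hours) + margin_hours
    return max(needed, floor_hours)

print(safe_retention_hours(4, 12))    # 7-day floor dominates: 168.0
print(safe_retention_hours(200, 50))  # long transaction dominates: 224.0
```

Only once the computed value is safely below the retention you intend to pass to `VACUUM ... RETAIN` should you consider disabling the duration check.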