The Delta cache works per file: if your dataset is split across 100 files and only one of them is updated, in theory only that one file should be evicted from the cache. It all happens automatically.
There is also the Spark cache, which is fully manual: you control it yourself with persist/cache operations.
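For context, manual caching in PySpark looks roughly like this (the table name is a placeholder):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# "sales" is a hypothetical table name; use your own.
df = spark.read.table("sales")

# Pin the DataFrame manually; for DataFrames, cache() defaults
# to the MEMORY_AND_DISK storage level.
df.cache()

# Caching is lazy, so trigger an action to actually materialize it.
df.count()

# ... run your queries against the cached data ...

# You are also responsible for releasing it when you are done.
df.unpersist()
```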
When you use a Databricks SQL endpoint, the Delta cache is handled for you automatically. On Delta-cache-optimized VM types it is enabled by default; on other instance types you can turn it on in the Spark config.
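A minimal sketch of enabling it from a notebook, assuming a non-optimized instance type (the sizing values below are purely illustrative and normally belong in the cluster's Spark config at creation time):

```python
# Toggle the Delta (disk) cache at runtime.
spark.conf.set("spark.databricks.io.cache.enabled", "true")

# Sizing knobs such as these are set in the cluster's Spark config
# when the cluster is created, e.g.:
#   spark.databricks.io.cache.enabled true
#   spark.databricks.io.cache.maxDiskUsage 50g
```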
My blog: https://databrickster.medium.com/