The fact that there are multiple Parquet files does not mean all of those files are 'active'. Delta Lake supports time travel, meaning you can roll back a Delta table to a previous state. To be able to do that, it needs the old data.
That is why old data files are not removed immediately, and you can see multiple Parquet files that are no longer referenced by the most recent version of the Delta table.
You can remove them with the VACUUM command:
https://docs.microsoft.com/en-us/azure/databricks/spark/latest/spark-sql/language-manual/delta-vacuu...
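As a rough sketch (assuming a Delta table named my_table and the default 7-day retention period):

VACUUM my_table DRY RUN            -- list the files that would be deleted, without deleting anything
VACUUM my_table                    -- remove files no longer needed by table versions older than the retention period
VACUUM my_table RETAIN 168 HOURS   -- same, with the retention window (7 days) spelled out explicitly

Keep in mind that once VACUUM has removed the old files, you can no longer time travel back to the versions that depended on them.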