We have been using Databricks for some time and didn't know that S3 bucket versioning was not recommended. We have disabled it now. What are the next steps we need to take to clean up the data: should we create a lifecycle rule to delete older versions ourselves, or is there a way Databricks handles that itself?
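If we go the lifecycle-rule route, we assume the configuration would look something like this (the rule ID and the 7-day retention are just illustrative values, not something we've applied yet):

```json
{
  "Rules": [
    {
      "ID": "expire-noncurrent-versions",
      "Status": "Enabled",
      "Filter": {},
      "NoncurrentVersionExpiration": {
        "NoncurrentDays": 7
      }
    }
  ]
}
```

As we understand it, this would only delete noncurrent object versions (the ones left behind by versioning), not the current objects that Delta tables actively reference.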
PS: We are already running VACUUM on the tables in our catalog, but we need general guidance on how to manage this situation to save on storage costs without impacting any current flow.
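For reference, this is roughly what we run today (the table name and retention window are illustrative, not our actual values):

```sql
-- Remove data files no longer referenced by the Delta table,
-- keeping the default 7-day (168-hour) retention window
VACUUM main.sales.orders RETAIN 168 HOURS;
```

Our understanding is that VACUUM only deletes the *current* version of unreferenced data files; with bucket versioning enabled, S3 keeps noncurrent versions of those deleted objects, which is why we're asking whether a lifecycle rule is still needed on top of it.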