I have a Hive table in Delta format with over 1B rows. When I check the Data Explorer in the SQL section of Databricks, it reports the table size as 139.3 GiB across 401 files. However, the S3 location backing the table (dbfs:/user/hive/warehouse/large_table) holds over 110 TB and contains more than 100K files.
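For reference, here is roughly how I'm comparing the two numbers (a minimal sketch for a Databricks notebook, where `spark` and `dbutils` are predefined; `default.large_table` is an assumed metastore name, adjust as needed):

```python
# Size and file count as tracked by the current Delta snapshot
detail = spark.sql("DESCRIBE DETAIL default.large_table").collect()[0]
print(f"Delta snapshot: {detail['numFiles']} files, "
      f"{detail['sizeInBytes'] / 1024**3:.1f} GiB")

# Everything physically under the table path, including files that are
# no longer referenced by the current snapshot
def du(path):
    total_bytes, total_files = 0, 0
    for entry in dbutils.fs.ls(path):
        if entry.isDir():
            b, f = du(entry.path)
            total_bytes, total_files = total_bytes + b, total_files + f
        else:
            total_bytes += entry.size
            total_files += 1
    return total_bytes, total_files

b, f = du("dbfs:/user/hive/warehouse/large_table")
print(f"On storage: {f} files, {b / 1024**4:.1f} TiB")
```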
Is it possible to reduce the storage used in the S3 bucket without losing any data in the table?
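From what I've read, a gap like this usually comes from old file versions kept around for time travel, and `VACUUM` removes data files no longer referenced by any table version within the retention window. Would something like the sketch below be safe, i.e. is it guaranteed not to touch files that the current table version still references? (Assuming the default 7-day retention; `default.large_table` is again an assumed name.)

```python
# Dry run first: lists the unreferenced files that would be deleted
spark.sql("VACUUM default.large_table RETAIN 168 HOURS DRY RUN").show(truncate=False)

# Actual cleanup, once the dry-run output looks right
# spark.sql("VACUUM default.large_table RETAIN 168 HOURS")
```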