It is best to avoid collecting stats on long string columns. You typically want to collect stats on columns that are used in filters, WHERE clauses, and joins, and on which you tend to perform aggregations; these are typically numeric columns.
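For example, min/max stats on a numeric column let Delta skip files whose value range cannot match a filter. A sketch with hypothetical table and column names:
- SELECT count(*) FROM sales WHERE order_total > 1000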
You can avoid collecting stats on long strings, and improve table processing time, by moving long string columns beyond the first 32 columns of a Delta table (stats are collected on the first 32 columns by default):
- ALTER TABLE <table_name> CHANGE COLUMN <col> <col> <type> AFTER <col_32>
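As a concrete sketch, assuming a table events with a long string column payload and a final column last_col, this moves payload past the 32-column boundary (note that repositioning columns on an existing Delta table generally requires column mapping to be enabled):
- ALTER TABLE events CHANGE COLUMN payload payload STRING AFTER last_col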
Alternatively, you can reduce the number of columns on which stats are collected by lowering the dataSkippingNumIndexedCols setting:
- SET spark.databricks.delta.properties.defaults.dataSkippingNumIndexedCols = 3
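This session-level conf only sets the default for tables created afterwards; for an existing table, you can set the corresponding Delta table property directly (table name hypothetical):
- ALTER TABLE events SET TBLPROPERTIES ('delta.dataSkippingNumIndexedCols' = '3')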