Hi,
I know that filtering a Delta table on a partition column is a very effective way to save query time, but what happens when that column appears inside a CONCAT in the WHERE clause?
Here is my case: I have a Delta table with a single partition column, call it col1. I need to query this table through an API request against a serverless SQL warehouse in Databricks SQL, and for my purposes it is simpler to implement the filter as a CONCAT of col1 with another column.
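To make it concrete, the two query shapes I am weighing look roughly like this (a minimal sketch; my_table, col2, the separator, and the literal values are just placeholders for my real schema):

```sql
-- Table is partitioned on col1 only.

-- The filter that is convenient for my API: the partition column
-- is buried inside a CONCAT expression.
SELECT *
FROM my_table
WHERE CONCAT(col1, '-', col2) = '2024-01-15-customerA';

-- The pruning-friendly alternative: filter col1 directly,
-- so the partition column appears on its own in a predicate.
SELECT *
FROM my_table
WHERE col1 = '2024-01-15'
  AND col2 = 'customerA';
```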
Is Spark's optimizer smart enough to recognize that the table is partitioned on one of the two columns inside the CONCAT and still prune partitions, or do I lose the benefit of the partitioning?
Thanks