I have a Parquet file with a column g1 with the following schema:
StructField(g1,IntegerType,true)
Now I run a query with a filter on g1.
What's weird is that in the SQL view of the Spark UI, Spark is reading all the rows from that file,
even though in the physical plan I can see the PushedFilters condition being set.
The data is created as a Delta table on DBFS.
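For reference, here's roughly what I'm doing (the table path and the literal value 42 are placeholders, not my actual names):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("pushdown-check")
  .getOrCreate()

// Read the Delta table from DBFS (path is a placeholder).
val df = spark.read.format("delta").load("dbfs:/path/to/my_table")

// Filter on the g1 column.
val filtered = df.filter(df("g1") === 42)

// The physical plan printed here shows something like
// PushedFilters: [IsNotNull(g1), EqualTo(g1,42)],
// yet the SQL view still reports all rows being read from the file.
filtered.explain(true)
```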
Any pointers on this would be helpful.
Thanks