Hi @Manjula_Ganesap, The behaviour of your code could be influenced by various factors, such as the state of your data, the specific operations you're performing, and the configuration of your environment.
From the provided information, here are a few points that might be relevant to your situation:
- Delta tables always return the most up-to-date information, so there is no need to call REFRESH TABLE manually after changes; this is handled automatically [source](https://docs.databricks.com/delta/best-practices.html).
- Delta tables track the set of partitions present in a table and update the list as data is added or removed, so there's no need to run ALTER TABLE [ADD|DROP] PARTITION or MSCK [source](https://docs.databricks.com/delta/best-practices.html).
- Directly modifying, adding, or deleting Parquet data files in a Delta table can lead to lost data or table corruption; always change data through Delta's own APIs instead (see the delete/vacuum sketch after this list) [source](https://docs.databricks.com/delta/best-practices.html).
- If best practices for Delta tables are not followed, table statistics can differ even when the tables hold identical data. With different statistics, Spark may generate a different query plan than it would if both tables had the same statistics (see the ANALYZE TABLE sketch after this list) [source](https://kb.databricks.com/delta/different-tables-with-same-data-generate-different-plans-when-used-i...).
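
On the third point, the safe route is to change data through Delta's own APIs rather than touching the Parquet files on storage. A minimal Python sketch, assuming a Databricks notebook where `spark` is already defined and a hypothetical table named `events` with an `event_date` column:

```python
from delta.tables import DeltaTable

# Load the table through the Delta API (table name is hypothetical).
events = DeltaTable.forName(spark, "events")

# Logically remove rows; the transaction log stays consistent,
# unlike deleting the underlying Parquet files by hand.
events.delete("event_date < '2023-01-01'")

# Physically remove data files no longer referenced by the log.
events.vacuum(retentionHours=168)  # 168h = the default 7-day retention
```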
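
On the statistics point, recomputing statistics on both tables usually brings the plans back in line. A minimal sketch, assuming two hypothetical tables `table_a` and `table_b` joined on a hypothetical `id` column:

```python
# Recompute column-level statistics so the optimizer sees both tables the same way.
spark.sql("ANALYZE TABLE table_a COMPUTE STATISTICS FOR ALL COLUMNS")
spark.sql("ANALYZE TABLE table_b COMPUTE STATISTICS FOR ALL COLUMNS")

# Inspect the cost-based plan; with matching statistics the plans should converge.
spark.table("table_a").join(spark.table("table_b"), "id").explain(mode="cost")
```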
That said, it's hard to give a more precise answer without more specific information about your code and the operations you're performing.