But the same Python code works fine when executed outside of a DLT pipeline. When I run the following in an interactive notebook, it returns the source columns plus the CDF metadata columns (_change_type, _commit_version, _commit_timestamp), which is expected because I am reading with the readChangeFeed option.
spark.read.option("readChangeFeed", "true").option("startingVersion", 1).table("<source_table_name>")
The problem I described occurs only when the same read is executed within a DLT pipeline, which is strange.
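
For reference, this is roughly how the read looks inside the DLT pipeline (a minimal sketch; the table name "silver_changes" and the function name are hypothetical and not from my actual pipeline):

import dlt

@dlt.table(name="silver_changes")
def silver_changes():
    # Same readChangeFeed read as in the interactive notebook,
    # but here the CDF metadata columns are not returned
    return (
        spark.read
        .option("readChangeFeed", "true")
        .option("startingVersion", 1)
        .table("<source_table_name>")
    )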