>As the delta log of the target table already captures the table metadata, the target dataframe schema is validated against this metadata, correct?
Yes! The Delta transaction log stores the table's schema, and any write to the table is validated against it (schema enforcement). A mismatched dataframe schema fails the write unless you explicitly opt in to schema evolution.
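A minimal sketch of that enforcement, assuming a running Spark session with Delta Lake; the function and table names are hypothetical:

```python
def append_with_enforcement(df, table_name):
    """Append df to a Delta table. Delta compares df.schema against the
    schema recorded in the table's transaction log; a mismatch (e.g. a
    new column) raises an AnalysisException unless schema evolution is
    explicitly enabled via the mergeSchema option."""
    (df.write
       .format("delta")
       .mode("append")
       # Uncomment to opt in to additive schema evolution:
       # .option("mergeSchema", "true")
       .saveAsTable(table_name))
```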

>In case of Autoloader it throws an error on column addition due to the default schema evolution mode right whereas in delta streaming tables as source errors occur in case of non-additive schema changes could you please confirm if my understanding is correct?
Yes to both, but the reasons are different. Autoloader's default schema evolution mode (`addNewColumns`) deliberately stops the stream when it detects a new column in the incoming files. It does this so it can update the inferred schema at the `schemaLocation` before continuing. It's a controlled pause rather than a real error: restart the stream and it picks up with the new column included. You can change this behavior with `cloudFiles.schemaEvolutionMode` if you want new columns handled differently.
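Here's a sketch of where those options plug in, assuming a Databricks/Spark session; the paths and function name are placeholders:

```python
def start_autoloader_stream(spark, source_path, schema_path, checkpoint_path, table):
    """Read cloud files with Auto Loader. With the default
    schemaEvolutionMode ("addNewColumns"), the stream stops when a new
    column appears so the schema stored at schemaLocation can be updated;
    restarting the stream then resumes with the new column included."""
    df = (
        spark.readStream
        .format("cloudFiles")
        .option("cloudFiles.format", "json")
        # Where Auto Loader persists the inferred schema between runs.
        .option("cloudFiles.schemaLocation", schema_path)
        # Other modes include "rescue" (route new columns to
        # _rescued_data), "failOnNewColumns", and "none".
        .option("cloudFiles.schemaEvolutionMode", "addNewColumns")
        .load(source_path)
    )
    return (
        df.writeStream
        .option("checkpointLocation", checkpoint_path)
        .toTable(table)
    )
```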

So, Autoloader is stricter about column additions by default, while the Delta streaming source is stricter about column removals and renames.

>And in the case of a message bus like Kafka, the offset is the consumer group offset, right?
Yes and no. When you use Kafka as a streaming source, the offset tracked in the Spark checkpoint is essentially the Kafka offset (a topic, partition, offset tuple). However, Spark does not use Kafka's consumer group offset management: it manages its own offsets in the checkpoint directory, independently of Kafka's consumer group mechanism. So even if a consumer group is configured, Spark ignores it for offset tracking and relies entirely on its own checkpoint.
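A sketch of a Kafka source with checkpointing, assuming a running Spark session with the Kafka connector; the parameters are placeholders:

```python
def start_kafka_stream(spark, bootstrap_servers, topic, checkpoint_path, output_path):
    """Read from Kafka into a Delta sink. Spark records the consumed
    (topic, partition, offset) positions in its own checkpoint at
    checkpoint_path; any Kafka consumer-group offsets are ignored for
    progress tracking."""
    df = (
        spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", bootstrap_servers)
        .option("subscribe", topic)
        # Only consulted on the very first run, before a checkpoint exists;
        # afterwards the checkpoint decides where to resume.
        .option("startingOffsets", "earliest")
        .load()
    )
    return (
        df.writeStream
        .format("delta")
        .option("checkpointLocation", checkpoint_path)  # offsets live here
        .start(output_path)
    )
```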

Some documentation that can help you:
- Stream processing with Apache Kafka and Databricks
- Auto Loader configuration

Happy to keep going if you have more questions.

Hope this helps! If it does, could you please mark it as "Accept as Solution"? That will help other users quickly find the correct fix.