Hi,
I have a Spark streaming process that reads data from a Kafka topic and writes it to Azure Data Lake.
This is roughly how I implement the MERGE into the Delta table:
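(Simplified sketch with a foreachBatch upsert; the topic, paths, schema, and key column below are placeholders, not my real names.)

# Sketch of the MERGE process: Kafka -> parse JSON value -> upsert into Delta per micro-batch
from delta.tables import DeltaTable
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType

spark = SparkSession.builder.getOrCreate()

value_schema = StructType([
    StructField("id", StringType()),
    StructField("payload", StringType()),
])

def upsert_batch(batch_df, batch_id):
    # Upsert each micro-batch into the target Delta table, matching on the key column
    target = DeltaTable.forPath(spark, "abfss://container@account.dfs.core.windows.net/delta/my_table")
    (target.alias("t")
        .merge(batch_df.alias("s"), "t.id = s.id")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute())

parsed = (spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "my_topic")
    .option("startingOffsets", "earliest")
    .load()
    .select(from_json(col("value").cast("string"), value_schema).alias("data"))
    .select("data.*"))

(parsed.writeStream
    .foreachBatch(upsert_batch)
    .option("checkpointLocation", "abfss://container@account.dfs.core.windows.net/checkpoints/merge")
    .start())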
In addition, I have another streaming process that reads the same topic and simply writes the data to the Data Lake (no MERGE).
Data is never deleted from the Kafka topic, so a stream can always be re-initialized from the beginning of the topic.
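That second process is basically just parse-and-append, roughly like this (again a simplified sketch; I'm assuming here that the value is JSON parsed with from_json and that the sink is Delta, with placeholder paths):

# Sketch of the second process: plain append of the parsed topic into the Data Lake
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType

spark = SparkSession.builder.getOrCreate()

value_schema = StructType([
    StructField("id", StringType()),
    StructField("payload", StringType()),
])

parsed = (spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "my_topic")
    .option("startingOffsets", "earliest")   # the topic keeps all data, so it can replay from the start
    .load()
    .select(from_json(col("value").cast("string"), value_schema,
                      {"mode": "PERMISSIVE"}).alias("data"))
    .select("data.*"))

(parsed.writeStream
    .format("delta")
    .option("checkpointLocation", "abfss://container@account.dfs.core.windows.net/checkpoints/append")
    .start("abfss://container@account.dfs.core.windows.net/delta/raw_table"))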
A few days ago, another column was added to the messages in the Kafka topic, and now I have two issues:
1. In the MERGE process, I don't see that the new column has been added to the table, even though I have these options set:
.option("mergeSchema", "true")
option("mode", "FAILFAST")
2. In the second process, which I ran from the beginning of the topic, I get this error:
Malformed records are detected in record parsing. Current parse Mode: FAILFAST. To process malformed records as null result, try setting the option 'mode' as 'PERMISSIVE'.
This is despite the fact that in that process I explicitly set .option("mode", "PERMISSIVE").
Could it be that, because both processes read the same Kafka topic, the PERMISSIVE setting conflicts with the FAILFAST mode defined in the first process?
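For reference, this is how I understand the two parse modes behave with from_json, shown on a couple of made-up records (a small batch demo, not my real schema):

# Demo of PERMISSIVE vs FAILFAST parse modes (made-up records)
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import IntegerType, StringType, StructField, StructType

spark = SparkSession.builder.getOrCreate()

schema = StructType([
    StructField("id", IntegerType()),
    StructField("name", StringType()),
])

df = spark.createDataFrame(
    [('{"id": 1, "name": "ok"}',), ('{"id": 2, "name":',)],  # second record is broken JSON
    ["value"],
)

# PERMISSIVE: the malformed record comes back as a null struct instead of failing
df.select(from_json(col("value"), schema, {"mode": "PERMISSIVE"}).alias("d")).show(truncate=False)

# FAILFAST: the same record raises "Malformed records are detected in record parsing"
df.select(from_json(col("value"), schema, {"mode": "FAILFAST"}).alias("d")).show(truncate=False)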
1. How do I get the new column added to the Delta table in the MERGE process?
2. How do I get the second process running again, past this error?
Thanks