Understanding Trigger Intervals in Databricks Streaming Pipelines
When defining a streaming write, the trigger method specifies when the system should process the next set of data.
Triggers are specified when defining how data will be written to a sink; they control the frequency of micro-batches. By default, Spark automatically detects and processes all data that has been added to the source since the last trigger.
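As a minimal PySpark sketch of how a trigger is attached to a streaming write (the source, sink, and checkpoint paths below are placeholders, and Delta is assumed only for illustration):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Placeholder source path; any supported streaming source works the same way.
df = spark.readStream.format("delta").load("/path/to/source")

# If no trigger is set, Spark starts the next micro-batch as soon as the
# previous one finishes (checking for new data roughly every 500 ms).
query = (df.writeStream
           .format("delta")
           .option("checkpointLocation", "/path/to/checkpoint")  # placeholder
           .trigger(processingTime="10 seconds")  # fixed-interval micro-batches
           .start("/path/to/sink"))  # placeholder sink
```

Here `processingTime="10 seconds"` asks Spark to kick off a micro-batch every 10 seconds rather than as fast as possible.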
NOTE: Trigger.AvailableNow is a new trigger type, available in DBR 10.1 for Scala only and in DBR 10.2 and above for both Python and Scala.
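A brief sketch of what that looks like in Python (reusing the placeholder paths from the example above): with availableNow, the query processes all data available at start time, possibly across multiple micro-batches, and then stops on its own.

```python
# Process everything currently available, then stop (DBR 10.2+ for Python).
# In Scala this is .trigger(Trigger.AvailableNow).
query = (df.writeStream
           .format("delta")
           .option("checkpointLocation", "/path/to/checkpoint")  # placeholder
           .trigger(availableNow=True)
           .start("/path/to/sink"))  # placeholder sink
```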
Thanks
Aviral Bhardwaj