I am trying to run an incremental data-processing job packaged as a Python wheel.
The job is scheduled to run periodically, e.g. every hour.
For my code to know which data increment to process, I inject the {{start_time}} parameter via the command line, like so:
["end_date={{start_time}}"]
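For context, here is a minimal sketch of how my wheel's entry point reads that parameter (the `end_date=` prefix matches the snippet above; the `parse_end_date` helper name is my own, and I am assuming the value arrives as epoch milliseconds):

```python
import sys
from datetime import datetime, timezone

def parse_end_date(argv):
    """Extract the end_date=... argument injected by the scheduler."""
    for arg in argv:
        if arg.startswith("end_date="):
            raw = arg.split("=", 1)[1]
            # Assumption: the scheduler substitutes epoch milliseconds here;
            # adjust the parsing if it passes an ISO-8601 string instead.
            return datetime.fromtimestamp(int(raw) / 1000, tz=timezone.utc)
    raise ValueError("end_date argument not found")

if __name__ == "__main__":
    end_date = parse_end_date(sys.argv[1:])
```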
I have noticed two things:
* {{start_time}} seems to reflect when the scheduler actually woke up, not when it was meant to wake up. E.g. instead of the exact on-the-hour time, it can contain a value 2-3 seconds past the hour.
* When I run a job with two tasks that execute one after the other, each task gets a different {{start_time}} value. Since scheduling is done at the job level, not the task level, and you provide a feature for injecting the time into the job, I can't see the point of passing a different value to each task.
Each of these behaviors makes {{start_time}} too unreliable for processing time windows of data.
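For what it's worth, one workaround I can apply on my side is to floor the received timestamp to the schedule boundary, which masks the few-seconds drift (though it does nothing for the per-task inconsistency). A sketch, with my own helper name:

```python
from datetime import datetime, timezone

def floor_to_hour(ts: datetime) -> datetime:
    """Round a timestamp down to the start of its hour, so a run that
    fires a few seconds late still sees the planned on-the-hour time."""
    return ts.replace(minute=0, second=0, microsecond=0)

# e.g. a run that actually started at 14:00:02.731 UTC
actual = datetime(2024, 5, 1, 14, 0, 2, 731000, tzinfo=timezone.utc)
planned = floor_to_hour(actual)  # 14:00:00 UTC
```

This only works as long as the drift stays well under the schedule interval, which is why I would prefer the scheduler to pass the planned time directly.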
Other standard schedulers like Airflow and Prefect do pass the planned job trigger time to their jobs, and it is reliable enough for processing time windows.
See here and here.
Can you share the best practice for reliably injecting the planned trigger time into the job and all of its tasks?
Thanks