Hi everyone,
Thank you for your responses to my question.
@szymon_dybczak, if I understood correctly, your suggestion is based on running the Databricks job in continuous mode. However, that could incur significant costs, since the cluster would be running around the clock.
@filipniziol, your proposal seems like a viable solution. I would just like to get a clearer idea of the associated costs so that I can compare the two options.
To clarify, the first notebook is designed to run once a day to compute and update the JSON list. A second notebook is then needed to process that JSON and handle the post-processing, starting one hour before each entry's "time_to_send". A rough sketch of how that trigger could work is below.
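I am not sure this matches your proposal exactly, @filipniziol, but to anchor the cost discussion, here is a minimal sketch of one way the second step could be triggered: a lightweight driver (scheduled, say, hourly on a small single-node or serverless compute) reads the JSON and calls run_now on the processing job for any entry whose "time_to_send" falls within the next hour. This assumes the Databricks SDK for Python; the job ID, JSON path, field names, and notebook parameter are hypothetical placeholders, and the timestamps are assumed to be ISO 8601 with a timezone.

```python
# Sketch only: trigger the processing job one hour before each "time_to_send".
import json
from datetime import datetime, timedelta, timezone

from databricks.sdk import WorkspaceClient

w = WorkspaceClient()  # picks up credentials from the notebook/job context

PROCESSING_JOB_ID = 123456789           # hypothetical: ID of the processing job
JSON_PATH = "/dbfs/tmp/send_list.json"  # hypothetical: where the daily notebook writes

with open(JSON_PATH) as f:
    entries = json.load(f)  # assumed shape: [{"id": ..., "time_to_send": "..."}]

now = datetime.now(timezone.utc)
for entry in entries:
    send_at = datetime.fromisoformat(entry["time_to_send"])  # assumed tz-aware ISO 8601
    # Fire the processing run if we are inside the one-hour lead window.
    if timedelta(0) <= send_at - now <= timedelta(hours=1):
        w.jobs.run_now(
            job_id=PROCESSING_JOB_ID,
            notebook_params={"entry_id": str(entry["id"])},  # hypothetical parameter
        )
```

If something like this is what you had in mind, the cost side would come down to the driver's compute (a few minutes per poll on the smallest available compute) versus keeping a cluster up in continuous mode, which is exactly the comparison I am trying to make.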