I am using Databricks Community Edition with a limited cluster (a single driver: 15.3 GB memory, 2 cores, 1 DBU). I am running some custom algorithms that do continuous calculations, write the results to a Delta table every 15 minutes, and notify me by email over SMTP.
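For context, the notification step looks roughly like this (a sketch; the host, port, and credentials below are placeholders, not my real values):

```python
import smtplib
from email.message import EmailMessage


def build_notification(subject: str, body: str, sender: str, recipient: str) -> EmailMessage:
    # Build a plain-text email summarizing the latest batch of results.
    msg = EmailMessage()
    msg["Subject"] = subject
    msg["From"] = sender
    msg["To"] = recipient
    msg.set_content(body)
    return msg


def send_notification(msg: EmailMessage, host: str, port: int, user: str, password: str) -> None:
    # Send over SMTP with STARTTLS; host/user/password are placeholders here.
    with smtplib.SMTP(host, port) as smtp:
        smtp.starttls()
        smtp.login(user, password)
        smtp.send_message(msg)
```

Every 15 minutes, after the `df.write(...)` to the Delta table succeeds, I build and send one of these messages.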
The problem is that I intend to run calculations down to a particular depth (imagine building a hierarchical-deterministic wallet represented as a tree), and those calculations may take a few hours or even up to a day. But for some reason, my cluster is terminated after about 1 hour of processing.
I looked for solutions to similar issues, and suggestions like periodically running spark.sql("select 1") just to keep the cluster alive never worked for me, even when I ran it as a daemon process. And as I mentioned, I already df.write(...) results to my table every 15 minutes, but that activity doesn't keep the cluster alive long enough either.
So I am wondering whether there is any solution to my problem, i.e. another way to keep the cluster alive, or whether Community Edition users are simply limited to processing jobs of no more than 1 hour on a cluster.
Thanks in advance.