This is working as designed; it is the expected behavior.
When the cluster is in a terminated state, the logs are served by the Spark History Server hosted on the Databricks control plane.
When the cluster is up and running, the logs are served by the live Spark driver.
Because of this architecture, a terminated cluster shows logs for the last 30 days, while a running cluster shows logs only since its last start or restart.
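The behavior above can be sketched as a simple state-to-source mapping. This is an illustrative sketch only: the state names mirror the values the Databricks Clusters API reports (e.g. `RUNNING`, `TERMINATED`), and the helper function name is hypothetical, not part of any Databricks SDK.

```python
def log_source(cluster_state: str) -> str:
    """Return which component serves the Spark logs for a given cluster state.

    State names follow the Databricks Clusters API convention; the function
    itself is a hypothetical helper for illustration.
    """
    if cluster_state == "TERMINATED":
        # Terminated clusters: logs come from the Spark History Server on the
        # control plane, retained for roughly the last 30 days.
        return "Spark History Server (control plane, last 30 days)"
    # Running clusters: logs come from the live Spark driver, covering only
    # the period since the last start/restart.
    return "Spark driver (since last start/restart)"

print(log_source("TERMINATED"))
print(log_source("RUNNING"))
```

In other words, the same logs UI is backed by two different services depending on cluster state, which is why the visible retention window differs.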