We have a number of jobs in our Databricks workspaces. All job clusters are configured with a DBFS location for their logs (set under Job cluster -> "Advanced options" -> "Logging").
However, the logs are retained in DBFS indefinitely, even past 60 days (the retention period for job run history). Is it possible to configure a cleanup policy within the Databricks workspace that removes logs older than a defined threshold? If that isn't achievable within Databricks, what would be a best practice for doing it on Azure?
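
To illustrate what we're after, the sketch below is roughly what we could run ourselves as a scheduled notebook job if there is no built-in policy. The `dbfs:/cluster-logs/` path, the 60-day cutoff, and the reliance on `modificationTime` (available on `FileInfo` in recent runtimes) are assumptions on our side, not how our environment is necessarily laid out:

```python
# Rough sketch of a scheduled cleanup notebook. Runs inside Databricks, where
# dbutils is available implicitly. Paths and thresholds are placeholders.
import time

LOG_ROOT = "dbfs:/cluster-logs/"   # hypothetical log destination configured on the job clusters
RETENTION_DAYS = 60

# Cutoff in milliseconds since epoch, to compare against FileInfo.modificationTime.
cutoff_ms = (time.time() - RETENTION_DAYS * 24 * 60 * 60) * 1000

for entry in dbutils.fs.ls(LOG_ROOT):
    # Each cluster writes its logs into its own subdirectory under the destination.
    if entry.modificationTime < cutoff_ms:
        print(f"Deleting {entry.path}")
        dbutils.fs.rm(entry.path, True)  # recursive delete of the old log folder
```

We'd prefer a native retention setting over maintaining a job like this, hence the question.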