11-25-2024 12:55 PM
I am trying to send Databricks cluster logs to Grafana using an init script in which I define the systemd journals to consume. The issue I am facing is that I cannot get the driver logs (standard error and standard output) to reach Grafana. Is there something specific I need to enable at the cluster level to expose these logs to the system?
These are the journal relabel rules I defined:
discovery.relabel "journal_relabel_rules" {
  targets = []

  rule {
    source_labels = ["__journal__systemd_unit"]
    target_label  = "unit"
  }

  rule {
    source_labels = ["__journal__boot_id"]
    target_label  = "boot_id"
  }

  rule {
    source_labels = ["__journal__transport"]
    target_label  = "transport"
  }

  rule {
    source_labels = ["__journal_priority_keyword"]
    target_label  = "level"
  }
}
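For context, a discovery.relabel block on its own only defines relabel rules; nothing is scraped until a journal source references them and forwards entries to a writer. A minimal sketch of the two companion components (the Loki URL and the job label are placeholders, not values from this thread):

```
loki.source.journal "journal" {
  // Apply the relabel rules defined above to each journal entry.
  relabel_rules = discovery.relabel.journal_relabel_rules.rules
  forward_to    = [loki.write.default.receiver]
  labels        = { job = "systemd-journal" }
}

loki.write "default" {
  endpoint {
    // Replace with your actual Loki push endpoint.
    url = "http://loki:3100/loki/api/v1/push"
  }
}
```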
11-25-2024 03:38 PM
Are you following a guide to set this up that you can share with us for information? Alternatively, you could configure cluster log delivery to an S3 destination and read the logs from there.
11-26-2024 04:45 AM - edited 11-26-2024 04:47 AM
@Walter_C I followed this post but used the Alloy binary to send logs directly to Loki as the source, which Grafana then consumes. I chose not to send logs to S3 to avoid additional work, preferring to send them directly from the cluster. My question is whether I need to configure anything else on the cluster to expose the driver logs and forward them to Grafana.
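One likely reason a journal-only pipeline misses these logs: the Spark driver's stdout, stderr, and log4j output are written as plain files (typically under /databricks/driver/logs/, worth verifying on your cluster), not as systemd journal entries, so journald relabel rules never see them. A hedged sketch of tailing them with Alloy's file components, assuming a loki.write "default" component is already defined elsewhere in the config:

```
local.file_match "driver_logs" {
  // Assumed path for Databricks driver logs; confirm on your cluster.
  path_targets = [{
    "__path__" = "/databricks/driver/logs/*",
    "job"      = "databricks-driver",
  }]
}

loki.source.file "driver_logs" {
  targets    = local.file_match.driver_logs.targets
  forward_to = [loki.write.default.receiver]
}
```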
11-26-2024 06:20 AM
Is it collecting other logs and only missing those, or is it not collecting any logs at all?