- 18134 Views
- 8 replies
- 9 kudos
I am looking for something preferably similar to Windows Task Manager that we can use to monitor CPU, memory, and disk usage for the local desktop.
Latest Reply
Some important info to look at in the Ganglia UI CPU, memory, and server load charts to spot the problem: CPU chart: User %, Idle %. A high User % indicates heavy CPU usage in the cluster. Memory chart: Use %, Free %, Swap %. If you see a purple line ove...
7 More Replies
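The Ganglia rule of thumb in the reply above can be sketched as a simple check. This is a minimal illustration; the threshold value is an assumption of this sketch, not Databricks guidance:

```python
def diagnose_cpu(user_pct: float, idle_pct: float, high_threshold: float = 80.0) -> str:
    """Interpret Ganglia CPU chart values: a high User % means heavy CPU usage."""
    if user_pct >= high_threshold:
        return "heavy CPU usage in the cluster"
    if idle_pct >= high_threshold:
        return "cluster is mostly idle (possibly over-provisioned)"
    return "CPU usage looks normal"

print(diagnose_cpu(user_pct=92.0, idle_pct=5.0))
```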
- 585 Views
- 1 replies
- 0 kudos
Are there any event streams that are, or could be, exposed in AWS (such as CloudWatch/EventBridge events or SNS messages)? In particular I'm interested in events that detail jobs being run. The use case here would be for monitoring jobs from our web app...
Latest Reply
Yes, there are several event streams in AWS that can be used to monitor jobs being run. CloudWatch Events: This service allows you to set up rules to automatically trigger actions in response to specific events in other AWS service...
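As a rough illustration of the EventBridge route, here is a sketch of an event pattern for matching job-run events published to a custom bus. The source name and detail fields are assumptions of this sketch, not an official Databricks event schema:

```python
import json

# Hypothetical EventBridge event pattern for job-run state changes.
# "custom.databricks.jobs" and the detail fields are illustrative only.
job_run_pattern = {
    "source": ["custom.databricks.jobs"],
    "detail-type": ["Job Run State Change"],
    "detail": {"result_state": ["SUCCESS", "FAILED"]},
}

print(json.dumps(job_run_pattern, indent=2))
```

You would attach a pattern like this to an EventBridge rule whose target is, for example, an SNS topic or a Lambda that forwards to your web app.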
- 2266 Views
- 3 replies
- 0 kudos
I am very new to Databricks and just setting things up. I would like to explore various features of Databricks and start playing around with the environment. I am curious to know which metrics should be considered for monitoring the complete ...
Latest Reply
Databricks is a powerful platform for data engineering, machine learning, and analytics, and it is important to monitor the performance and health of your Databricks environment to ensure that it is running smoothly. Here are a few key metrics that yo...
2 More Replies
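One concrete starting point for environment monitoring is Spark's own REST monitoring API on the driver UI. The sketch below parses a sample response shape from `GET /api/v1/applications`; the sample data is illustrative, not captured from a real cluster:

```python
import json

# Illustrative response shape from Spark's monitoring REST API
# (GET http://<driver>:4040/api/v1/applications).
sample = json.loads("""
[
  {"id": "app-20240101120000-0001",
   "name": "my-streaming-job",
   "attempts": [{"completed": false, "duration": 123456}]}
]
""")

# List applications that still have an incomplete attempt (i.e. running).
running = [app["name"] for app in sample
           if any(not a["completed"] for a in app["attempts"])]
print(running)
```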
- 2547 Views
- 2 replies
- 4 kudos
Hi, is there a way to find out/monitor which users have used my cluster, for how long, and how many times, in an Azure Databricks workspace?
Latest Reply
Hello, you can activate audit logs (more specifically, cluster logs): https://learn.microsoft.com/en-us/azure/databricks/administration-guide/account-settings/azure-diagnostic-logs They can be very helpful for tracking all of these metrics.
1 More Replies
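Once the diagnostic logs are exported, per-user usage can be tallied from the records. A minimal sketch, assuming records with an `identity` email and an `actionName` field (the field names follow the general shape of these logs but should be verified against your actual export):

```python
from collections import Counter

# Illustrative audit-log records; real exports carry many more fields.
records = [
    {"identity": "alice@example.com", "actionName": "start"},
    {"identity": "bob@example.com", "actionName": "start"},
    {"identity": "alice@example.com", "actionName": "restart"},
]

# Count how many times each user started (or restarted) the cluster.
starts_per_user = Counter(r["identity"] for r in records
                          if r["actionName"] in ("start", "restart"))
print(starts_per_user.most_common())
```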
by
Lizz
• New Contributor II
- 1836 Views
- 2 replies
- 3 kudos
We have a Spark streaming application written in PySpark that we'd like to monitor with Datadog. By default, Datadog collects a couple of streaming metrics like 'spark.structured_streaming.processing_rate' and 'spark.structured_streaming.latency'. Ho...
Latest Reply
Hi @Liz Zhang, we haven't heard from you since the last response from @Shanmugavel Chandrakasu, and I was checking back to see if his suggestions helped you. Otherwise, if you have a solution, please share it with the community, as it can be helpful ...
1 More Replies
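For custom streaming metrics beyond what the integration collects out of the box, one common approach is to derive them from each query's progress report (`StreamingQuery.lastProgress`) and ship them via statsd. The sketch below is a pure function over a progress-style dict; the metric names under the prefix are our own convention, not Datadog's built-in ones:

```python
def progress_to_statsd(progress: dict, prefix: str = "spark.streaming") -> list:
    """Turn a StreamingQuery.lastProgress-style dict into statsd gauge lines.

    Only a few well-known progress fields are exported; the `prefix` and the
    resulting metric names are assumptions of this sketch.
    """
    lines = []
    for key in ("numInputRows", "inputRowsPerSecond", "processedRowsPerSecond"):
        if key in progress:
            lines.append(f"{prefix}.{key}:{progress[key]}|g")
    return lines

sample = {"numInputRows": 1200, "inputRowsPerSecond": 40.0}
print(progress_to_statsd(sample))
```

These lines could then be sent to the Datadog agent's statsd port from a periodic task or a streaming query listener.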
- 700 Views
- 0 replies
- 1 kudos
Adding these options

EXTRA_JAVA_OPTIONS = (
    '-Dcom.sun.management.jmxremote.port=9999',
    '-Dcom.sun.management.jmxremote.authenticate=false',
    '-Dcom.sun.management.jmxremote.ssl=false',
)

is enough in vanilla Apache Spark, but apparently it ...
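On Databricks, driver JVM flags are usually passed through the cluster's Spark config (`spark.driver.extraJavaOptions`) rather than an environment tuple. A minimal sketch of building that config value; whether the JMX port is actually reachable on a Databricks cluster depends on its networking, so treat this as a starting point only:

```python
# Same JMX flags as above, joined into a single string suitable for the
# `spark.driver.extraJavaOptions` Spark config key.
jmx_flags = (
    "-Dcom.sun.management.jmxremote.port=9999",
    "-Dcom.sun.management.jmxremote.authenticate=false",
    "-Dcom.sun.management.jmxremote.ssl=false",
)

extra_java_options = " ".join(jmx_flags)
print(extra_java_options)
```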
by
YFL
• New Contributor III
- 3094 Views
- 11 replies
- 6 kudos
Hi, I want to keep track of the streaming lag from the source table, which is a Delta table. I see that in the query progress logs there is some information about the last version and the last file in the version for the end offset, but this doesn't give ...
Latest Reply
Hey @Yerachmiel Feltzman, I hope all is well. Just wanted to check in: were you able to resolve your issue, or do you need more help? We'd love to hear from you. Thanks!
10 More Replies
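One rough way to quantify the lag asked about above is version-based: compare the Delta table's latest version with the version recorded in the stream's `endOffset`. The sketch below assumes the Delta source offset JSON carries a `reservoirVersion` field, which should be verified against your own progress logs:

```python
import json

def version_lag(progress: dict, latest_table_version: int) -> int:
    """Latest Delta table version minus the version in the stream's endOffset.

    Field names follow the general shape of the Delta source offset JSON;
    verify them against your actual query progress logs.
    """
    end_offset = json.loads(progress["sources"][0]["endOffset"])
    return latest_table_version - end_offset["reservoirVersion"]

# Illustrative progress record with a Delta-style endOffset.
sample_progress = {
    "sources": [{"endOffset": json.dumps({"reservoirVersion": 17, "index": 3})}]
}
print(version_lag(sample_progress, latest_table_version=20))  # → 3
```

Note this measures versions behind, not rows or wall-clock time, so it is only a proxy for true lag.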
- 1190 Views
- 0 replies
- 2 kudos
How can I integrate Databricks clusters with Prometheus? I tried adding the following Spark property to my cluster but cannot find the Prometheus metrics endpoints. Any thoughts?
spark.ui.prometheus.enabled = true
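For context on the question above: in open-source Spark, setting `spark.ui.prometheus.enabled=true` exposes executor metrics on the driver UI under `/metrics/executors/prometheus`. On Databricks the driver UI is proxied, so the reachable URL may differ; the sketch below only shows where the endpoint lives in vanilla Spark:

```python
def prometheus_endpoint(driver_host: str, ui_port: int = 4040) -> str:
    """Executor-metrics Prometheus endpoint on the Spark driver UI
    (open-source Spark; on Databricks the UI is proxied, so adjust the URL)."""
    return f"http://{driver_host}:{ui_port}/metrics/executors/prometheus"

print(prometheus_endpoint("10.0.0.5"))
```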