06-02-2024 10:58 PM - edited 06-03-2024 02:46 AM
I want to fetch compute metrics (hardware, GPU, and Spark) and use them in a dashboard on Databricks, but I'm not able to fetch them. I have tried a GET API request and the system tables. The system tables only have CPU utilization and memory utilization per node, but I want them at the granular level shown in the graphs on the compute metrics screen. How can I fetch these details and download them in JSON or CSV format?
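If the per-node time series from the system tables (e.g. `system.compute.node_timeline`, assuming system tables are enabled in your workspace) is close enough to what the UI graphs show, one approach is to query it in a notebook, collect the rows, and write them out as CSV or JSON with the standard library. The sketch below only shows the export step; the table and column names are assumptions based on the system-tables schema, not a confirmed API.

```python
import csv
import json

def export_node_metrics(rows, csv_path, json_path):
    """Write per-node utilization rows to both CSV and JSON.

    `rows` is a list of dicts, e.g. collected in a notebook with something
    like spark.sql("SELECT ... FROM system.compute.node_timeline") followed
    by [r.asDict() for r in df.collect()] (hypothetical query; verify the
    table and columns against your workspace's system-tables schema).
    """
    if not rows:
        return 0
    fieldnames = list(rows[0].keys())
    with open(csv_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows)
    with open(json_path, "w") as f:
        json.dump(rows, f, indent=2, default=str)
    return len(rows)

# Sample rows shaped like per-node utilization data (illustrative values)
sample = [
    {"cluster_id": "0601-abc", "instance_id": "node-1",
     "start_time": "2024-06-02T22:00:00", "cpu_user_percent": 41.5,
     "mem_used_percent": 62.0},
    {"cluster_id": "0601-abc", "instance_id": "node-2",
     "start_time": "2024-06-02T22:00:00", "cpu_user_percent": 37.2,
     "mem_used_percent": 58.4},
]
export_node_metrics(sample, "node_metrics.csv", "node_metrics.json")
```

Scheduling this in a Databricks Job would give you a regularly refreshed export without recording anything from the UI by hand.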
06-03-2024 09:23 PM
Hey, thanks for the response. I'm aware that these metrics are available in the compute metrics UI, but I want data I can export and use for analysis. Is there any way to get the data that is used to create these graphs? Recording it manually is difficult because the data keeps updating every hour. Could you please provide any insights on external tools that can automate data extraction from the UI?
06-26-2024 04:50 AM
I am also curious whether there is any way to fetch this data programmatically (and not just from the UI). That would be highly valuable. Thanks!
06-10-2024 05:16 AM
Replying to follow this thread for updates.
09-19-2024 12:55 AM
Can you tell us which third-party tools you are referring to?
01-06-2025 09:16 AM
How can we store the CPU and memory metrics for GCP Databricks centrally, set up alerts in case usage is high, and monitor performance?
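One hedged approach for the alerting part: land the per-node metrics in a central store (a Delta table, or an external monitoring system) on a schedule, then run a simple threshold check as a scheduled job and notify when it fires. The sketch below shows only the threshold logic; the metric shape, limits, and node identifiers are illustrative assumptions, not Databricks-provided names.

```python
def check_thresholds(samples, cpu_limit=90.0, mem_limit=85.0):
    """Return alert messages for nodes whose average usage exceeds limits.

    `samples` maps a node id to a list of (cpu_percent, mem_percent)
    readings, e.g. collected hourly from your central metrics store
    (hypothetical layout; adapt to however you land the data).
    """
    alerts = []
    for node, readings in samples.items():
        if not readings:
            continue
        avg_cpu = sum(c for c, _ in readings) / len(readings)
        avg_mem = sum(m for _, m in readings) / len(readings)
        if avg_cpu > cpu_limit:
            alerts.append(f"{node}: avg CPU {avg_cpu:.1f}% > {cpu_limit}%")
        if avg_mem > mem_limit:
            alerts.append(f"{node}: avg memory {avg_mem:.1f}% > {mem_limit}%")
    return alerts

# Example: node-1 runs hot on CPU, node-2 is fine
usage = {
    "node-1": [(95.0, 50.0), (97.0, 55.0)],
    "node-2": [(20.0, 30.0)],
}
print(check_thresholds(usage))
```

The same check could instead be expressed as a Databricks SQL alert over the stored table; the Python version is just easier to embed in a scheduled job that posts to email or a webhook.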