by Chinu • New Contributor III
- 916 Views
- 1 reply
- 1 kudos
Hi Team, is it possible to use the "query_start_time_range" filter in the API call to get query data only from the last 5 minutes? I'm using telegraf to call the Query History API, but it looks like I'm hitting the maximum number of results returned, and I can't find how to use...
Latest Reply
Have you checked this: https://docs.databricks.com/api-explorer/workspace/queryhistory/list? You can list queries based on a time range as well, so you can try passing those fields in the filter_by parameter. Then pass the value as (current time - 5 m...
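A minimal sketch of how that filter might be built, assuming the epoch-milliseconds field names shown on the api-explorer page linked above (host and token handling are omitted; max_results is an illustrative value):

```python
# Sketch: build a Query History API filter covering the last N minutes.
# The API expects start/end times as epoch timestamps in milliseconds.
import json
import time

def build_time_range_filter(minutes: int = 5) -> dict:
    """Return a filter_by payload for the trailing `minutes` minutes."""
    end_ms = int(time.time() * 1000)
    start_ms = end_ms - minutes * 60 * 1000
    return {
        "filter_by": {
            "query_start_time_range": {
                "start_time_ms": start_ms,
                "end_time_ms": end_ms,
            }
        },
        "max_results": 100,  # illustrative page size, not a required value
    }

payload = build_time_range_filter(5)
print(json.dumps(payload, indent=2))
```

With the `requests` library, this payload could then be sent to `https://<workspace-host>/api/2.0/sql/history/queries` with a bearer-token Authorization header; the exact transport is up to your telegraf setup.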
by Chinu • New Contributor III
- 326 Views
- 0 replies
- 0 kudos
I know the Query History API provides a filter_by option with start and end times in milliseconds, but I was wondering if I can get only the last 5 minutes of query data every time I run the API call (using telegraf to call the API). Is it possible to use relative dat...
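As far as the public docs show, the API takes absolute millisecond timestamps rather than a relative-date syntax, so the usual workaround is to compute the window client-side on every poll. A small sketch of such a helper (the function name and telegraf wiring are assumptions, not part of the API):

```python
# Sketch: compute a trailing time window in epoch milliseconds on each
# poll, e.g. from a telegraf exec-style input script.
import time

def trailing_window_ms(minutes: int, now_ms=None):
    """Return (start_ms, end_ms) for the trailing `minutes` minutes.

    `now_ms` can be injected for testing; by default the current time
    is used, so repeated calls naturally produce a sliding window.
    """
    end_ms = int(time.time() * 1000) if now_ms is None else now_ms
    return end_ms - minutes * 60_000, end_ms

start_ms, end_ms = trailing_window_ms(5)
```

Each scheduled run recomputes the pair, which effectively gives "now minus 5 minutes" semantics without any relative-date support in the API itself.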
by Chinu • New Contributor III
- 799 Views
- 1 reply
- 1 kudos
Hi, I'm trying to pull query history filtered by warehouse ID, but my URL is not working. Do you have an example of what the URL should look like? I tried this --> https://**.cloud.databricks.com/api/2.0/sql/history/queries?filter_by={"warehouse_id":"193b...
Latest Reply
Chinu • New Contributor III
Oh, looks like I need to send this as the raw request body instead: { "filter_by": { "warehouse_ids": "193b15a590ed23d2" } }
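A sketch of that working approach: put the filter in the JSON request body rather than the URL query string. The body mirrors the one in the reply above (note the plural field name `warehouse_ids`); the workspace host and token in the commented request are placeholders:

```python
# Sketch: filter Query History by warehouse via the JSON request body.
import json

body = {
    "filter_by": {
        # Field is warehouse_ids (plural), even when filtering on one id.
        # The id below is the one from the post, used as an example value.
        "warehouse_ids": "193b15a590ed23d2"
    },
    "max_results": 100,  # illustrative page size
}

# With the `requests` library this would be sent roughly as:
# requests.get(
#     "https://<workspace>.cloud.databricks.com/api/2.0/sql/history/queries",
#     headers={"Authorization": "Bearer <token>"},
#     json=body,
# )
print(json.dumps(body))
```

The earlier attempt failed because a JSON object placed directly in the URL (`?filter_by={...}`) is not how this endpoint parses its filter.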
- 3578 Views
- 12 replies
- 3 kudos
We have a SQL workspace with a cluster running that services a number of self-service reports against a range of datasets. We want to be able to analyse and report on the queries our self-service users are executing so we can get better visibility of...
Latest Reply
Hey there @Alex Davies! Hope you are doing great. Just checking in to see if you were able to resolve your issue, or do you need more help? We'd love to hear from you. Thanks!
11 More Replies
- 5076 Views
- 11 replies
- 1 kudos
Hi! I have some jobs that stay idle for some time when getting data from an S3 mount on DBFS. These are all SQL queries on Delta. How can I find where the bottleneck is (duration, queue?) to diagnose the slow Spark performance that I think is on the proc...
Latest Reply
We found out we were regenerating the symlink manifest for all the partitions in this case, and for some reason it was executed twice, at the start and end of the job: delta_table.generate('symlink_format_manifest'). We configured the table with: ALTER TABLE ...
10 More Replies
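The fix described in that reply can be sketched as follows. The table name is a placeholder, and the statements would be run via `spark.sql()` on Databricks; the `delta.compatibility.symlinkFormatManifest.enabled` property comes from the Delta Lake docs and makes Delta update the manifest incrementally on each write, so the job no longer needs to regenerate it in full:

```python
# Sketch: replace repeated full symlink-manifest regeneration with
# Delta's incremental auto-update table property.

TABLE = "my_db.my_table"  # placeholder table name

# One-off full generation (what the job above was running twice):
generate_stmt = f"GENERATE symlink_format_manifest FOR TABLE {TABLE}"

# Property so Delta keeps the manifest up to date automatically,
# making the explicit regeneration at job start/end unnecessary:
alter_stmt = (
    f"ALTER TABLE {TABLE} SET TBLPROPERTIES ("
    "delta.compatibility.symlinkFormatManifest.enabled = true)"
)

# On a Databricks cluster these would be executed as, e.g.:
# spark.sql(alter_stmt)
print(generate_stmt)
print(alter_stmt)
```

With the property set, the one-off GENERATE is only needed once to create the initial manifest.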
- 1498 Views
- 4 replies
- 5 kudos
We have a SQL workspace with a cluster running that services a number of self-service reports against a range of datasets. We want to be able to analyse and report on the queries our self-service users are executing so we can get better visibility of...
Latest Reply
Looks like the people have spoken: the API is your best option! (Thanks @Werner Stinckens, @Chris Grabiel, and @Bilal Aslam!) @eni chante, let us know if you have questions about the API! If not, please mark one of the replies above as the "best answ...
3 More Replies