Hi @Michał ,
One detail/feature to consider when working with Declarative Pipelines is that they manage and auto-tune configuration aspects, including rate limiting (maxBytesPerTrigger or maxFilesPerTrigger). Perhaps that's why you could not see this...
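For reference, here is a minimal sketch of where those rate-limiting knobs would normally go on a plain Auto Loader / Structured Streaming read (the table name and source path below are made up); in a Declarative Pipeline the framework may manage or override them, which would explain why you don't see their effect:

import dlt

@dlt.table(name="raw_events")  # hypothetical table name
def raw_events():
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        # Rate-limiting options you would set yourself on a regular streaming job;
        # Declarative Pipelines may auto-tune or ignore these.
        .option("cloudFiles.maxFilesPerTrigger", 100)
        .option("cloudFiles.maxBytesPerTrigger", "1g")
        .load("/Volumes/main/default/landing/")  # hypothetical source path
    )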
hi @RikL
Thank you for reaching out.
It doesn't seem you are doing anything wrong. Per the documentation, the event_log spec should indeed be returned when you call pipelines.get and then read the spec.
I was able to test and confirm the correct behavio...
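For reference, a minimal sketch of how this can be checked with the Databricks SDK for Python (the pipeline ID below is a placeholder):

from databricks.sdk import WorkspaceClient

w = WorkspaceClient()
pipeline = w.pipelines.get(pipeline_id="<your-pipeline-id>")  # placeholder ID

# The returned spec carries the pipeline settings, including the event_log definition
print(pipeline.spec.event_log)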
Hi @Pat
I hope you are doing well, and thank you for reaching out.
As you mentioned, the endpoint for AlertsV2 does not provide an explicit action for sharing. This is handled through ACLs/permissions within the Databricks SQL API group:
/api/2.0/previ...
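As a rough sketch of the ACL payload shape only (the endpoint path is truncated above, so the URL, identifiers, and permission level here are placeholders; check the docs for the exact alerts permissions endpoint):

import requests

host = "https://<your-workspace>"   # placeholder workspace URL
token = "<token>"                   # placeholder PAT/OAuth token
alert_id = "<alert-id>"             # placeholder alert ID

resp = requests.patch(
    f"{host}/api/2.0/<alerts-permissions-endpoint>/{alert_id}",  # placeholder path
    headers={"Authorization": f"Bearer {token}"},
    json={
        "access_control_list": [
            # permission_level is illustrative; use the level listed in the docs
            {"user_name": "teammate@example.com", "permission_level": "CAN_RUN"}
        ]
    },
)
resp.raise_for_status()
print(resp.json())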
Hello @apurvasawant
I'm sorry you are seeing this behavior while using Jobs. These messages definitely don't help much.
When this happens, I suggest taking a step back, reviewing the configuration of your Job, and doing some troubleshooting:
What is th...
hi @Dharinip
Cleaning up the shared log shows "Before" and "After" fingerprints. Hopefully this can give us more info on how to proceed.
Before
[
  {
    "id": 12
  },
  {
    "qualifier": []
  },
  {
    "class": "GreaterThan",
    "num-children": 2,
    ...
...