Once a private endpoint rule is deactivated, it isn't immediately removed. Instead, it will be scheduled for purging after a set time period. In your case, the rule is slated for purging at the timestamp mentioned. This situation can occur in scena...
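If you want to confirm the state of the rule yourself, here is a rough sketch using the Account-level Network Connectivity API over plain REST. The URL path and the `deactivated` / `deactivated_at` field names are assumptions on my side; please check them against the current API reference for your cloud before relying on them.

```python
# Hypothetical sketch: inspect a deactivated private endpoint rule via the
# Databricks Account API. Path and response field names are assumptions.
import requests

ACCOUNT_ID = "<account-id>"                     # placeholder
NCC_ID = "<network-connectivity-config-id>"     # placeholder
RULE_ID = "<private-endpoint-rule-id>"          # placeholder
TOKEN = "<account-level-token>"                 # placeholder

url = (
    f"https://accounts.azuredatabricks.net/api/2.0/accounts/{ACCOUNT_ID}"
    f"/network-connectivity-configs/{NCC_ID}/private-endpoint-rules/{RULE_ID}"
)
resp = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()
rule = resp.json()

# A deactivated rule stays visible here until the scheduled purge removes it.
print(rule.get("deactivated"), rule.get("deactivated_at"))
```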
Hi @omjohn, can you try downgrading to Databricks Runtime 13.3 LTS? It uses Spark 3.4.x, which is officially supported by sparklyr 1.8.1, so I believe it would provide a more stable and better-tested integration.
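As a quick illustration, here is a minimal sketch of creating a cluster pinned to the 13.3 LTS runtime through the Clusters REST API. The host, token, and node type are placeholders; adjust them for your workspace.

```python
# Pin a cluster to Databricks Runtime 13.3 LTS (Spark 3.4.x).
import requests

HOST = "https://<your-workspace>.azuredatabricks.net"   # placeholder
TOKEN = "<personal-access-token>"                        # placeholder

cluster_spec = {
    "cluster_name": "sparklyr-13-3-lts",
    "spark_version": "13.3.x-scala2.12",   # runtime key for DBR 13.3 LTS
    "node_type_id": "Standard_DS3_v2",     # example Azure node type
    "num_workers": 2,
}

resp = requests.post(
    f"{HOST}/api/2.0/clusters/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=cluster_spec,
)
resp.raise_for_status()
print(resp.json()["cluster_id"])
```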
Hello @anilsampson, good day! I believe these are the options you can try: instead of using a timestamp, you can reference an explicit version number directly in your queries. This completely avoids dynamic date handling or any need for v...
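For example, a Delta table can be read at a pinned version number rather than a timestamp; the table name and version below are purely illustrative.

```python
# Query a Delta table by explicit version instead of a timestamp.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# List available versions of the table first.
spark.sql("DESCRIBE HISTORY my_catalog.my_schema.my_table").show()

# Read a specific, pinned version (no date arithmetic involved).
df = (
    spark.read.format("delta")
    .option("versionAsOf", 5)
    .table("my_catalog.my_schema.my_table")
)

# Equivalent SQL form.
spark.sql(
    "SELECT * FROM my_catalog.my_schema.my_table VERSION AS OF 5"
).show()
```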
Hi @sachamourier, I see that you have quota available for Standard DDSv5 Family vCPUs. Is your cluster using this exact node type? The QuotaExceeded error typically indicates that your request for additional resources for a specific VM size exceeds t...
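To compare your current regional usage against the limit for the family your node type belongs to, here is a rough sketch with the Azure SDK for Python. The family name string and region are assumptions on my side; QuotaExceeded fires when current usage plus the requested cores would exceed the limit for that family.

```python
# Check regional vCPU quota usage for the VM family behind your node type.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

subscription_id = "<subscription-id>"   # placeholder
location = "westeurope"                 # placeholder region

client = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

for usage in client.usage.list(location):
    # e.g. the DDSv5 family counter (assumed spelling) and the regional total.
    if "DDSv5" in usage.name.value or usage.name.value == "cores":
        print(usage.name.localized_value, usage.current_value, "/", usage.limit)
```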