"One possible workaround could be to (1) temporarily enable the IP Access List feature, (2) add the necessary IP addresses to the list, and then (3) disable the feature again. This way, you can add the IP addresses you need without blocking the current IP address."
This will not work. When we enable the IP whitelist, from that moment forward, once we add a single list (step 2), that list immediately becomes active, with the risk of locking us out (as that list might not contain the pipeline agent's IP). This is in fact what this API check should prevent, and in that scenario it works as intended.
The check is too aggressive though, as it also runs when the access list feature is disabled (which it shouldn't). If the feature is disabled, you should be free to add any IP you like, as the lists are not actively enforced anyway. The check should only take place once you enable the access list feature (to prevent locking out the active caller).
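For clarity, here is a minimal reproduction sketch in Python of what I mean, assuming the documented `/api/2.0/workspace-conf` and `/api/2.0/ip-access-lists` endpoints and that `DATABRICKS_HOST` / `DATABRICKS_TOKEN` are set for a workspace admin; the label and IP range are placeholders, not my real values:

```python
import os
import requests

HOST = os.environ["DATABRICKS_HOST"].rstrip("/")
HEADERS = {"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"}

# Confirm the IP access list feature is currently disabled for this workspace.
conf = requests.get(
    f"{HOST}/api/2.0/workspace-conf",
    headers=HEADERS,
    params={"keys": "enableIpAccessLists"},
)
print(conf.json())  # expected: {"enableIpAccessLists": "false"}

# Try to create an ALLOW list that does not contain the caller's own IP.
# Even though enforcement is off, the API still rejects this call with an
# error saying the current IP would not be allowed under the new list.
resp = requests.post(
    f"{HOST}/api/2.0/ip-access-lists",
    headers=HEADERS,
    json={
        "label": "office-only",              # placeholder label
        "list_type": "ALLOW",
        "ip_addresses": ["203.0.113.0/24"],  # example range, not the agent's IP
    },
)
print(resp.status_code, resp.text)
```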
"Another option could be to contact the support team for the API or the platform you're using."
What team would that be? From my perspective the platform is Databricks and the API is the IP Access List API (so, you guys). Is there a team within Databricks that can try to reproduce this behavior and put the bug on the backlog?
"In any case, it's important to ensure that the security of your workspace is maintained, so it's good that you're taking measures to control access to it."
Yes, I agree! It would be nice if Databricks could fix this bug, so that the API becomes more stable and robust for all users.
For other readers: I currently work around this issue by first adding an access list that allows 0.0.0.0/0 (any IP) as a pre-processing step. Then I run the actual pipeline that creates the real lists. As a post-processing step I delete the 0.0.0.0/0 list again.
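A rough sketch of that pre-/post-process, under the same assumptions as the snippet above (endpoints per the public IP Access List API docs, admin token in environment variables); the response shape and the pipeline step in the middle are assumptions, so adapt to your own setup:

```python
import os
import requests

HOST = os.environ["DATABRICKS_HOST"].rstrip("/")
HEADERS = {"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"}


def add_allow_all() -> str:
    """Preprocess: create a temporary ALLOW list for 0.0.0.0/0 so the agent cannot lock itself out."""
    resp = requests.post(
        f"{HOST}/api/2.0/ip-access-lists",
        headers=HEADERS,
        json={
            "label": "temp-allow-all",       # placeholder label
            "list_type": "ALLOW",
            "ip_addresses": ["0.0.0.0/0"],
        },
    )
    resp.raise_for_status()
    # Assumes the create response wraps the new list in "ip_access_list".
    return resp.json()["ip_access_list"]["list_id"]


def remove_allow_all(list_id: str) -> None:
    """Postprocess: delete the temporary allow-all list once the real lists are in place."""
    resp = requests.delete(f"{HOST}/api/2.0/ip-access-lists/{list_id}", headers=HEADERS)
    resp.raise_for_status()


if __name__ == "__main__":
    temp_id = add_allow_all()
    try:
        # ... run the actual pipeline here that creates/updates the real access lists ...
        pass
    finally:
        remove_allow_all(temp_id)
```

Obviously the 0.0.0.0/0 window briefly opens the workspace to any IP, so treat this strictly as a stopgap until the check itself is fixed.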