On the afternoon of 2025-07-30, my team began experiencing failures in pipeline tasks configured for full refresh (and only full refresh). These pipelines were defined to use serverless compute, and the only way we could get them back online was to switch them back to classic compute. I was unable to find much information about the error. Has anyone else run into this, or have any idea what is occurring?
Error Message:
org.apache.spark.sql.streaming.StreamingQueryException: [STREAM_FAILED] Query terminated with exception: Dynamic admission control isn't available for batch 0 SQLSTATE: XXKST
To repeat: this only affected full-refresh pipeline tasks that were set to run in a serverless environment.
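For anyone hitting the same error, the workaround that got us running again was disabling serverless in the pipeline settings and defining a classic cluster instead. As a rough sketch only (the pipeline name and node type below are placeholders, and the exact fields depend on your workspace and cloud), the relevant part of the pipeline settings JSON looked something like:

```json
{
  "name": "my_full_refresh_pipeline",
  "serverless": false,
  "clusters": [
    {
      "label": "default",
      "node_type_id": "i3.xlarge",
      "num_workers": 2
    }
  ]
}
```

With `"serverless": false` and an explicit cluster block, the full-refresh tasks completed normally again, which is why I suspect the issue is specific to the serverless execution path.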