12-01-2022 07:31 PM
Hi there. I encountered an issue when trying to create my Delta Live Tables pipeline.
The error is "DataPlaneException: Failed to launch pipeline cluster 1202-031220-urn0toj0: Could not launch cluster due to cloud provider failures. azure_error_code: OperationNotAllowed, azure_error_message: Operation could not be completed as it results in exceeding approved standardFSF...".
However, when I checked my Azure subscription, it showed that I had plenty of quota available. I don't know how to fix this issue, as I'm new to Delta Live Tables.
12-02-2022 12:01 AM
Hi @Simon Xu,
You're not alone.
I encountered this issue before. The issue comes from the Azure side, not Databricks.
Check the number of cores, RAM, and CPUs your cluster requests, then compare that against the resources available in the Azure resource group hosting your Databricks workspace. If you don't have enough, request a quota increase from Azure.
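For a quick check, the Azure CLI can list each VM family's current usage against its limit (the region below is just an example):
az vm list-usage --location eastus --output table
Given the "standardFSF..." prefix in your error, the row to compare is likely Standard FS Family vCPUs.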
Feel free to ask me if you have any questions 😀
BR,
Jensen Nguyen
12-01-2022 11:53 PM
Hi @Simon Xu
I hope this thread helps solve your issue.
Cheers
01-30-2023 03:18 PM
Thanks, Jensen. It works for me!
11-30-2023 07:32 AM
Unfortunately, I just encountered this error too and followed your solution, but it's still not working. The Usage + quotas page on Azure shows 4 cores used out of 10 (6 remaining), and the required Databricks compute is 4 cores. In my case, though, I used a single-node cluster. I strongly suspect I have to switch to a multi-node cluster and then request an increase in cores from Azure. I'll be back with an update!
01-26-2023 11:20 AM
@Simon Xu
I suspect that DLT is trying to grab machine types that you simply have zero quota for in your Azure account. By default, the machine types below get requested behind the scenes for DLT:
AWS: c5.2xlarge
Azure: Standard_F8s
GCP: e2-standard-8
You can also set them explicitly in the pipeline's cluster settings.
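As a rough sketch, overriding the default in the pipeline settings JSON looks like this on Azure (the node type here is only an example; pick a family you actually have quota for):
{
  "clusters": [
    {
      "label": "default",
      "node_type_id": "Standard_DS3_v2"
    }
  ]
}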
Regards,
Arpit Khare
12-21-2023 02:26 AM
I followed @arpit's suggestion, set the cluster configuration explicitly in the JSON file, and that solved the issue.
a week ago
You can create an instance pool in Databricks under Compute > Pools and reference its ID in the JSON of the DLT pipeline. With this, you control the pipeline's min and max workers, and idle instances in the pool can be reused by other pipelines. Since the pool defines the instance type, node_type_id can be dropped from the cluster settings:
"node_type_id": "i4i.xlarge",
"driver_node_type_id": "i4i.2xlarge",
"instance_pool_id": "pool id from the pool configured in pools ",
"driver_instance_pool_id": "pool id from the pool configured in pools ",
"autoscale": {
"min_workers": 1,
"max_workers": 6,
"mode": "ENHANCED"
}
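For context: the pool itself carries its own sizing and instance type (min idle instances, max capacity), configured under Compute > Pools; the autoscale block above only bounds how many workers this pipeline draws from that pool.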