Monday
Good Day all,
I am having an issue with our first Data Ingestion Pipeline. I want to connect to our Azure SQL Server using our Unity Catalog connector (I can access the data in Unity Catalog).
When I am on Step 3 of the process (Source), while it is scanning the data in our database, I get a failure about the quota being exceeded. At this stage I have not selected which virtual machine should be used, and I can see that an FS virtual machine has been allocated, but I am still within our quota limits.
I am trying to find out which quota needs to be increased or checked, or whether there is anything else that can be done here to create the Data Ingestion Pipeline, so I can start moving our team over to Delta Lake.
I have included a picture of our quotas and the error message I am getting at this stage of the creation process.
Monday - last edited Monday
Hi @Adam_Borlase ,
To apply your policy you need to use the API (either the REST API or the Databricks CLI). It's mentioned in the docs; unfortunately, there's currently no option to do it in the UI.
Basically, you need to use the Pipelines API and, in the clusters definition, provide policy_id and set apply_policy_default_values to true:
Configure classic compute for Lakeflow Declarative Pipelines | Databricks on AWS
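If it helps, here's a rough sketch of what that could look like against the REST API. The workspace URL, token, pipeline ID and policy ID are all placeholder assumptions; the real parts are the Get/Edit pipeline endpoints and the policy_id / apply_policy_default_values cluster fields:

```python
import requests

# Assumptions: placeholder workspace URL, token, pipeline ID and policy ID.
DATABRICKS_HOST = "https://adb-1234567890123456.7.azuredatabricks.net"
TOKEN = "<personal-access-token>"
PIPELINE_ID = "<pipeline-id>"
POLICY_ID = "<cluster-policy-id>"

headers = {"Authorization": f"Bearer {TOKEN}"}

# Fetch the current pipeline spec so only the cluster settings change.
resp = requests.get(f"{DATABRICKS_HOST}/api/2.0/pipelines/{PIPELINE_ID}", headers=headers)
resp.raise_for_status()
spec = resp.json()["spec"]

# Attach the policy to the default cluster and apply its default values.
spec["clusters"] = [
    {
        "label": "default",
        "policy_id": POLICY_ID,
        "apply_policy_default_values": True,
    }
]

# Push the edited spec back via the "Edit a pipeline" endpoint.
resp = requests.put(f"{DATABRICKS_HOST}/api/2.0/pipelines/{PIPELINE_ID}", headers=headers, json=spec)
resp.raise_for_status()
print("Pipeline updated with policy", POLICY_ID)
```

The Databricks CLI can do the same thing through its pipelines commands, passing the same clusters JSON.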
Monday
Hi @Adam_Borlase ,
I've checked, and by default, if you don't configure the gateway settings yourself, it will create VMs of the following types. So check whether you've exceeded the quota for the Standard_F4s or the Standard_E4d_v4 family of VMs.
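A quick way to check those regional vCPU quotas, as a sketch using the Azure Python SDK (the subscription ID and region are placeholders you'd fill in):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

# Assumptions: placeholder subscription ID and the region of your workspace.
SUBSCRIPTION_ID = "<your-subscription-id>"
REGION = "<workspace-region>"  # e.g. "westeurope"

client = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# List regional compute quota usage and print any VM family that is close
# to its limit; look for the Standard FS / EDv4 family entries in the output.
for usage in client.usage.list(REGION):
    if usage.limit and usage.current_value >= 0.8 * usage.limit:
        print(f"{usage.name.localized_value}: "
              f"{usage.current_value}/{usage.limit} vCPUs in use")
```

The same numbers are also visible with `az vm list-usage --location <region> -o table` if you prefer the CLI.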
My finding is in line with the following thread, which describes a similar issue:
Issue with the SQL Server ingestion in Databricks ... - Databricks Community - 122226
If you want greater control over which VMs are used for the gateway, just follow the steps in the official tutorial:
Monday
Thank you for all of your assistance!