Quota Limit Exhausted Error when Creating Data Ingestion with SQL Server Connector (Azure)

Adam_Borlase
New Contributor III

Good Day all,

I am having an issue with our first data ingestion pipeline: I want to connect to our Azure SQL Server using our Unity Catalog connection (I can access the data in Unity Catalog).

On step 3 of the process (Source), while it is scanning the data in our database, I get a failure saying the quota has been exceeded. At this stage I have not selected which virtual machine should be used; I can see that an FS-series virtual machine has been allocated, but we are still within our quota limits.

I am trying to find out which quota needs to be increased or checked, or whether there is anything else that can be done to create the data ingestion pipeline so I can start moving our team over to Delta Lake.

I have included a picture of our quotas and the error message I get at this stage of the creation.


4 REPLIES

szymon_dybczak
Esteemed Contributor III

Hi @Adam_Borlase ,

I've checked, and by default, if you don't configure the gateway compute yourself, it creates VMs from the Standard_F4s and Standard_E4d_v4 families. So check whether you have exhausted the quota for either of those VM families.

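If it's easier than clicking through the portal, here is a rough sketch of checking those family quotas with the Azure SDK for Python (assuming the azure-identity and azure-mgmt-compute packages are installed; the subscription ID and region below are placeholders):

```python
# Rough sketch (not from the original post): list regional vCPU quota usage
# so you can see whether the FS / EDv4 families still have headroom for the
# ingestion gateway VMs.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

subscription_id = "<your-subscription-id>"  # placeholder
region = "westeurope"                       # placeholder: your workspace region

client = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

for usage in client.usage.list(region):
    name = usage.name.localized_value  # e.g. "Standard FS Family vCPUs"
    if any(key in name for key in ("FS", "EDv4", "Total Regional vCPUs")):
        print(f"{name}: {usage.current_value} of {usage.limit} vCPUs used")
```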

My finding is in line with the following thread about a similar issue:

Issue with the SQL Server ingestion in Databricks ... - Databricks Community - 122226

If you want greater control over which VMs are used for the gateway, just follow the steps in the official tutorial.


 

Adam_Borlase
New Contributor III

This has helped a lot, thanks so much for your feedback.

I have created a compute policy with specific compute nodes for DLT, but it seems that the data ingestion creation still creates a DLT node with the unrestricted policy. How can I set it to use my policy by default?

szymon_dybczak
Esteemed Contributor III

Hi @Adam_Borlase ,

To apply your policy you need to use the API (either the REST API or the Databricks CLI). It's mentioned in the docs; unfortunately, there's currently no option to do this in the UI.


Basically, you need to use the Pipelines API and, in the clusters definition, provide policy_id and set apply_policy_default_values to true.

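Something along these lines should work; this is only a rough sketch using the REST API from Python (the workspace URL, token, pipeline ID, and policy ID are placeholders you need to fill in, and the same update can be done with the Databricks CLI):

```python
# Rough sketch (placeholders throughout): attach a compute policy to an existing
# pipeline by updating its spec through the Pipelines REST API.
import requests

host = "https://adb-1234567890123456.7.azuredatabricks.net"  # placeholder workspace URL
token = "<personal-access-token>"                            # placeholder
pipeline_id = "<pipeline-id>"
policy_id = "<your-compute-policy-id>"

headers = {"Authorization": f"Bearer {token}"}

# Read the current pipeline spec so the update keeps all existing settings.
spec = requests.get(f"{host}/api/2.0/pipelines/{pipeline_id}", headers=headers).json()["spec"]

# Point every cluster in the definition at the policy and apply its defaults.
clusters = spec.get("clusters") or [{"label": "default"}]
for cluster in clusters:
    cluster["policy_id"] = policy_id
    cluster["apply_policy_default_values"] = True
spec["clusters"] = clusters

# Send the full spec back to update the pipeline.
resp = requests.put(f"{host}/api/2.0/pipelines/{pipeline_id}", headers=headers, json=spec)
resp.raise_for_status()
```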

 

Configure classic compute for Lakeflow Declarative Pipelines | Databricks on AWS

 

Adam_Borlase
New Contributor III

Thank you for all of your assistance!
