Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Migration of Synapse Databricks activity executions from All-purpose cluster to New job cluster

IshaBudhiraja
New Contributor II

Hi,

We have been planning to migrate the Synapse Databricks activity executions from an 'All-purpose cluster' to a 'New job cluster' to reduce overall cost. We are using Standard_D3_v2 as the cluster node type, which has 4 CPU cores per node. The current quota for the Standard Dv2 family is 50 vCPUs.

As per the calculation: number of worker nodes = total vCPU quota / vCPUs per node.

In this case: 50 vCPUs / 4 vCPUs per node = 12.5.

Since we can't have a fraction of a node, we selected 12 worker nodes while creating the Linked Service for the new job cluster.

While running the Synapse pipeline, we are getting this error: "Reason: INVALID_ARGUMENT (CLIENT_ERROR). Parameters: databricks_error_message: Operation could not be completed as it results in exceeding approved standardDv2Family Cores quota."

Attaching the screenshot of the error message for your reference:

[Screenshot: IshaBudhiraja_0-1711688756158.png]
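A likely explanation for the quota error (an assumption, since the screenshots aren't visible here): the driver node also draws from the same Standard Dv2 family quota, so 12 workers plus 1 driver is 13 nodes × 4 vCPUs = 52 vCPUs, just over the 50-vCPU limit. A minimal sketch of that arithmetic:

```python
# Sketch (assumption): the driver node is also Standard_D3_v2 and
# counts against the same Standard Dv2 family vCPU quota.

QUOTA_VCPUS = 50        # approved Standard Dv2 family quota
VCPUS_PER_NODE = 4      # Standard_D3_v2

# Requested cluster: 12 workers + 1 driver
requested_vcpus = (12 + 1) * VCPUS_PER_NODE
print(requested_vcpus)  # 52 -> exceeds the 50-vCPU quota

# Largest worker count that still leaves room for the driver
max_workers = QUOTA_VCPUS // VCPUS_PER_NODE - 1
print(max_workers)      # 11
```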

It would be great if someone could look into this issue and share your suggestions.

2 REPLIES

Hi, thanks for sharing these details.

Could you please suggest another way to resolve this issue without exceeding the quota, for example by adjusting the number of worker nodes, the batch count, etc.?
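One option consistent with the question above (a sketch under assumptions, not a confirmed fix): cap the job cluster's worker count, e.g. via autoscaling, so that driver plus workers stay within the 50-vCPU quota. The spec below is a hypothetical Jobs-API-style cluster definition; the field names are illustrative:

```python
# Hypothetical Jobs-API-style cluster spec (field values illustrative):
# capping autoscale at 11 workers keeps 11 workers + 1 driver at
# 48 vCPUs, under the 50-vCPU Standard Dv2 quota.
new_cluster = {
    "node_type_id": "Standard_D3_v2",
    "autoscale": {"min_workers": 2, "max_workers": 11},
}

def peak_vcpus(spec: dict, vcpus_per_node: int = 4) -> int:
    """Worst-case vCPU draw: max workers plus one driver node."""
    return (spec["autoscale"]["max_workers"] + 1) * vcpus_per_node

print(peak_vcpus(new_cluster))  # 48
```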

Hi,

We are currently working on resolving the issue without exceeding the quota, if possible.

Here are our observations:

1. Notebook Execution Time Difference:

  • When running a notebook activity on the interactive cluster, it completed in 5 minutes and 7 seconds. However, the same notebook took 1 hour and 32 minutes to execute on the job cluster.

Attaching the configuration details for both clusters:

Interactive Cluster:

[Screenshot: IshaBudhiraja_0-1712085080441.png]

New Job Cluster:

[Screenshot: IshaBudhiraja_1-1712085080444.png]

 

2. Job Cluster with Node Type Standard_D8ds_v4:

  • We get the error 'Node type Standard_D8ds_v4 is not supported' when we try to use a new job cluster with node type Standard_D8ds_v4, even though that node type appears in the supported list included in the error message.

[Screenshot: IshaBudhiraja_2-1712085080447.png]
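For observation 2, one way to cross-check the error is to ask the workspace directly which node types it supports, via the Databricks Clusters REST API. A minimal sketch, assuming `DATABRICKS_HOST` and `DATABRICKS_TOKEN` environment variables are set for the workspace:

```python
# Sketch: list the node types this workspace reports as available,
# to verify whether Standard_D8ds_v4 is usable for job clusters.
# Assumptions: DATABRICKS_HOST (e.g. https://adb-...azuredatabricks.net)
# and DATABRICKS_TOKEN environment variables are set.
import json
import os
import urllib.request

def parse_node_ids(payload: dict) -> set:
    """Extract node_type_ids from a clusters/list-node-types response."""
    return {nt["node_type_id"] for nt in payload["node_types"]}

def supported_node_ids() -> set:
    host = os.environ["DATABRICKS_HOST"]
    req = urllib.request.Request(
        f"{host}/api/2.0/clusters/list-node-types",
        headers={"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return parse_node_ids(json.load(resp))

if __name__ == "__main__":
    print("Standard_D8ds_v4" in supported_node_ids())
```

If the node type is absent from the response, it is not enabled for this workspace or region, regardless of what the error message lists.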

 

It would be helpful if you could check these issues and provide us with your suggestions.
