Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Serverless Compute - Spark - Jobs failing with Max iterations (1000) reached for batch Resolution

Ramana
Valued Contributor

Hello Community,

We have been trying to migrate our jobs from Classic Compute to Serverless Compute. As part of this process, we have encountered several challenges, and this is one of them.

When we run our existing jobs on Serverless Compute, jobs that deal with a small amount of data or a small number of stages work great. But when we use Serverless Compute to process a large amount of data with a large number of intermediate transformations, the job fails with the following error:

Exception: (java.lang.RuntimeException) Max iterations (1000) reached for batch Resolution, please set 'spark.sql.analyzer.maxIterations' to a larger value

This error indicates that the query plan required more than the default 1000 iterations to resolve, likely due to deeply nested logic or complex transformations in our code. However, in serverless environments, the spark.sql.analyzer.maxIterations configuration is not accessible or overridable, as it is not exposed via Spark Connect.
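
For illustration, here is a simplified, hypothetical sketch (not our actual pipeline) of the kind of loop-built plan that can drive the analyzer past its iteration cap. On Classic Compute we could at least follow the error's own suggestion and raise the limit (e.g., spark.conf.set("spark.sql.analyzer.maxIterations", "2000")), but on Serverless that config cannot be set:

```python
from pyspark.sql import functions as F

# Assumes a Databricks notebook/job where `spark` is predefined.
df = spark.range(1_000_000).toDF("value")

# Each pass layers more nodes onto the unresolved logical plan; with enough
# passes, the analyzer's fixed-point resolution can exceed its iteration cap.
for i in range(200):
    df = (
        df.withColumn("value", F.col("value") + F.lit(i))
          .withColumn(f"stage_{i}", F.col("value") % 7)
    )

# The full accumulated plan is only analyzed at action time, which is where
# the "Max iterations (1000) reached" error surfaces.
df.write.mode("overwrite").saveAsTable("main.tmp.demo_output")  # placeholder name
```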

Has anyone faced a similar issue?

Any suggestions or recommendations are greatly appreciated.
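
One mitigation we are evaluating is to periodically materialize intermediate results so the analyzer only ever sees a shallow plan. A simplified sketch (buffer table names are placeholders, and the interval of 50 is arbitrary):

```python
from pyspark.sql import functions as F

# Assumes a Databricks notebook/job where `spark` is predefined.
# Placeholder buffer tables; we alternate between them so a write never
# overwrites the table the current DataFrame is reading from.
BUFFERS = ["main.tmp.stage_a", "main.tmp.stage_b"]

def materialize(df, n):
    # Writing out and reading back collapses the logical plan to a plain
    # table scan, resetting the analyzer's work at the cost of an extra write.
    target = BUFFERS[n % 2]
    df.write.mode("overwrite").saveAsTable(target)
    return spark.read.table(target)

df = spark.range(1_000_000).toDF("value")
writes = 0
for i in range(200):
    df = df.withColumn("value", F.col("value") + F.lit(i))
    if (i + 1) % 50 == 0:  # materialization interval is a tuning knob
        df = materialize(df, writes)
        writes += 1
```

This bounds plan depth at the cost of extra writes, so we would still prefer a config-level or platform-side fix if one exists.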


#ServerlessCompute #DataEngineering #ClassicCompute-to-ServerlessCompute-Migration #Migration

Thanks
Ramana