Hello Community,
We have been trying to migrate our jobs from Classic Compute to Serverless Compute. As part of this process, we have run into several challenges, and this is one of them.
When we run our existing jobs on Serverless Compute, jobs that process a small amount of data or have only a few stages work great. However, jobs that process a large amount of data with many intermediate transformations fail with the following error:
Exception: (java.lang.RuntimeException) Max iterations (1000) reached for batch Resolution, please set 'spark.sql.analyzer.maxIterations' to a larger value
This error indicates that the query plan required more than the default 1000 iterations to resolve, likely due to deeply nested logic or complex transformations in our code. However, in serverless environments, the spark.sql.analyzer.maxIterations configuration is not accessible or overridable, as it is not exposed via Spark Connect.
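For reference, here is a minimal sketch (illustrative only, not our actual job code) of what we attempted. On Classic Compute the analyzer limit can be raised at the session level (or in the cluster's Spark config), but on Serverless Compute the same call appears to be rejected because the config is not exposed through Spark Connect:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

try:
    # On Classic Compute this works: spark.sql.analyzer.maxIterations is an
    # internal Spark SQL setting and can be raised at runtime or in the
    # cluster's Spark config.
    # On Serverless Compute this call is rejected, since the config is not
    # accessible or overridable via Spark Connect.
    spark.conf.set("spark.sql.analyzer.maxIterations", "2000")
except Exception as e:
    print(f"Could not override spark.sql.analyzer.maxIterations: {e}")
```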
Has anyone faced a similar issue?
Any suggestion or recommendation is greatly appreciated.
#ServerlessCompute
#DataEngineering
#ClassicCompute-to-ServerlessCompute-Migration
#Migration
Thanks
Ramana