Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

How to increase the REPL timeout to prevent timeout errors

mkwparth
New Contributor III

Hi everyone, 

I've tried setting the Spark configuration spark.databricks.repl.timeout to 300, but I’m still getting a REPL timeout error saying it took longer than 60 seconds. It seems like the configuration might be incorrect. Can someone guide me on the correct Spark config to change in order to increase the REPL timeout?

[Screenshots attached: mkwparth_0-1749620824347.png, mkwparth_1-1749620849807.png — REPL timeout error messages]


Ref to last conversation: https://community.databricks.com/t5/data-engineering/intermittent-timeout-error-while-waiting-for-py...

 

2 ACCEPTED SOLUTIONS


Saritha_S
Databricks Employee

Hi @mkwparth 

Good day!!

We see this error when the IPython kernel doesn't start within 80 seconds: the session then fails with a launch timeout error.

To overcome the issue, could you please increase the timeout by setting the config

spark.databricks.driver.ipykernel.launchTimeoutSeconds to 300 (5 minutes; the default is 1.2 minutes). Let us know if you still see the error after applying the configuration.
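For reference, a driver launch timeout like this is typically applied in the cluster's Spark config (Compute > your cluster > Advanced options > Spark) rather than from a notebook, since it is read at cluster startup — a minimal sketch, assuming the key name above:

```
spark.databricks.driver.ipykernel.launchTimeoutSeconds 300
```

After adding it, restart the cluster so the setting takes effect. Setting it at runtime with spark.conf.set from a notebook may have no effect, because the kernel has already launched by the time the notebook runs.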


mkwparth
New Contributor III

Hi @Saritha_S ,

Yes! I've applied the Spark config you suggested. I'll observe for a few days and let you know.

Thanks for your help!


3 REPLIES


mkwparth
New Contributor III

Hi @Saritha_S 
I've updated the Spark configuration as per your suggestion, but I'm still encountering issues.

Out of 6 retry attempts, 4 resulted in the following error:
com.databricks.pipelines.common.errors.deployment.DeploymentException: Communication lost with driver.

The remaining 2 attempts failed due to a REPL timeout error. See the attached screenshots.

Could you please advise on what steps I can take to prevent these errors in the future? Let me know if you need any additional information from my side.

Thanks in advance for your help!

