
Databricks notebook taking too long to run as a job compared to when triggered from within the notebook

curious-case-of
New Contributor II

I don't know if this question has been covered before, but here it goes: I have a notebook that I can run manually using the 'Run' button in the notebook, or as a job.

The runtime when I run it from within the notebook directly is roughly 2 hours. But when I execute it as a job, the runtime balloons to around 8 hours.

The piece of code that takes the longest is a call to applyInPandas, which in turn calls a pandas UDF that trains an auto_arima model (pmdarima).
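For context, the pattern looks roughly like this (a minimal sketch, not my actual notebook code; the table name, column names, schema, and 30-day horizon are all illustrative):

```python
# Minimal sketch of the applyInPandas + auto_arima pattern described above.
# Table and column names, the schema, and the 30-day horizon are illustrative.
import pandas as pd
import pmdarima as pm
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, TimestampType, DoubleType

spark = SparkSession.builder.getOrCreate()

result_schema = StructType([
    StructField("series_id", StringType()),
    StructField("ds", TimestampType()),
    StructField("yhat", DoubleType()),
])

def forecast_group(pdf: pd.DataFrame) -> pd.DataFrame:
    """Fit auto_arima on one group's history and forecast the next 30 periods."""
    pdf = pdf.sort_values("ds")
    model = pm.auto_arima(pdf["y"], suppress_warnings=True)
    horizon = 30
    yhat = pd.Series(model.predict(n_periods=horizon)).to_numpy()
    future = pd.date_range(pdf["ds"].max(), periods=horizon + 1, freq="D")[1:]
    return pd.DataFrame({
        "series_id": pdf["series_id"].iloc[0],
        "ds": future,
        "yhat": yhat,
    })

# One auto_arima fit runs per group, so wall-clock time depends heavily on how
# many groups can train in parallel on the cluster.
history = spark.table("sales_history")  # hypothetical input: series_id, ds, y
forecasts = history.groupBy("series_id").applyInPandas(forecast_group, schema=result_schema)
```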

Can anyone help me figure out what might be happening? I am clueless.

Thanks!


Kaniz
Community Manager

Hi @Vidisha Kanodia,

There is a similar thread - https://community.databricks.com/s/question/0D53f00001pCk29CAC/performance-for-pyspark-dataframe-is-...

Please have a look at that.

Kaniz
Community Manager

Hi @Vidisha Kanodia, just a friendly follow-up. Do you still need help? Please let us know.

wvl
New Contributor II

We're seeing the same behavior: good performance on an interactive cluster, but poor performance on an identically sized job cluster.

Any ideas?