
" Token is expiring within 30 seconds." when running a job using Databricks SDK

dzmitry_tt
New Contributor

While attempting to run a job (and get the result of the run) in an Azure DevOps environment using the Databricks SDK for Python, I got this error:

databricks.sdk.core.DatabricksError: Token is expiring within 30 seconds. (token expired).
 
The stack trace is:
run_result = w.jobs.run_now(
  File "/azp/agent/_work/r1/a/_cwp-dp-core/Databricks/s/venv/lib/python3.10/site-packages/databricks/sdk/service/_internal.py", line 45, in result
    return self._waiter(callback=callback, timeout=timeout, **kwargs)
  File "/azp/agent/_work/r1/a/_cwp-dp-core/Databricks/s/venv/lib/python3.10/site-packages/databricks/sdk/service/jobs.py", line 2720, in wait_get_run_job_terminated_or_skipped
    poll = self.get_run(run_id=run_id)
  File "/azp/agent/_work/r1/a/_cwp-dp-core/Databricks/s/venv/lib/python3.10/site-packages/databricks/sdk/service/jobs.py", line 3013, in get_run
    json = self._api.do('GET', '/api/2.1/jobs/runs/get', query=query)
  File "/azp/agent/_work/r1/a/_cwp-dp-core/Databricks/s/venv/lib/python3.10/site-packages/databricks/sdk/core.py", line 999, in do
    raise self._make_nicer_error(message=message) from None
 
The function used to run the job is databricks.sdk.service.jobs.run_now().result(), with its 'timeout' parameter set to datetime.timedelta(days=1).
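
For reference, here is a minimal, self-contained sketch of the call in question (the job ID and the client configuration are placeholders, not taken from the original post):

from datetime import timedelta
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()  # credentials resolved from the environment / config profile

# 123 stands in for the actual job ID
run_result = w.jobs.run_now(job_id=123).result(timeout=timedelta(days=1))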

Also, I could not reproduce this in my local environment: there, a job run that took longer than an hour returned its result without any errors.

1 REPLY

Kaniz
Community Manager

Hi @dzmitry_tt, there are some possible solutions or workarounds for this issue:

  • You can try using Azure AD auth instead of dbutils.secrets.get(), as suggested by this answer (a configuration sketch follows this list). Azure AD auth uses a different token format and lifetime than dbutils.secrets.get(), which may avoid the expiration error.
  • You can try configuring token lifetimes in your Databricks workspace settings, as suggested by this post. You can set the maximum token lifetime and refresh interval for your workspace or cluster.
  • You can try moving your job to a different workspace that has access to the same data sources as the original one. This may solve the problem if your job is trying to reach a secret vault or database that is not available from your current workspace.
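
As a rough illustration of the first suggestion, here is a minimal sketch of pointing the SDK at an Azure service principal (Azure AD) instead of a static token; the host, tenant ID, client ID, and secret values below are placeholders:

from databricks.sdk import WorkspaceClient

# Azure AD (service principal) authentication; the SDK refreshes these
# OAuth tokens automatically, so a long-running wait should not hit a
# hard expiry the way a static token can.
w = WorkspaceClient(
    host="https://adb-<workspace-id>.<suffix>.azuredatabricks.net",
    azure_tenant_id="<tenant-id>",
    azure_client_id="<client-id>",
    azure_client_secret="<client-secret>",
)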

I hope these suggestions help you resolve your issue. If you have any other questions, please feel free to ask me. 😊
