Data Engineering
Forum Posts

carlosancassani
by New Contributor III
  • 1141 Views
  • 3 replies
  • 5 kudos

Error: Credential size is more than configured size limit. As a result credential passthrough won't work for this notebook run.

I get this error when trying to execute a parallel slave notebook from a PySpark "master notebook". Note 1: I use the same class, functions, cluster, and credentials for another parallel-notebook use case in the same Databricks instance, and it works fine. Note...

Latest Reply
Anonymous
Not applicable
  • 5 kudos

Hi @carlosancassani​, hope all is well! Just wanted to check in: were you able to resolve your issue, and would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help. We'd love to hear from you. Th...

2 More Replies
cmilligan
by Contributor II
  • 1846 Views
  • 1 reply
  • 9 kudos

Resolved! Catch when a notebook fails and terminate command in threaded parallel notebook run

I have a command that is running notebooks in parallel using threading. I want the command to fail whenever one of the notebooks that is running fails. Right now it just continues to run the command. Below is the command line that I'm currently ru...

Latest Reply
Kaniz
Community Manager
  • 9 kudos

Hi @Coleman Milligan​, You can run multiple Azure Databricks notebooks in parallel by using the dbutils library. Here is a Python code based on the sample code from the Azure Databricks documentation on running notebooks concurrently and on notebook wo...
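The reply above is truncated, but the standard pattern it refers to is: submit each child notebook to a thread pool and call `result()` on every future, so an exception in any child fails the whole command (which is exactly what the question asks for). A minimal sketch, assuming a `runner` callable stands in for `dbutils.notebook.run` (which only exists inside a Databricks runtime); the paths are placeholders:

```python
from concurrent.futures import ThreadPoolExecutor

def run_in_parallel(paths, runner, max_workers=4):
    """Run `runner(path)` for each path in parallel threads.

    Calling result() on each future re-raises any exception from the
    worker thread, so the first failed notebook fails the whole command
    instead of being silently ignored.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(runner, p): p for p in paths}
        return {path: fut.result() for fut, path in futures.items()}

# Inside Databricks, `runner` would be something like:
#   lambda path: dbutils.notebook.run(path, timeout_seconds=3600)
```

Passing the runner in as a parameter keeps the threading logic testable outside a Databricks cluster.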

benydc
by New Contributor II
  • 611 Views
  • 0 replies
  • 1 kudos

Is it possible to connect to IPython Kernel from local or client outside databricks cluster?

When looking in the standard output of a notebook run in a cluster, we get this message: "To connect another client to this kernel, use: /databricks/kernel-connections-dj8dj93d3d3.json". Is it possible to connect to the Databricks IPython kernel and ma...
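The file named in that message is a standard Jupyter kernel connection file (ZeroMQ ports plus an HMAC key); a client such as `jupyter_client`'s `BlockingKernelClient` can load one, though the driver's ports are normally not reachable from outside the cluster without tunneling. A sketch of what the file contains and how its endpoints are derived, using a made-up sample (all values are placeholders, not a real Databricks file):

```python
import json

# Made-up connection file in the standard Jupyter format; the real
# /databricks/kernel-connections-<id>.json has the same fields.
sample = json.loads("""{
    "shell_port": 53794, "iopub_port": 53795, "stdin_port": 53796,
    "control_port": 53797, "hb_port": 53798,
    "ip": "127.0.0.1", "transport": "tcp",
    "key": "example-hmac-key", "signature_scheme": "hmac-sha256"
}""")

def kernel_endpoints(conn):
    # Build the ZeroMQ endpoint for each channel the kernel exposes.
    ports = {k[:-5]: v for k, v in conn.items() if k.endswith("_port")}
    return {name: f"{conn['transport']}://{conn['ip']}:{port}"
            for name, port in ports.items()}

endpoints = kernel_endpoints(sample)
```

A client that holds this file (and can reach those ports) talks to the same kernel the notebook uses: `shell` for execute requests, `iopub` for output broadcasts, `hb` for heartbeat.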

darshan
by New Contributor III
  • 854 Views
  • 3 replies
  • 1 kudos

job init takes longer than notebook run

I am trying to understand why running a job takes longer than running the notebook manually. And if I try to run jobs concurrently using workflows or threads, is there a way to reduce job init time?

Latest Reply
Vivian_Wilfred
Honored Contributor
  • 1 kudos

Hi @darshan doshi​, Jobs creates a job cluster in the backend before it starts the task execution, and this cluster creation may take extra time compared to running a notebook on an existing cluster. 1) If you run a multi-task job, you could selec...
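One way to avoid that cluster-creation wait, per the reply above, is to point the job at an already-running all-purpose cluster instead of a fresh job cluster. A sketch of what that looks like in a Jobs API 2.1 create-job payload, written as a Python dict; the job name, cluster ID, and notebook path are all placeholders:

```python
# Hypothetical Jobs API 2.1 payload: `existing_cluster_id` makes each
# run reuse a running all-purpose cluster, skipping job-cluster startup.
job_payload = {
    "name": "example-parallel-job",     # placeholder name
    "max_concurrent_runs": 4,           # allow concurrent runs
    "tasks": [
        {
            "task_key": "child_notebook",
            "existing_cluster_id": "0123-456789-abcdefgh",  # placeholder
            "notebook_task": {"notebook_path": "/Repos/example/child"},
        }
    ],
}
```

The trade-off: an all-purpose cluster eliminates init time but bills at a higher DBU rate than a job cluster, so it mainly pays off for many short, frequent runs.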

2 More Replies
Laniel
by New Contributor
  • 584 Views
  • 1 reply
  • 0 kudos

How do you get the cost of a notebook run?

Latest Reply
Rheiman
Contributor II
  • 0 kudos

You can check your cloud provider's portal. Go to the subscription > costs field and you should be able to see the costs of the VMs and Databricks. For more granular information, consider installing Overwatch. Environment Setup :: Overwatch (databric...
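To make the reply concrete: a Databricks notebook run generates two bills, the DBU charge from Databricks and the VM charge from the cloud provider, and a rough estimate is just their sum. A back-of-the-envelope sketch; every rate below is a made-up placeholder, since real DBU and VM prices depend on your SKU, cloud, and instance type:

```python
def estimate_run_cost(hours, num_workers, dbu_per_node_hour,
                      dbu_price, vm_price_per_hour):
    # Databricks side of the bill: DBUs consumed times price per DBU.
    dbu_cost = hours * num_workers * dbu_per_node_hour * dbu_price
    # Cloud side of the bill: the underlying VMs, billed by the provider.
    vm_cost = hours * num_workers * vm_price_per_hour
    return dbu_cost + vm_cost

# Example with placeholder rates (not real prices):
# 2-hour run, 4 nodes, 1.5 DBU/node-hour, $0.30/DBU, $0.50/VM-hour.
cost = estimate_run_cost(2, 4, 1.5, 0.30, 0.50)
```

Tools like Overwatch do essentially this attribution per run and per cluster, but from actual usage logs rather than estimated rates.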
