Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

carlosancassani
by New Contributor III
  • 2069 Views
  • 3 replies
  • 5 kudos

Error: Credential size is more than configured size limit. As a result credential passthrough won't work for this notebook run.

I get this error when trying to execute parallel slave notebooks from a PySpark "master notebook". Note 1: I use the same class, functions, cluster, and credentials for another parallel-notebook use case in the same Databricks instance, and it works fine. Note...
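
For context, the usual pattern for launching child notebooks in parallel from a master notebook is a thread pool around `dbutils.notebook.run`. A minimal sketch of that pattern is below; `run_child` stands in for `dbutils.notebook.run` (which only exists inside a Databricks notebook), and the notebook paths are hypothetical. With credential passthrough, each of these parallel runs carries the caller's serialized credential, which is where a size limit can be hit.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for dbutils.notebook.run(path, timeout, args),
# which is only available inside a Databricks notebook.
def run_child(path, params):
    return f"ran {path} with {params}"

# Hypothetical child notebook paths.
paths = ["/Repos/etl/child_a", "/Repos/etl/child_b"]

# Each worker thread triggers a child notebook run; with credential
# passthrough, every run carries the caller's credential along.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(lambda p: run_child(p, {"date": "2023-01-01"}), paths))

print(results)
```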

Latest Reply
Anonymous
Not applicable
  • 5 kudos

Hi @carlosancassani​ Hope all is well! Just wanted to check in: were you able to resolve your issue, and would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help. We'd love to hear from you. Th...

2 More Replies
benydc
by New Contributor II
  • 998 Views
  • 0 replies
  • 2 kudos

Is it possible to connect to IPython Kernel from local or client outside databricks cluster?

When looking at the standard output of a notebook run on a cluster, we get this message: "To connect another client to this kernel, use: /databricks/kernel-connections-dj8dj93d3d3.json". Is it possible to connect to the Databricks IPython kernel and ma...
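
The file mentioned in that message follows the standard Jupyter kernel connection-file format. A sketch of what it contains is below; all values are hypothetical, and whether a remote client (e.g. `jupyter_client.BlockingKernelClient`) can actually reach those ports depends on network access to the driver, which clusters typically do not expose.

```python
import json

# Example contents of a Jupyter kernel connection file (values hypothetical).
# The real file lives at the path printed in the driver's stdout, e.g.
# /databricks/kernel-connections-<id>.json
conn = json.loads("""
{
  "ip": "127.0.0.1",
  "transport": "tcp",
  "shell_port": 57503,
  "iopub_port": 57504,
  "stdin_port": 57505,
  "control_port": 57506,
  "hb_port": 57507,
  "signature_scheme": "hmac-sha256",
  "key": "secret-key"
}
""")

# A Jupyter client would connect its shell channel to this address; the
# ports bind on the driver, so an outside client would need a tunnel.
shell_addr = f"{conn['transport']}://{conn['ip']}:{conn['shell_port']}"
print(shell_addr)
```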

darshan
by New Contributor III
  • 1538 Views
  • 2 replies
  • 1 kudos

job init takes longer than notebook run

I am trying to understand why running a job takes longer than running the notebook manually. And if I try to run jobs concurrently using workflows or threads, is there a way to reduce job init time?

Latest Reply
Vivian_Wilfred
Databricks Employee
  • 1 kudos

Hi @darshan doshi​, Jobs creates a job cluster in the backend before it starts the task execution, and this cluster creation may take extra time compared to running a notebook on an existing cluster. 1) If you run a multi-task job, you could selec...
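
Sharing one job cluster across the tasks of a multi-task job, as the reply suggests, looks roughly like this in Jobs API 2.1 terms (a sketch; the job name, notebook paths, and node types are hypothetical):

```json
{
  "name": "etl-job",
  "job_clusters": [
    {
      "job_cluster_key": "shared_cluster",
      "new_cluster": {
        "spark_version": "11.3.x-scala2.12",
        "node_type_id": "i3.xlarge",
        "num_workers": 2
      }
    }
  ],
  "tasks": [
    {
      "task_key": "task_a",
      "job_cluster_key": "shared_cluster",
      "notebook_task": { "notebook_path": "/Repos/etl/task_a" }
    },
    {
      "task_key": "task_b",
      "depends_on": [ { "task_key": "task_a" } ],
      "job_cluster_key": "shared_cluster",
      "notebook_task": { "notebook_path": "/Repos/etl/task_b" }
    }
  ]
}
```

Because both tasks reference the same `job_cluster_key`, the cluster is created once for the run rather than once per task, which is where the init time is saved.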

1 More Replies
Laniel
by New Contributor
  • 1106 Views
  • 1 replies
  • 0 kudos

How do you get the cost of a notebook run?

Latest Reply
Rheiman
Contributor II
  • 0 kudos

You can check your cloud provider's portal. Go to the subscription > costs field and you should be able to see the costs of the VMs and Databricks. For more granular information, consider installing Overwatch. Environment Setup :: Overwatch (databric...
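
As a rough back-of-envelope alongside the portal numbers: the cost of a run is the VM charge plus the DBU charge for the cluster over its runtime. A sketch is below; every rate in it is an illustrative assumption, so substitute your actual VM price and the DBU rate for your workload tier.

```python
# Back-of-envelope cost of a notebook run. All rates are assumptions;
# check your cloud VM pricing and your Databricks DBU rate per SKU.
vm_price_per_hour = 0.40   # USD per VM per hour (assumption)
dbu_per_vm_hour = 0.75     # DBUs emitted per VM per hour (assumption)
dbu_price = 0.15           # USD per DBU for the workload tier (assumption)

num_vms = 3                # e.g. 1 driver + 2 workers
runtime_hours = 2.0        # wall-clock time the cluster ran

vm_cost = vm_price_per_hour * num_vms * runtime_hours
dbu_cost = dbu_per_vm_hour * num_vms * dbu_price * runtime_hours
total = vm_cost + dbu_cost
print(total)
```

Tools like Overwatch do essentially this accounting per cluster and per job, but from the actual usage logs rather than assumed rates.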
