Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

labromb
by Contributor
  • 1072 Views
  • 0 replies
  • 1 kudos

Capturing notebook return codes in Databricks jobs

Hi, I am currently running a number of notebook jobs from Azure Data Factory. A new requirement has come up where I need to capture a return code in ADF that has been generated from the notebook. I tried using dbutils.notebook.exit(json.dumps({"return_v...

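A minimal sketch of the pattern the question describes, assuming the truncated snippet returns a JSON payload (the key name return_value is an assumption based on the snippet): dbutils.notebook.exit hands a string back to the caller, and ADF can read it from the Notebook activity's runOutput.

```python
# Databricks notebook cell: end the run and hand a status payload back
# to the caller. The key name "return_value" is an assumption based on
# the truncated snippet above.
import json

result = {"return_value": 0, "message": "load completed"}

# dbutils.notebook.exit() stops the notebook and returns a string; in ADF
# it surfaces on the Notebook activity output as
# @activity('RunNotebook').output.runOutput
dbutils.notebook.exit(json.dumps(result))
```
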
RJB
by New Contributor II
  • 11113 Views
  • 6 replies
  • 0 kudos

Resolved! How to pass outputs from a python task to a notebook task

I am trying to create a job which has 2 tasks as follows: a Python task which accepts a date and an integer from the user and outputs a list of dates (say, a list of 5 dates in string format), and a notebook which runs once for each of the dates from the d...

Latest Reply
BilalAslamDbrx
Databricks Employee
  • 0 kudos

Just a note that this feature, Task Values, has been generally available for a while.

5 More Replies
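A minimal sketch of the Task Values pattern mentioned in the reply, assuming a job where the notebook task depends on a Python task with task key generate_dates (all names are illustrative):

```python
# In the upstream Python task (task key "generate_dates"):
dates = ["2023-01-01", "2023-01-02", "2023-01-03", "2023-01-04", "2023-01-05"]
dbutils.jobs.taskValues.set(key="dates", value=dates)

# In the downstream notebook task, read the value back:
dates = dbutils.jobs.taskValues.get(
    taskKey="generate_dates",      # task key of the producing task
    key="dates",
    debugValue=["2023-01-01"],     # used only when run outside a job
)
for d in dates:
    print(d)
```
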
kjoth
by Contributor II
  • 14277 Views
  • 8 replies
  • 3 kudos

Where are the cluster logs of Databricks Jobs stored?

I'm running a scheduled job on job clusters. I didn't specify the log location for the cluster. Where can we find the stored logs? Yes, I can see the logs in the runs, but I need the logs location.

Latest Reply
kjoth
Contributor II
  • 3 kudos

Hi @Sai Kalyani P, yes it helped. Thanks!

7 More Replies
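If no destination is configured, the logs are only visible in the run UI. For a durable location, a cluster_log_conf can be set on the job cluster; a sketch of the relevant cluster-spec fragment, with an illustrative DBFS path:

```python
# Fragment of a job's new_cluster spec: ship driver/executor logs to DBFS.
# The destination path is illustrative.
new_cluster = {
    "spark_version": "13.3.x-scala2.12",
    "node_type_id": "Standard_DS3_v2",
    "num_workers": 2,
    "cluster_log_conf": {
        "dbfs": {"destination": "dbfs:/cluster-logs"}
    },
}
# Logs are delivered under dbfs:/cluster-logs/<cluster-id>/driver,
# .../executor and .../eventlog.
```
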
askme
by New Contributor II
  • 1828 Views
  • 2 replies
  • 2 kudos

Databricks jobs Update/Reset API throws an unexpected error

{ "error_code": "INVALID_PARAMETER_VALUE", "message": "Missing required field: job_id"}I have a test job cluster and I need to update the docker image filed with the other version using reset/update job API. I went through the documentation of data b...

Latest Reply
Vidula
Honored Contributor
  • 2 kudos

Hey there @radha kilaru! Hope all is well! Just wanted to check in: were you able to resolve your issue? If so, would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help. We'd love to hear from yo...

1 More Reply
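This error usually means job_id was nested inside the settings instead of sitting at the top level of the request body. A sketch against the Jobs 2.1 update endpoint (host, token, job id, and the updated field are all illustrative):

```python
import requests

HOST = "https://<workspace-url>"    # illustrative
TOKEN = "<personal-access-token>"   # illustrative

# job_id belongs at the top level, next to new_settings; putting it
# inside new_settings triggers "Missing required field: job_id".
payload = {
    "job_id": 123,                  # illustrative job id
    "new_settings": {
        "name": "test-job",         # fields to merge into the job settings
    },
}

resp = requests.post(
    f"{HOST}/api/2.1/jobs/update",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=payload,
)
resp.raise_for_status()
```

jobs/reset takes the same top-level job_id but replaces the entire settings object rather than merging fields into it.
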
Mohit_m
by Valued Contributor II
  • 4830 Views
  • 1 reply
  • 2 kudos

Resolved! Databricks jobs create API throws an unexpected error

The Databricks jobs create API throws an unexpected error. Error response: {"error_code": "INVALID_PARAMETER_VALUE", "message": "Cluster validation error: Missing required field: settings.cluster_spec.new_cluster.size"}. Any idea on this?

Latest Reply
Mohit_m
Valued Contributor II
  • 2 kudos

Could you please specify num_workers in the JSON body and try the API again? Also, another option is to configure what you want in the UI and then press the "JSON" button, which shows the corresponding JSON that you can use for the API.

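A sketch of a jobs/create body with num_workers set, following the reply (all names and the node type are illustrative):

```python
# Minimal Jobs 2.1 create payload; new_cluster needs either num_workers
# or an autoscale block, which is what the validation error points at.
payload = {
    "name": "example-job",                                   # illustrative
    "tasks": [
        {
            "task_key": "main",
            "notebook_task": {"notebook_path": "/Users/me/main"},  # illustrative
            "new_cluster": {
                "spark_version": "13.3.x-scala2.12",
                "node_type_id": "Standard_DS3_v2",
                "num_workers": 2,                            # the missing field
            },
        }
    ],
}
```
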
Anonymous
by Not applicable
  • 1383 Views
  • 1 reply
  • 1 kudos

What's the best way to develop Apache Spark Jobs from an IDE (such as IntelliJ/Pycharm)?

A number of people like developing locally using an IDE and then deploying. What are the recommended ways to do that with Databricks jobs?

Latest Reply
Anonymous
Not applicable
  • 1 kudos

The Databricks Runtime and Apache Spark use the same base API. One can create Spark jobs that run locally and have them run on Databricks with all available Databricks features. It is required that one uses SparkSession.builder.getOrCreate() to create...

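A minimal sketch of the getOrCreate() pattern described above: the same code builds a local session when run from an IDE and picks up the existing session when it runs on Databricks.

```python
from pyspark.sql import SparkSession

# Returns the active session on Databricks; builds a local one in an IDE.
spark = SparkSession.builder.appName("example").getOrCreate()

df = spark.range(10)
print(df.count())
```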