Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

him
by New Contributor III
  • 15371 Views
  • 10 replies
  • 7 kudos

I am getting the below error while making a GET request to a job in Databricks after successfully running it

"error_code": "INVALID_PARAMETER_VALUE",  "message": "Retrieving the output of runs with multiple tasks is not supported. Please retrieve the output of each individual task run instead."}

Latest Reply
SANKET
New Contributor II

Use https://<databricks-instance>/api/2.1/jobs/runs/get?run_id=xxxx. The "get-output" endpoint returns the output of a single task run, not of the job run as a whole.
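For example, a minimal sketch with the Python requests library (host, token, and run_id are placeholders):

    import requests

    HOST = "https://<databricks-instance>"    # workspace URL placeholder
    HEADERS = {"Authorization": "Bearer <personal-access-token>"}

    # 1) jobs/runs/get on the parent run returns one entry per task in "tasks"
    run = requests.get(f"{HOST}/api/2.1/jobs/runs/get",
                       headers=HEADERS, params={"run_id": 12345}).json()

    # 2) jobs/runs/get-output only works per task run, so call it once per task
    for task in run.get("tasks", []):
        out = requests.get(f"{HOST}/api/2.1/jobs/runs/get-output",
                           headers=HEADERS,
                           params={"run_id": task["run_id"]}).json()
        print(task["task_key"], out.get("notebook_output"))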

9 More Replies
MrJava
by New Contributor III
  • 8409 Views
  • 14 replies
  • 12 kudos

How to know who started a job run?

Hi there! We have different jobs/workflows configured in our Databricks workspace running on AWS and would like to know who actually started a job run. Are they started by a user or by a service principal using curl? Currently one can only see who is t...

Latest Reply
hodb
New Contributor II

For some reason the user_identity.email includes only "unknown" or "System-User". Any ideas how to fix this so it includes the name of the person that triggered the job?
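If the workspace has system tables enabled, the audit log can sometimes tell more than the run page does. A hedged sketch (the system.access.audit schema and the jobs action names are assumptions to verify against your workspace):

    # Who (user or service principal) triggered job runs recently
    spark.sql("""
        SELECT event_time, user_identity.email, action_name, request_params
        FROM system.access.audit
        WHERE service_name = 'jobs'
          AND action_name IN ('runNow', 'submitRun')
        ORDER BY event_time DESC
        LIMIT 20
    """).show(truncate=False)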

13 More Replies
Erik
by Valued Contributor II
  • 3508 Views
  • 3 replies
  • 5 kudos

Expected latency / batch duration for a simple streaming job?

What are "reasonable"/"normal" batch durations for simple (no real processing, just adding a few simple fields) streaming jobs into/from Delta Lake? We have set up a simple test case here where we are streaming from Azure Event Hubs, generating a new mes...
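One way to put numbers on this is to read the per-batch timings Structured Streaming already reports. A self-contained sketch using the built-in rate source in place of Event Hubs:

    from pyspark.sql import functions as F

    # Trivial stream: rate source -> add a derived column -> in-memory sink
    stream = (spark.readStream.format("rate")
                  .option("rowsPerSecond", 100).load()
                  .withColumn("doubled", F.col("value") * 2))

    query = stream.writeStream.format("memory").queryName("latency_test").start()

    # After a few batches: reported trigger execution time per batch, in ms
    for p in query.recentProgress:
        print(p["batchId"], p["durationMs"]["triggerExecution"])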

2 More Replies
ImAbhishekTomar
by New Contributor III
  • 8778 Views
  • 7 replies
  • 4 kudos

kafkashaded.org.apache.kafka.common.errors.TimeoutException: topic-downstream-data-nonprod not present in metadata after 60000 ms.

I am facing an error when trying to write data to Kafka using Spark Structured Streaming.
#Extract
source_stream_df = (spark.readStream
    .format("cosmos.oltp.changeFeed")
    .option("spark.cosmos.container", PARM_CONTAINER_NAME)
    .option("spark.cosmos.read.inferSchema.en...

Latest Reply
devmehta
New Contributor III

Which Event Hubs namespace tier were you using? I had the same problem and resolved it by changing the pricing plan from Basic to Standard, as the Kafka endpoint is not supported in the Basic plan. Let me know if it was anything else. Thanks
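For later readers, a minimal sketch of a stream writing to the Kafka endpoint of an Event Hubs namespace (Standard tier or above; namespace, topic, checkpoint path, and connection string are placeholders, and the shaded JAAS class name matches the kafkashaded prefix in the error above):

    EH_SERVER = "<namespace>.servicebus.windows.net:9093"
    EH_CONN = "<event-hubs-connection-string>"
    JAAS = ("kafkashaded.org.apache.kafka.common.security.plain.PlainLoginModule "
            f'required username="$ConnectionString" password="{EH_CONN}";')

    (source_stream_df.writeStream.format("kafka")
        .option("kafka.bootstrap.servers", EH_SERVER)
        .option("kafka.security.protocol", "SASL_SSL")
        .option("kafka.sasl.mechanism", "PLAIN")
        .option("kafka.sasl.jaas.config", JAAS)
        .option("topic", "topic-downstream-data-nonprod")
        .option("checkpointLocation", "/tmp/checkpoints/eh-demo")
        .start())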

6 More Replies
karolinalbinsso
by New Contributor II
  • 2864 Views
  • 2 replies
  • 3 kudos

Resolved! How to access the job scheduling date from within the notebook?

I have created a job that contains a notebook that reads a file from Azure Storage. The file name contains the date when the file was transferred to the storage. A new file arrives every Monday, and the read job is scheduled to run every Monday. I...

Latest Reply
Hubert-Dudek
Esteemed Contributor III

Hi, I guess the files are in the same directory structure, so you can use the cloud files Auto Loader. It will incrementally read only new files: https://docs.microsoft.com/en-us/azure/databricks/spark/latest/structured-streaming/auto-loader So it will ...
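A minimal Auto Loader sketch (format, paths, and the schema location are placeholders; the _metadata column is available on newer runtimes):

    # Incrementally pick up only files that are new since the last run
    df = (spark.readStream.format("cloudFiles")
            .option("cloudFiles.format", "csv")
            .option("cloudFiles.schemaLocation", "/tmp/schemas/weekly_files")
            .load("abfss://container@account.dfs.core.windows.net/incoming/"))

    # The transfer date encoded in the file name can be recovered
    # from the source path of each row
    df = df.selectExpr("*", "_metadata.file_path AS source_file")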

1 More Replies
kjoth
by Contributor II
  • 17375 Views
  • 9 replies
  • 7 kudos

How to make the job fail via code after handling exception

Hi, we are capturing the exception if an error occurs using try/except. But we want the job status to be failed once we get the exception. What's the best way to do that? We are using PySpark.

Latest Reply
kumar_ravi
New Contributor III

You can do some hack around it:
dbutils = get_dbutils(spark)
tables_with_exceptions = []
for table_config in table_configs:
    try:
        process(spark, table_config)
    except Exception as e:
        exception_detail = f"Error p...
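The usual way to finish that pattern is to re-raise after the loop: exceptions are collected so every table is still attempted, and a final uncaught raise marks the task, and therefore the job run, as failed. A sketch:

    failures = []
    for table_config in table_configs:
        try:
            process(spark, table_config)
        except Exception as e:
            failures.append((table_config, str(e)))   # record and keep going

    # An uncaught exception at the end fails the job run
    if failures:
        raise RuntimeError(f"{len(failures)} table(s) failed: {failures}")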

8 More Replies
hanish
by New Contributor II
  • 2919 Views
  • 5 replies
  • 2 kudos

Job cluster support in jobs/runs/submit API

We are using the jobs/runs/submit API of Databricks to create and trigger a one-time run with new_cluster and existing_cluster configuration. We would like to check if there is a provision to pass "job_clusters" in this API to reuse the same cluster across...

Latest Reply
Nagrjuna
New Contributor II

Hi, any update on the above-mentioned issue? We are still unable to submit a one-time job run (api/2.0 or 2.1 jobs/runs/submit) with a shared job cluster; a new cluster has to be used for all tasks in the job.
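One workaround while runs/submit lacks this: create a short-lived job, since job_clusters is accepted by POST /api/2.1/jobs/create, then trigger it with run-now. A sketch (cluster spec and notebook paths are placeholders):

    import requests

    HOST = "https://<databricks-instance>"
    HEADERS = {"Authorization": "Bearer <token>"}

    job_spec = {
        "name": "one-time-shared-cluster",
        "job_clusters": [{"job_cluster_key": "shared",
                          "new_cluster": {"spark_version": "13.3.x-scala2.12",
                                          "node_type_id": "Standard_DS3_v2",
                                          "num_workers": 2}}],
        "tasks": [{"task_key": "t1", "job_cluster_key": "shared",
                   "notebook_task": {"notebook_path": "/Jobs/task1"}},
                  {"task_key": "t2", "job_cluster_key": "shared",
                   "depends_on": [{"task_key": "t1"}],
                   "notebook_task": {"notebook_path": "/Jobs/task2"}}],
    }
    job_id = requests.post(f"{HOST}/api/2.1/jobs/create",
                           headers=HEADERS, json=job_spec).json()["job_id"]
    requests.post(f"{HOST}/api/2.1/jobs/run-now",
                  headers=HEADERS, json={"job_id": job_id})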

4 More Replies
Mohit_m
by Valued Contributor II
  • 22981 Views
  • 3 replies
  • 4 kudos

Resolved! How to get the Job ID and Run ID and save into a database

We have a Databricks job running with a main class and a JAR file in it. Our JAR file code base is in Scala. Now, when our job starts running, we need to log the job ID and run ID into a database for future purposes. How can we achieve this?

Latest Reply
Bruno-Castro
New Contributor II

That article is for members only. Can we also specify here how to do it, for those who are not Medium members? Thanks!
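To keep the answer on-site: one documented route is to pass the IDs into the task as parameters using dynamic value references, which works for JAR main classes too. A sketch; the Python below stands in for the Scala entry point:

    # In the task settings: "parameters": ["{{job.id}}", "{{job.run_id}}"]
    import sys

    def main() -> None:
        job_id, run_id = sys.argv[1], sys.argv[2]
        # ...INSERT job_id and run_id into your database here...
        print(f"job_id={job_id} run_id={run_id}")

    if __name__ == "__main__":
        main()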

2 More Replies
cmilligan
by Contributor II
  • 3126 Views
  • 3 replies
  • 3 kudos

Dropdown for parameters in a job

I want to be able to denote the type of run from a predetermined list of values that a user can choose from when kicking off a run using different parameters. Our team does standardized job runs on a weekly cadence but can have timeframes that change...

Latest Reply
dev56
New Contributor II

Hi @cmilligan , I have a similar requirement and would really be grateful if you could provide me with any information on how to fix this issue. Thanks a lot!
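For anyone landing here later, one common approach (an assumption about the original setup, not something confirmed in this thread) is a notebook dropdown widget: interactive users pick from a fixed list, and job runs pass the same parameter by name:

    # Predetermined list of run types; "weekly" is the default
    dbutils.widgets.dropdown("run_type", "weekly", ["weekly", "monthly", "adhoc"])

    run_type = dbutils.widgets.get("run_type")
    # A job can set the same value via notebook parameters, e.g. {"run_type": "adhoc"}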

2 More Replies
lstk
by New Contributor
  • 2456 Views
  • 2 replies
  • 1 kudos

Resolved! Job ID value out of range - Azure Logic App Connector

Hello everybody, I tried to build a Logic App custom connector following this explanation: https://medium.com/@poojaanilshinde/create-azure-logic-apps-custom-connector-for-azure-databricks-e51f4524ab27. Now I run into the following problem and wante...

Latest Reply
stefnhuy
New Contributor III

Hey Lukas, I can totally relate to the frustration of encountering those confounding errors when building custom connectors in Azure Logic Apps. The "Job ID value out of range" issue can be quite perplexing, but fear not, for there's a solution on the...
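For later readers, the likely cause (an inference, not confirmed in the visible part of this thread) is that Databricks job IDs can exceed the 32-bit integer range, so a connector definition that types job_id as a plain integer overflows. Declaring the field as a 64-bit integer (or a string) in the connector's OpenAPI definition usually resolves it:

    "job_id": {"type": "integer", "format": "int64"}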

1 More Replies
brickster_2018
by Databricks Employee
  • 2680 Views
  • 2 replies
  • 0 kudos

Resolved! The driver is temporarily unavailable

My job fails with "Driver is temporarily unavailable". Apparently, it's permanently unavailable, because the job is not pausing but failing.

Latest Reply
Chalki
New Contributor III

I am facing the same issue. I am writing in batches using a simple for loop. I don't have any collect statements inside the loop. I am rewriting the partitions with partition overwrite dynamic mode in a huge, wide Delta table, several TB. The incr...

1 More Replies
ravi28
by New Contributor III
  • 14645 Views
  • 7 replies
  • 8 kudos

How to setup Job notifications using Microsoft Teams webhook ?

A couple of things I tried: 1. I created a webhook connector in Microsoft Teams and copied it to Notification destinations via the Admin page -> New destination -> from the dropdown I selected Microsoft Teams -> added the webhook URL and saved it. Outcome: I don't get the ...

Latest Reply
youssefmrini
Databricks Employee

You can set up job notifications for Databricks jobs using Microsoft Teams webhooks by following these steps:
Set up a Microsoft Teams webhook:
Go to the channel where you want to receive notifications in Microsoft Teams.
Click on the "..." icon next to...
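On the job side, once the Teams webhook is registered as a notification destination, it can be referenced from the job settings. A sketch of the relevant Jobs API 2.1 fragment, expressed as a Python dict (the destination ID is a placeholder):

    job_settings = {
        "webhook_notifications": {
            "on_failure": [{"id": "<notification-destination-id>"}],
            "on_success": [{"id": "<notification-destination-id>"}],
        }
    }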

6 More Replies
dave_hiltbrand
by New Contributor II
  • 5142 Views
  • 3 replies
  • 0 kudos

I have a job with multiple tasks running asynchronously, and I don't think it's leveraging all the nodes on the cluster based on runtime.

I have a job with multiple tasks running asynchronously, and I don't think it's leveraging all the nodes on the cluster based on runtime. I open the Spark UI for the cluster, check the executors, and don't see any tasks for my worker nodes. How ca...
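A quick sanity check for this situation (a sketch; df stands for whatever each task is processing):

    # Cores available for tasks across the cluster
    print("default parallelism:", spark.sparkContext.defaultParallelism)

    # Fewer partitions than cores means idle executors for that stage
    print("partitions:", df.rdd.getNumPartitions())

    # Spreading the data over all cores lets tasks land on every worker
    df = df.repartition(spark.sparkContext.defaultParallelism)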

Latest Reply
Anonymous
Not applicable

Hi @Dave Hiltbrand, great to meet you, and thanks for your question! Let's see if your peers in the community have an answer to your question. Thanks.

2 More Replies
Data_Analytics1
by Contributor III
  • 2240 Views
• 1 reply
  • 0 kudos

Getting JsonParseException: Unexpected character ('<' (code 60))

I have a scheduled job that is executed using a notebook. Within one of the notebook cells, there is a check to determine if a table exists. However, even when the table does exist, it incorrectly identifies it as non-existent and proceeds to execut...
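A side note for later readers: an unexpected '<' usually means a JSON parser was handed an HTML page (often an error or login page) instead of JSON. For the existence check itself, staying inside the Spark catalog avoids HTTP entirely. A sketch (the table name is a placeholder; tableExists needs a recent runtime):

    table = "my_schema.my_table"
    if spark.catalog.tableExists(table):       # True/False, does not throw
        df = spark.table(table)
    else:
        print(f"{table} not found, creating it instead")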

Latest Reply
Anonymous
Not applicable

Hi @Mahesh Chahare, great to meet you, and thanks for your question! Let's see if your peers in the community have an answer to your question. Thanks.

Pras1
by New Contributor II
  • 7799 Views
  • 2 replies
  • 2 kudos

Resolved! AZURE_QUOTA_EXCEEDED_EXCEPTION - even with more vCPUs than Databricks recommends

I am running this Delta Live Tables PoC from databricks-industry-solutions/industry-solutions-blueprints: https://github.com/databricks-industry-solutions/pos-dlt. I have Standard_DS4_v2 with 28 GB and 8 cores x 2 workers, so a total of 16 cores. This is...
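A note for later readers: this error is raised by the Azure subscription's regional vCPU quota, which is tracked per VM family and must also cover the driver node, not just the workers. Current usage versus limit can be listed with the Azure CLI (the region is a placeholder):

    az vm list-usage --location <region> --output table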

Latest Reply
Anonymous
Not applicable

Hi @Prasenjit Biswas, we haven't heard from you since the last response from @Jose Gonzalez. Kindly share the information with us, and in return, we will provide you with the necessary solution. Thanks and regards.

1 More Replies