While trying to install the ffmpeg package using an init script on a Databricks cluster, it fails with the below error.

Init script:

#!/bin/bash
set -e
sudo apt-get update
sudo apt-get -y install ffmpeg

Error message:

E: Failed to fetch http://security.ubuntu...
Cause: The VMs are pointing to a cached old mirror that is no longer up-to-date, so the package download fails. Workaround: Use the below init script to install the "ffmpeg" package. To revert to the original lis...
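A minimal sketch of such a workaround script, assuming the fix is to repoint apt at the canonical Ubuntu archive before installing; the sed pattern and mirror URL are illustrative, so match them to what is actually in your /etc/apt/sources.list:

#!/bin/bash
set -e
# Assumption: the cached regional mirror is stale, so rewrite sources.list to
# use the canonical archive. Adjust the pattern for your region and release.
sudo sed -i 's|http://[a-z0-9.-]*\.ubuntu\.com|http://archive.ubuntu.com|g' /etc/apt/sources.list
sudo apt-get clean
sudo apt-get update
sudo apt-get -y install ffmpeg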
I need to retrieve the job id and run id of the job from a jar file in Scala. When I try to compile the below code in IntelliJ, the below error is shown.

import com.databricks.dbutils_v1.DBUtilsHolder.dbutils

object MainSNL {
  @throws(classOf[Exception])
  de...
Maybe it's worth going through the Task Parameter variables section of the doc below:
https://docs.databricks.com/data-engineering/jobs/jobs.html#task-parameter-variables
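For context, Databricks exposes {{job_id}} and {{run_id}} as task parameter variables, so instead of resolving dbutils inside the jar you can pass them in as program arguments. A hedged sketch, reusing the MainSNL object from the question; the argument layout is illustrative:

// In the jar task's parameters, pass ["{{job_id}}", "{{run_id}}"];
// Databricks substitutes the real values at run time.
object MainSNL {
  def main(args: Array[String]): Unit = {
    val jobId = args(0) // filled from {{job_id}}
    val runId = args(1) // filled from {{run_id}}
    println(s"jobId=$jobId runId=$runId")
  }
}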
The Databricks jobs create API throws an unexpected error.

Error response:

{"error_code": "INVALID_PARAMETER_VALUE", "message": "Cluster validation error: Missing required field: settings.cluster_spec.new_cluster.size"}

Any idea on this?
Could you please specify num_workers in the JSON body and try the API again? Another option is to configure what you want in the UI, then press the "JSON" button, which shows the corresponding JSON that you can use with the API.
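A minimal sketch of a request body with num_workers set; the job name, Spark version, node type, and notebook path below are placeholders for whatever is valid in your workspace:

{
  "name": "example-job",
  "new_cluster": {
    "spark_version": "10.4.x-scala2.12",
    "node_type_id": "i3.xlarge",
    "num_workers": 2
  },
  "notebook_task": {
    "notebook_path": "/path/to/notebook"
  }
}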
Hi Johan, were you able to resolve the correlated column exception issue? I have been stuck on this for the past week. If you can guide me, that will be a lot of help. Thanks.
Seems to be a duplicate of your comment on https://community.databricks.com/s/question/0D53f00001XCuCACA1/correlated-column-exception-in-sql-udf-when-using-udf-parameters. I guess you did that to be able to add other tags?
I tried using dbutils.notebook.run(notebook.path, notebook.timeout, notebook.parameters), but it takes 20 seconds to start a new session. %run uses the same session, but I cannot figure out how to use it to run notebooks concurrently.
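%run cannot be parallelized, so the usual pattern is to keep dbutils.notebook.run but launch several calls at once from the driver, overlapping the startup cost. A minimal Scala sketch with hypothetical notebook paths and parameters:

import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

// Hypothetical paths; each run still gets its own ephemeral session, but the
// 20-second startups now happen in parallel instead of back to back.
val notebooks = Seq("/path/to/notebookA", "/path/to/notebookB")

val runs = notebooks.map { path =>
  Future {
    dbutils.notebook.run(path, 3600, Map("param" -> "value"))
  }
}

// Block until every notebook finishes and collect their exit values.
val results = Await.result(Future.sequence(runs), 2.hours)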
What is special about Databricks? What does Databricks provide that no other tool in the market provides? How can I convince someone to use Databricks rather than some other tool?
Hi @Abdullah Durrani, please read these articles:
https://www.serveradminz.com/blog/databricks-an-advanced-analytics-solution/
https://www.bluegranite.com/blog/3-...
It's just a breeze for all the streaming users. What's the best venue to learn more about it? Is there a Jira ticket that tracks all the progress? I also wonder which Spark version it will come with.
I'm sure this is probably some oversight on my part, but I don't see it. I'm trying to create a Delta table with an identity column. I've tried every combination of the syntax I can think of.

%sql
create or replace table IDS.picklist
( picklist_id...
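For reference, the clause Delta Lake expects is GENERATED ALWAYS AS IDENTITY (available on recent runtimes). A sketch using the table from the question; the second column is illustrative, since the original definition is truncated:

%sql
create or replace table IDS.picklist
( picklist_id bigint generated always as identity,
  picklist_name string -- illustrative column; the original definition is cut off
)
using delta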
Hi @Harris Riaz, I appreciate your taking the time to choose the best answer. I'm glad you got your query resolved. @rheiman, thank you for giving an excellent answer.
I have a feature table in BQ that I want to ingest into Delta Lake. This feature table in BQ has 100TB of data. The table can be partitioned by DATE. What best practices and approaches can I take to ingest this 100TB? In particular, what can I do to ...
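One common approach is to ingest one DATE partition (or a small range) at a time with the spark-bigquery connector, so each batch is bounded and a failed day can be retried on its own. A sketch with hypothetical table names and paths:

// Hypothetical date range; in practice, generate the full list of partitions.
val dates = Seq("2022-01-01", "2022-01-02")

dates.foreach { d =>
  val df = spark.read
    .format("bigquery")
    .option("table", "my_project.my_dataset.feature_table") // hypothetical table
    .option("filter", s"DATE = '$d'") // pushes partition pruning down to BQ
    .load()

  df.write
    .format("delta")
    .mode("append")
    .partitionBy("DATE")
    .save("/mnt/delta/feature_table") // hypothetical target path
}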
Hi @Niharika Modi, all materials covered in the course are available on demand under the Resources tab within the system: My Agenda > select training > Check-in > Resources
Hi @Robin Sabouri, we haven't heard from you since the last response from @Ralph David Lagos, and I was checking back to see if his suggestions helped you. Otherwise, if you have a solution, please share it with the community, as it can be helpful t...
Hi @Anthony Centeno, please go through this article, which covers your use case: Spark Streaming + Kafka Integration Guide (Kafka broker version 0.10.0 or higher)
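For orientation, a minimal Structured Streaming read from Kafka looks like the sketch below; the broker address and topic are placeholders:

// Placeholder broker and topic; works with Kafka 0.10.0+ as in the linked guide.
val stream = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "broker1:9092")
  .option("subscribe", "events")
  .load()

// Kafka delivers keys and values as binary; cast to strings before use.
val events = stream.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")

events.writeStream
  .format("console")
  .start()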
Hi @Francesco Merangolo, we haven't heard from you since the last response from @Hubert Dudek, and I was checking back to see if his suggestions helped you. Otherwise, if you have a solution, please share it with the community, as it can be helpful to...