I'm trying to create a workflow job that fetches the notebook from a remote git repository (Bitbucket Cloud). I tried everything in the Path field and nothing is working. Note that the Bitbucket repo is already connected to Databricks and no issues che...
Hi @harraz (Customer), could you please confirm whether Files in Repos has been enabled? https://docs.databricks.com/files/workspace.html#configure-support-for-files-in-repos. You can use the command %sh pwd in a notebook inside a repo to check if Files ...
How do I set up the path to a remote notebook in Bitbucket to run as a job? I tried everything in the path and nothing is working. I keep getting this error:
Run result unavailable: run failed with error message
Notebook not found:
Note that I already connec...
Hi @mohamed harraz, could you please confirm whether Files in Repos has been enabled? https://docs.databricks.com/files/workspace.html#configure-support-for-files-in-repos. You can use the command %sh pwd in a notebook inside a repo to check if Files in...
I want to set up some email alerts for issues in the data as part of a job run, and point the user to the notebook the issue occurred in. I think this would be simple enough, but another layer is that the job is going to be run...
Hi, could you please clarify what you mean by returning the file from the remote repo? Please tag @Debayan with your next response, which will notify me. Thank you!
Hello Databricks Community, I am seeking assistance understanding the possibility and procedure of implementing a workflow restriction mechanism in Databricks. Our aim is to promote better workflow management and ensure the quality of the notebooks ...
Hello Nistrate, if I understand the question correctly, the ask is to create an approval framework/workflow for changes/commits to workflows/jobs. I don't believe this is currently supported; however, it can be supported through the use of source control...
I have a dataframe with this format of columns: [`first.second.third`, `alpha.bravo.test1`, `alpha.bravo.test2`]. I'd like to get an output dataframe with columns like this: [ `first` | `alpha` ]
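A minimal PySpark sketch of one way to read that ask: group the dotted column names by their top-level prefix and pack each group into a struct column, so the output exposes just `first` and `alpha`. The sample data and all names below are hypothetical.

from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical flat dataframe whose column names contain dots.
df = spark.createDataFrame(
    [(1, "a", "b")],
    ["first.second.third", "alpha.bravo.test1", "alpha.bravo.test2"],
)

# Group the flat columns by their top-level prefix...
groups = {}
for name in df.columns:
    groups.setdefault(name.split(".")[0], []).append(name)

# ...and pack each group into a struct column named after the prefix,
# so the result has just the columns `first` and `alpha`.
nested = df.select(
    *[
        F.struct(
            *[F.col(f"`{c}`").alias(c.split(".", 1)[1]) for c in cols]
        ).alias(prefix)
        for prefix, cols in groups.items()
    ]
)
nested.printSchema()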
Hi, Cat! I’m applying for a position at Databricks and was hoping to get some current Brickster insights. I’ve been wanting to join the company for a while!! Thanks in advance
Hi, Adam: the Repos CLI does not have specific functionality to create directories in Databricks Repos. Please check the following doc for more information: https://docs.databricks.com/dev-tools/cli/repos-cli.html. You could run databricks workspace mkd...
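For illustration, a hedged sketch that assumes the intended command is databricks workspace mkdirs (which also accepts paths under /Repos); the repo path is hypothetical, and calling it from Python with subprocess is just one option besides running it directly in a shell.

import subprocess

# Hypothetical repo path; the Databricks CLI must already be installed and configured.
repo_dir = "/Repos/my_user/my_repo/new_folder"

# Create the directory via the CLI's workspace command.
subprocess.run(["databricks", "workspace", "mkdirs", repo_dir], check=True)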
Announcing a new portfolio of Generative AI learning offerings on Databricks Academy
Today, we launched new Generative AI (including LLMs) learning offerings for everyone from technical and business leaders to data practitioners, such as Data Scientis...
I would like to extract data like ticket info, resolve time, etc., from ServiceNow in Databricks. I'm not finding much information in the community and would appreciate your guidance on this.
ServiceNow offers API capabilities. You can consume the ServiceNow API within a Databricks notebook to extract data from ServiceNow. Following is a suggested prompt to use with ChatGPT for example Python code to connect to ServiceNow's API. PROMPT: ...
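As a starting point, here is a rough sketch that pulls a few incident fields through ServiceNow's Table API with the requests library; the instance URL, credentials, and field list are placeholders, and it assumes it runs in a Databricks notebook where spark and display are available.

import requests

# Hypothetical ServiceNow instance and credentials; in practice pull these
# from a Databricks secret scope rather than hard-coding them.
instance = "https://your-instance.service-now.com"
user = "api_user"
password = "api_password"

# ServiceNow Table API: fetch a few incident fields such as number and resolve time.
resp = requests.get(
    f"{instance}/api/now/table/incident",
    auth=(user, password),
    headers={"Accept": "application/json"},
    params={
        "sysparm_fields": "number,short_description,resolved_at",
        "sysparm_limit": 100,
    },
)
resp.raise_for_status()
records = resp.json()["result"]

# Land the records in a Spark DataFrame for further processing in Databricks.
df = spark.createDataFrame(records)
display(df)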
Could you please guide me on how to create a DLT pipeline that reads data directly over JDBC? When I created the DLT pipeline it gave me an error at Setting up tables; if I run it interactively in a notebook it runs successfully, but in non-interactive mode...
What you are trying to do is not possible: DLT uses Auto Loader, not JDBC, and doesn't support custom jars (DLT is SQL/Python only). I'd skip DLT for this scenario and use an ordinary notebook; nothing wrong with that.
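A minimal sketch of that ordinary-notebook approach, assuming a hypothetical Postgres source; the URL, table, secret scope, and target table names are placeholders, and the matching JDBC driver needs to be available on the cluster.

# Read the source table over JDBC in a regular notebook.
jdbc_df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://db-host:5432/mydb")
    .option("dbtable", "public.source_table")
    .option("user", dbutils.secrets.get("my-scope", "db-user"))
    .option("password", dbutils.secrets.get("my-scope", "db-password"))
    .option("driver", "org.postgresql.Driver")
    .load()
)

# Persist to Delta so downstream tables (or a DLT pipeline) can read from it.
jdbc_df.write.format("delta").mode("overwrite").saveAsTable("bronze.source_table")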
Without downloading the files directly every time, you have to create a SQL warehouse and connect to it, for example via a JDBC connection or its REST endpoint. That way you can just use the requests library in Python (or an equivalent one in another language, like axios for JavaScript) ...
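If the requests route is the one you want, one possibility is the Databricks SQL Statement Execution REST API against the warehouse; a rough sketch under that assumption follows, with the workspace URL, token, warehouse ID, and query all placeholders.

import requests

host = "https://<your-workspace>.cloud.databricks.com"
token = "<personal-access-token>"
warehouse_id = "<sql-warehouse-id>"

# Submit a SQL statement to the warehouse and wait briefly for the result.
resp = requests.post(
    f"{host}/api/2.0/sql/statements",
    headers={"Authorization": f"Bearer {token}"},
    json={
        "warehouse_id": warehouse_id,
        "statement": "SELECT * FROM samples.nyctaxi.trips LIMIT 10",
        "wait_timeout": "30s",
    },
)
resp.raise_for_status()
payload = resp.json()

# Rows come back as a list of lists when the statement succeeds in time.
if payload["status"]["state"] == "SUCCEEDED":
    print(payload["result"]["data_array"])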
In a normal notebook I would save metadata to my Delta table using the following code:

(
    df.write
        .format("delta")
        .mode("overwrite")
        .option("userMetadata", user_meta_data)
        .saveAsTable("my_table")
)

But I couldn't find online how c...
In Delta Lake you can set up user metadata, so I will give you some tips:

from delta import DeltaTable

# Create or load your Delta table
delta_table = DeltaTable.forPath(spark, "path_to_delta_table")

# Define your user metadata
user_meta_data = {"ke...
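To tie this back to the original write, Delta Lake also honours a session-level setting, spark.databricks.delta.commitInfo.userMetadata, which stamps every commit made in that session; a small sketch below, reusing the df and table name from the question as placeholders.

import json

# Any commit made in this session carries the metadata, which is handy in
# places where the per-write .option() call isn't exposed.
user_meta_data = json.dumps({"job": "nightly_load", "run_id": "hypothetical-123"})
spark.conf.set("spark.databricks.delta.commitInfo.userMetadata", user_meta_data)

df.write.format("delta").mode("overwrite").saveAsTable("my_table")

# The metadata shows up in the table history alongside the commit.
spark.sql("DESCRIBE HISTORY my_table").select("version", "userMetadata").show(truncate=False)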
Hello everybody,I am currently trying to run some performance tests on queries in Databricks on Azure. For my tests, I am using a Classic SQL Warehouse in the SQL Editor. I have created two views that contain the same data but have different structur...
They are probably executing the same query plan, now that you say it. And yes, restarting the warehouse does theoretically work, but it isn't a nice solution. I guess I will do some restarts and build averages to have a good comparison for now.
Hi @Govardhana Reddy, thank you for your question! To assist you better, please take a moment to review the answer and let me know if it best fits your needs. Please help us select the best solution by clicking on "Select As Best" if it does. Your feed...
com.databricks.backend.common.rpc.SparkDriverExceptions$SQLExecutionException: org.apache.spark.sql.connector.catalog.CatalogNotFoundException: Catalog 'uc-dev' plugin class not found: spark.sql.catalog.uc-dev is not defined

I get the above when ...
Hi @mohamed harraz, we haven't heard from you since the last response from @karthik p, and I was checking back to see if their suggestions helped you. Otherwise, if you have any solution, please share it with the community, as it can be helpful to othe...