Hi, Databricks community. I am trying to integrate Databricks shared folder notebooks with Azure DevOps Git repositories. Can someone please point me to a basic training tutorial (or video) on how to get started and best practices?
Hi, is it possible to let regular users see all running notebooks (in the notebook panel of the cluster) on a specific cluster they can use (attach and restart)? By default, admins can see all running notebooks and users can see only their own notebo...
Hi @Philippe CRAVE​, a user can see a notebook only if they have permission on that notebook; otherwise they won't be able to see it. Unfortunately, there is no way for a normal user to see the notebooks attached to a cluster if they do not have per...
Hi, I am completely new to Databricks and have a task to unload data from a Databricks table to an S3 location using Java/SQL. Is this possible? If yes, can you please help me?
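One simple approach (a minimal PySpark sketch; the schema, table name, and bucket path below are placeholders) is to read the table and write it out to S3, assuming the cluster already has IAM access to the bucket:

# Minimal sketch: export a Databricks table to S3 as Parquet.
# "my_schema.my_table" and the s3:// path are placeholders; `spark` is the
# session predefined in Databricks notebooks.
df = spark.table("my_schema.my_table")
(df.write
   .format("parquet")    # or "csv" / "json"
   .mode("overwrite")
   .save("s3://my-bucket/exports/my_table/"))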
I'm trying to restart an existing cluster in Databricks on Azure using databricks-cli. I'm using the following command:

databricks clusters restart {"cluster_id": "0710-121255-liner30"}

But it gives me this error:

Error: Missing option "--cluster-...
Can you try:

databricks clusters restart --cluster-id <the-cluster-id>

$ databricks clusters restart --help
Usage: databricks clusters restart [OPTIONS]

  Restarts a Databricks cluster given its ID.

  If the cluster is not currently in a RUNNING st...
You've gotten familiar with Delta Live Tables (DLT) via the quickstart and getting started guide. Now it's time to tackle creating a DLT data pipeline for your cloud storage, with one line of code. Here's how it'll look when you're starting:

CREATE OR ...
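For comparison, a minimal sketch of the same idea using the DLT Python API (the table name, file format, and landing path below are placeholders, not from the original post):

import dlt

# Hypothetical names: "raw_events" and the S3 landing path are placeholders.
@dlt.table(name="raw_events", comment="Files ingested from cloud storage with Auto Loader")
def raw_events():
    return (
        spark.readStream.format("cloudFiles")     # Auto Loader source
        .option("cloudFiles.format", "json")      # format of the incoming files
        .load("s3://my-bucket/landing/events/")   # cloud storage location
    )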
Tip #3: Use JSON cluster configurations to access your storage location. Knowledge check: how do I modify DLT settings using JSON? Delta Live Tables settings are expressed as JSON and can be modified in the Delta Live Tables UI [AWS] [Azure] [GCP]. Examp...
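As an illustration, a pipeline settings fragment that attaches an instance profile to the pipeline's cluster might look like this (the pipeline name, label, and ARN are all placeholders):

{
  "name": "my-dlt-pipeline",
  "clusters": [
    {
      "label": "default",
      "aws_attributes": {
        "instance_profile_arn": "arn:aws:iam::123456789012:instance-profile/dlt-s3-access"
      }
    }
  ]
}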
I am trying to create Databricks Jobs and Delta Live Tables (DLT) pipelines using the Databricks API. I would like to have the JSON code of the Jobs and DLT pipelines in the repository (to configure the code per environment) and execute the Databricks API by passing...
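A rough sketch of that flow in Python (the workspace URL, token, and file path are placeholders; the same pattern works for the DLT endpoint, /api/2.0/pipelines):

import json
import requests

HOST = "https://<your-workspace>"    # placeholder workspace URL
TOKEN = "<personal-access-token>"    # placeholder; keep real tokens in a secret store

# Load the job definition kept in the repo, then post it to the Jobs 2.1 API.
with open("jobs/my_job.json") as f:
    job_spec = json.load(f)

resp = requests.post(
    f"{HOST}/api/2.1/jobs/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=job_spec,
)
resp.raise_for_status()
print(resp.json())    # e.g. the new job_id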
I would like to know if it is possible to include a specific commit identifier when updating a repo in a workspace via the Databricks CLI. Why? Currently we use the repos CLI to push updates to code throughout dev, test and prod (testing along the wa...
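For context, as far as I know the repos update call accepts a branch or a tag rather than a raw commit SHA, so one workaround is to tag the desired commit and check out the tag. A sketch against the Repos REST API in Python (host, token, repo ID, and tag name are placeholders):

import requests

HOST = "https://<your-workspace>"    # placeholder
TOKEN = "<personal-access-token>"    # placeholder
REPO_ID = "<repo-id>"                # placeholder; list repos via GET /api/2.0/repos

# Update the workspace repo to a tag previously pinned to the desired commit.
resp = requests.patch(
    f"{HOST}/api/2.0/repos/{REPO_ID}",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"tag": "release-1.4.2"},   # placeholder tag name
)
resp.raise_for_status()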
Databricks SQL helps query and visualize data so you can share real-time business insights with built-in dashboards or your favorite BI tools. This post helps you create queries, visualizations and dashboards and connect to your BI tools for deeper da...
Register for Databricks Office Hours: August 17 & August 31, 8:00am - 9:00am PT | 3:00pm - 4:00pm GMT. Databricks Office Hours connects you directly with experts to answer your Databricks questions. Join us to: • Troubleshoot your technical questions...
I'm facing a problem while connecting Databricks with AWS CloudWatch. I want to send certain logs to CloudWatch, but there seems to be a connectivity issue between the two parties.
Hi @Tushar Dua​, please follow the blog below, which has details on how to monitor Databricks using CloudWatch: How to Monitor Databricks with AWS CloudWatch
@RonVBrown​: Could you please refer to the link below: https://docs.databricks.com/data/data-sources/elasticsearch.html If it does not work, please try to use the OpenSearch library instead of the ES jar: https://search.maven.org/artifact/org.opensearc...
Hello, we are using the databricks-sync tool in an attempt to migrate from a legacy workspace into a new E2 account workspace. The tool exports JSON files successfully, but when I try to import, I receive various Terraform errors referencing undeclar...
Is there a way to define the notebook path based on a parameter from the calling notebook using %run? I am aware of dbutils.notebook.run(), but would like to have all the functions defined in the referenced notebook available in the calling noteboo...
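As far as I know, %run requires a literal path and can't be parameterized; dbutils.notebook.run() does accept a computed path, but it runs the target in a separate context, so its definitions don't carry over into the caller. A small sketch of the callable form (the path and parameters are placeholders):

# Runs the target notebook in its own context; functions defined there are
# NOT imported into the caller (unlike %run).
notebook_name = "shared_functions"   # placeholder, e.g. set from a widget
result = dbutils.notebook.run(f"/Shared/{notebook_name}", 600, {"env": "dev"})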
Hey everyone! I'm close but can't seem to figure this out. I'm trying to add 2 notebooks to a Databricks Job. Instead of the first command in both notebooks being a connection to an RDS/Redshift cluster, I'd prefer to make that connection once and ha...
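As far as I know, a live connection object can't be shared across notebook tasks in a job (each task runs in its own context), though small values can be handed between tasks via task values. A sketch with placeholder task and key names:

# In the first notebook task: stash a small value (not a live connection).
dbutils.jobs.taskValues.set(key="jdbc_url", value="<jdbc-url>")

# In a downstream task: read it back; taskKey is the first task's name.
url = dbutils.jobs.taskValues.get(taskKey="setup_task", key="jdbc_url", default=None)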
Due to dependencies, if one of our cells errors, we want the notebook to stop executing. We've noticed some odd behaviour when executing notebooks depending on whether "Run all cells in this notebook" is selected from the header versus "Run All Below"....
I second this request. It's odd that the behaviour is different when running all vs. running all below. Please make it consistent and document it properly.