Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

pinikrisher
by New Contributor II
  • 134 Views
  • 1 reply
  • 1 kudos

Dashboard tagging

How can I tag a dashboard? I do not see any place to add tags to it.

Latest Reply
szymon_dybczak
Esteemed Contributor III

Hi @pinikrisher, unfortunately you can't. Tagging is currently supported on catalogs, schemas, tables, table columns, volumes, views, registered models, and model versions.
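
For the objects that do support tags, they can be set with SQL from a notebook. A minimal sketch, assuming a hypothetical Unity Catalog table main.sales.orders and a hypothetical cost_center tag:

```python
# Set and remove a tag on a Unity Catalog table (table and tag names are hypothetical).
# The same SET TAGS / UNSET TAGS syntax works on catalogs, schemas, columns, and volumes.
spark.sql("ALTER TABLE main.sales.orders SET TAGS ('cost_center' = 'cc-123')")
spark.sql("ALTER TABLE main.sales.orders UNSET TAGS ('cost_center')")
```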

s3anil
by New Contributor II
  • 674 Views
  • 6 replies
  • 2 kudos

Databricks dashboard deployment error

Hi, I am trying to deploy a dashboard using a bundle and a GitHub Action, but I am getting an error on CI even though the dashboard is deployed. I'm using the latest version of the CLI from https://raw.githubusercontent.com/databricks/setup-cli/main/install...

Latest Reply
s3anil
New Contributor II

@szymon_dybczak, @nayan_wylde, I checked the permissions and the SP has 'Can Manage' access on the folder.

5 More Replies
heli123
by New Contributor III
  • 269 Views
  • 2 replies
  • 2 kudos

Resolved! Lakehouse monitoring dashboard shows no data

Hello, I am replicating the demo for Lakehouse monitoring found here: https://notebooks.databricks.com/demos/lakehouse-monitoring/index.html For some reason, my dashboards show up empty, i.e., they say 'no data', like nothing fits the criteria from the ...

Data Engineering
lakehouse monitoring
ml monitoring
Latest Reply
Khaja_Zaffer
Contributor III

Hello @heli123, can you share the image again? It looks like it didn't upload well.

1 More Reply
ashfire
by New Contributor II
  • 364 Views
  • 3 replies
  • 3 kudos

Databricks model serving endpoint returns 403 Unauthorized access to workspace when using service principal

I deployed a simple Iris model in Databricks Model Serving and exposed it as an endpoint. I’m trying to query the endpoint using a service principal. I can successfully fetch the access token with the following databricks_token() function: def databri...

Latest Reply
ashfire
New Contributor II

Hi @szymon_dybczak, thanks for your comment. One of the admins in this workspace tried using the token generated via client ID and secret, and was able to successfully get a response from the serving endpoint using this same code. Coul...
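
For anyone comparing against their own setup, the service-principal flow the poster describes, fetching an OAuth token and then calling the endpoint, looks roughly like the sketch below. The host, endpoint name, and payload are hypothetical; a 403 with a valid token often means the service principal lacks CAN QUERY permission on the serving endpoint itself, separate from workspace access.

```python
import requests

HOST = "https://<workspace-host>"  # hypothetical workspace URL

def databricks_token(client_id: str, client_secret: str) -> str:
    # OAuth M2M client-credentials flow against the workspace token endpoint
    resp = requests.post(
        f"{HOST}/oidc/v1/token",
        auth=(client_id, client_secret),
        data={"grant_type": "client_credentials", "scope": "all-apis"},
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

token = databricks_token("<client-id>", "<client-secret>")
resp = requests.post(
    f"{HOST}/serving-endpoints/iris-endpoint/invocations",  # hypothetical endpoint name
    headers={"Authorization": f"Bearer {token}"},
    json={"dataframe_split": {
        "columns": ["sepal_length", "sepal_width", "petal_length", "petal_width"],
        "data": [[5.1, 3.5, 1.4, 0.2]],
    }},
)
print(resp.status_code, resp.text)
```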

2 More Replies
aranjan99
by Contributor
  • 451 Views
  • 2 replies
  • 0 kudos

How to switch a serverless DLT pipeline from performance-optimized to cost-optimized mode

We have a few serverless DLT pipelines that we want to optimize for cost, as we are OK with increased latency. Where can I change the pipeline to run in cost-optimized mode? I don't see this option in the UI or API.

Latest Reply
wawefog260
New Contributor II

Hello! To enable cost-optimized mode for your serverless DLT pipeline, switch it to Triggered mode and edit the schedule trigger; there you’ll find the option to disable “Performance optimized.” This setting isn’t visible in the main UI or API unless t...

1 More Reply
elgeo
by Valued Contributor II
  • 39569 Views
  • 13 replies
  • 6 kudos

SQL Stored Procedure in Databricks

Hello. Is there an equivalent of a SQL stored procedure in Databricks? Please note that I need a procedure that allows DML statements, not only the SELECT statements that a function provides. Thank you in advance.

Latest Reply
SanthoshU
New Contributor II

How do I connect stored procedures to Power BI Report Builder? It seems like it is not working.

12 More Replies
MauGomes
by New Contributor
  • 185 Views
  • 1 reply
  • 2 kudos

Resolved! Access to Databricks partner academy

Hi Team, my company is a Databricks Partner, but I can't get registered for the Databricks Partner Academy. I have followed these steps for Partner Academy registration: open https://partner-academy.databricks.com/learn in your web browser, click Logi...

Latest Reply
szymon_dybczak
Esteemed Contributor III

Hi @MauGomes, don't worry, you already did the best thing you could. Check the thread below, which has the exact same issue; the user submitted a ticket and it was resolved by the service desk. So just wait patiently for a reply: Solved: authorized to access https://pa...

zensardigital
by New Contributor II
  • 302 Views
  • 3 replies
  • 0 kudos

Convert a Managed Table to Streaming Table

Hi, I have applied transformations on a set of streaming tables and saved the result as a managed table. How can I change the managed table to a streaming table with minimal changes? Regards, ZD

Latest Reply
zensardigital
New Contributor II

I am just writing the dataframe to a Delta table. Are you suggesting that I first define a STREAMING TABLE (using the DLT definition) and then save the dataframe into that table?
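
If that is the direction, here is a minimal sketch of what the declarative definition could look like (the source table name is hypothetical); reading the source with readStream is what makes the declared table a streaming table:

```python
import dlt
from pyspark.sql import functions as F

# Declared with @dlt.table; because the source is read as a stream,
# the resulting table is a streaming table rather than a materialized view.
@dlt.table(name="transformed")
def transformed():
    src = spark.readStream.table("catalog.schema.source")  # hypothetical source table
    return src.withColumn("loaded_at", F.current_timestamp())
```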

2 More Replies
Naga05
by New Contributor III
  • 300 Views
  • 4 replies
  • 2 kudos

Databricks app with parameters from databricks asset bundle

Hello, I tried setting up a Databricks App using an asset bundle, where I was able to successfully parameterize the SQL warehouse ID that was specified on specific targets. However, I was unable to get the values of other variables from the targets, the...

Latest Reply
Naga05
New Contributor III

Found that this is an in-progress implementation in the Databricks CLI: https://github.com/databricks/cli/issues/3679

3 More Replies
smoortema
by Contributor
  • 334 Views
  • 2 replies
  • 3 kudos

Resolved! Handling both PySpark and Python exceptions

In a Python notebook, I am using error handling according to the official documentation: try: [some data transformation steps] except PySparkException as ex: [logging steps to log the error condition and error message in a table]. However, this catches o...

Latest Reply
mark_ott
Databricks Employee

To handle both PySpark exceptions and general Python exceptions without double-logging or overwriting error details, the recommended approach is to use multiple except clauses that distinguish the exception type clearly. In Python, exception handlers...
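
A minimal sketch of that ordering, with a hypothetical log_error() helper standing in for the table-logging steps; the PySpark handler must come first, because PySparkException is itself a subclass of Exception, and a general handler listed first would swallow it:

```python
from pyspark.errors import PySparkException

try:
    # [some data transformation steps] -- hypothetical stand-ins
    df = spark.read.table("main.logs.raw")
    df.write.mode("append").saveAsTable("main.logs.clean")
except PySparkException as ex:
    # Spark errors carry a structured error class plus message parameters
    log_error(kind=ex.getErrorClass(), message=str(ex))  # log_error() is hypothetical
except Exception as ex:
    # Plain Python errors (KeyError, ValueError, ...) land here
    log_error(kind="PYTHON_ERROR", message=str(ex))
```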

1 More Reply
tom_1
by New Contributor III
  • 1236 Views
  • 5 replies
  • 1 kudos

Resolved! BUG in Job Task of Type DBT

Hi, just wanted to let the Databricks team know that there is a bug in the task UI. Currently it is not possible to save a task of "Type: dbt" if the "SQL Warehouse" is set to "None (Manual)". Some weeks ago this was possible; also the "Profiles Direc...

Latest Reply
Aishu95
New Contributor II

I am still facing this bug. I don't want to select any SQL warehouse; what do I do? And where can I pass the profiles directory?

4 More Replies
Navi991100
by New Contributor II
  • 217 Views
  • 3 replies
  • 1 kudos

I recently made a new account on Databricks under the Free Edition

By default it created a SQL warehouse compute, but I want all-purpose compute, as I want to test and learn the capabilities of PySpark and Databricks. I can't connect to the serverless compute in the notebook; it gives an error as follows: "An error occurr...

Latest Reply
belforte
New Contributor II

In the free Databricks edition, to use PySpark you need to create and start a cluster, since a SQL warehouse is only for SQL queries. Go to Compute > Create Cluster, set up a free cluster, click Start, and then attach your notebook to it. This will ...

2 More Replies
yit
by Contributor III
  • 257 Views
  • 1 reply
  • 1 kudos

How to implement MERGE operations in Lakeflow Declarative Pipelines

Hey everyone, we’ve been using Auto Loader extensively for a while, and now we’re looking to transition to full Lakeflow Declarative Pipelines. From what I’ve researched, the reader part seems straightforward and clear. For the writer, I understand that...

Latest Reply
saurabh18cs
Honored Contributor II

Hi @yit, Lakeflow supports upsert/merge semantics natively for Delta tables, unlike foreachBatch. Instead of writing custom foreachBatch code, you declare the merge keys and update logic in your pipeline configuration. Lakeflow will automatically generate...
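
In the Python pipeline API this is the apply_changes (APPLY CHANGES) pattern; a minimal sketch with hypothetical table and column names:

```python
import dlt

# Target streaming table that the declarative merge writes into
dlt.create_streaming_table("customers")

# Declarative upsert: keys and ordering replace hand-written foreachBatch MERGE code
dlt.apply_changes(
    target="customers",
    source="customers_cdc_feed",    # hypothetical source view/table in the pipeline
    keys=["customer_id"],           # merge keys
    sequence_by="event_timestamp",  # resolves out-of-order updates
    stored_as_scd_type=1,           # update in place; use 2 to keep history
)
```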

vishal_balaji
by New Contributor II
  • 458 Views
  • 2 replies
  • 1 kudos

Unable to access metrics from Driver node on localhost:4040

Greetings, I am trying to set up monitoring in Grafana for all my Databricks clusters. I have added two things as part of this. Under Compute > Configuration > Advanced > Spark > Spark Config, I have added spark.ui.prometheus.enabled true. Under init_scripts, I...

Latest Reply
szymon_dybczak
Esteemed Contributor III

Hi @vishal_balaji, you're following guides that were prepared for OSS Apache Spark. localhost won't work in this case because in Databricks all compute is cloud-based. Please follow the guide below on how to configure it properly on Databricks: Data...

1 More Reply
saurabh18cs
by Honored Contributor II
  • 424 Views
  • 4 replies
  • 3 kudos

Autoloader - File Notification Mode

Hello all, we have started consuming source messages/files via Auto Loader directory listing mode and want to convert this to file notification mode instead, so consumption can be faster with no more scanning of entire directories/folders. I ...
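
For reference, the switch itself is a single Auto Loader option; a minimal sketch with a hypothetical path, format, and schema location (the cloud-side queue and credential setup varies by provider):

```python
# Auto Loader in file notification mode instead of directory listing
df = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")                 # hypothetical format
    .option("cloudFiles.useNotifications", "true")       # switch from directory listing
    .option("cloudFiles.schemaLocation",
            "/Volumes/main/ingest/_schemas")             # hypothetical schema location
    .load("s3://my-bucket/landing/")                     # hypothetical path
)
```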

Latest Reply
saurabh18cs
Honored Contributor II

Hi @K_Anudeep, @szymon_dybczak, how do I understand a situation where 100 jobs are running in parallel with minimal latency needed? Does Auto Loader connect directly to the cloud queue service, or does Databricks store and manage detected files somewhere? ...

3 More Replies
