Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

Sunil_Patidar
by New Contributor II
  • 193 Views
  • 2 replies
  • 1 kudos

Unable to read from or write to Snowflake Open Catalog via Databricks

I have Snowflake Iceberg tables whose metadata is stored in Snowflake Open Catalog. I am trying to read these tables from the Open Catalog and write back to the Open Catalog using Databricks. I have explored the available documentation but haven’t bee...

Latest Reply
Louis_Frolio
Databricks Employee
  • 1 kudos

Greetings @Sunil_Patidar, Databricks and Snowflake can interoperate cleanly around Iceberg today, but how you do it matters. At a high level, interoperability works because both platforms meet at Apache Iceberg and the Iceberg REST Catalog API. Wh...
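For readers who want to try this, here is a minimal, hypothetical sketch of the generic OSS-Spark path through an Iceberg REST catalog; the catalog name, URI, credential, and table names are all placeholders, and it assumes the iceberg-spark-runtime jar is available (on Databricks, Unity Catalog's Iceberg REST federation is the managed alternative):

# Sketch only: generic Iceberg REST catalog wiring (all values are placeholders)
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .config("spark.sql.catalog.open_catalog", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.open_catalog.type", "rest")
    .config("spark.sql.catalog.open_catalog.uri", "https://<account>.snowflakecomputing.com/polaris/api/catalog")
    .config("spark.sql.catalog.open_catalog.credential", "<client_id>:<client_secret>")
    .config("spark.sql.catalog.open_catalog.warehouse", "<open_catalog_name>")
    .getOrCreate()
)

df = spark.table("open_catalog.my_schema.my_iceberg_table")     # read via the REST catalog
df.writeTo("open_catalog.my_schema.my_iceberg_table").append()  # write back through the same catalog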

1 More Replies
ciaran
by New Contributor
  • 49 Views
  • 1 reply
  • 0 kudos

Is GCP Workload Identity Federation supported for BigQuery connections in Azure Databricks?

I’m trying to set up a BigQuery connection in Azure Databricks (Unity Catalog / Lakehouse Federation) using GCP Workload Identity Federation (WIF) instead of a GCP service account key. Environment: Azure Databricks workspace; BigQuery query federation via...

Latest Reply
Hubert-Dudek
Databricks MVP
  • 0 kudos

I guess that's the only one accepted, as the docs say "Google service account key json".
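For reference, a minimal sketch of the documented key-based path, assuming Unity Catalog Lakehouse Federation; the connection, catalog, and project names are placeholders:

# Sketch only: BigQuery connection via service account key JSON (placeholders throughout)
spark.sql("""
CREATE CONNECTION IF NOT EXISTS bq_conn TYPE bigquery
OPTIONS (GoogleServiceAccountKeyJson '<service-account-key-json>')
""")
spark.sql("""
CREATE FOREIGN CATALOG IF NOT EXISTS bq_catalog
USING CONNECTION bq_conn
OPTIONS (project_id '<gcp-project-id>')
""")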

pavelhym
by New Contributor
  • 59 Views
  • 1 reply
  • 1 kudos

Usage of MLFlow models inside Streamlit app in Databricks

I have an issue with loading a registered MLflow model into a Streamlit app inside Databricks. This is the sample code used for model load:
import mlflow
from mlflow.tracking import MlflowClient
mlflow.set_tracking_uri("databricks")
mlflow.set_registry_uri...

Latest Reply
iyashk-DB
Databricks Employee
  • 1 kudos

Authentication context isn’t automatically available in Apps. Notebooks automatically inject workspace host and token for mlflow when you use mlflow.set_tracking_uri("databricks") and mlflow.set_registry_uri("databricks-uc"). In Databricks Apps, you ...
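A minimal sketch of supplying that context explicitly inside an App, assuming you provision a host and token yourself (both values below are placeholders, not an official Apps API):

# Sketch only: explicit MLflow auth inside a Databricks App
import os
import mlflow

os.environ["DATABRICKS_HOST"] = "https://<workspace-host>"         # placeholder
os.environ["DATABRICKS_TOKEN"] = "<token-provisioned-to-the-app>"  # placeholder

mlflow.set_tracking_uri("databricks")
mlflow.set_registry_uri("databricks-uc")
model = mlflow.pyfunc.load_model("models:/<catalog>.<schema>.<model>@champion")  # UC model by alias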

JothyGanesan
by New Contributor III
  • 87 Views
  • 2 replies
  • 4 kudos

Resolved! Vacuum on DLT

We are currently using DLT tables as our target tables. The tables are loaded by continuous job pipelines. Liquid clustering is enabled on the tables. Will VACUUM work on these tables while they are being loaded in continuous mode? How to run t...

Latest Reply
iyashk-DB
Databricks Employee
  • 4 kudos

VACUUM works fine on DLT tables running in continuous mode. DLT does automatic maintenance (OPTIMIZE + VACUUM) roughly every 24 hours if the pipeline has a maintenance cluster configured. Q: The liquid cluster is enabled in the tables. Will Vacuum wo...
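If you want to run it manually in addition to the automatic maintenance, a one-line sketch (table name and retention are placeholders; the usual retention-period safety check still applies):

# Sketch only: manual VACUUM on the DLT target table
spark.sql("VACUUM my_catalog.my_schema.my_dlt_table RETAIN 168 HOURS")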

1 More Replies
DylanStout
by Contributor
  • 225 Views
  • 4 replies
  • 1 kudos

Catalog tag filter error

When trying to filter in the catalog on "Tag", it throws an error that it failed to load values. The other filters do load. I have tried it with different computes, and I have a view that has a tag (as shown in the screenshot). I have the following privi...

[Screenshots attached]
Latest Reply
Advika
Community Manager
  • 1 kudos

Hello @DylanStout! Did the suggestions shared above help resolve your concern? If so, please consider marking the response as the accepted solution. If you found a different approach that worked, sharing it would be helpful for others in the community...

3 More Replies
TFV
by New Contributor
  • 108 Views
  • 1 reply
  • 1 kudos

Regression: Dashboard slicer paste now commits invalid filter values instead of searching

Hi Team, we appear to be experiencing a recent regression in the AI/BI dashboard filter slicer behaviour. Steps to reproduce:
  • Open a dashboard containing a single-select or multi-select filter slicer.
  • Click into the slicer’s text input.
  • Paste text from the...

Latest Reply
emma_s
Databricks Employee
  • 1 kudos

Hi Tim, I can't find any mention of this internally, but I suspect it will be related to this change: "Multi-select filter paste: Viewers can now copy a column of values from a spreadsheet and paste them into a multi-select filter." My recommendation w...

sher_1222
by New Contributor
  • 115 Views
  • 3 replies
  • 0 kudos

Data ingestion errors

I was trying to ingest data from a website into Databricks, but it shows a "Public DBFS is not enabled" message. Is there any other way to automate data ingestion into Databricks?

Latest Reply
saurabh18cs
Honored Contributor II
  • 0 kudos

Hi @sher_1222, yes, you can upload to cloud storage and then connect using Unity Catalog (Connect to cloud object storage using Unity Catalog - Azure Databricks | Microsoft Learn), and then use What is Auto Loader? | Databricks on AWS to automatically ing...
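As a concrete starting point, a minimal Auto Loader sketch; the volume paths, file format, and table name are placeholders:

# Sketch only: Auto Loader ingesting files landed in UC-governed storage
df = (spark.readStream
      .format("cloudFiles")
      .option("cloudFiles.format", "json")
      .option("cloudFiles.schemaLocation", "/Volumes/main/raw/_schemas/events")
      .load("/Volumes/main/raw/events/"))

(df.writeStream
   .option("checkpointLocation", "/Volumes/main/raw/_checkpoints/events")
   .trigger(availableNow=True)
   .toTable("main.bronze.events"))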

2 More Replies
dj4
by New Contributor
  • 150 Views
  • 3 replies
  • 1 kudos

Azure Databricks UI consuming way too much memory & laggy

This especially happens when the notebook is large, with many cells. Even if I clear all the outputs, scrolling the notebook is way too laggy. When I start running the code, the memory consumption is 3-4 GB minimum, even if I am not displaying any data/ta...

Latest Reply
emma_s
Databricks Employee
  • 1 kudos

Hi, these are the recommended troubleshooting steps we have: Troubleshooting & Immediate Workarounds. Browser recommendations: Use an incognito/private window to avoid interference from browser extensions/ad blockers. Monitor memory consumption; close...

2 More Replies
bek04
by New Contributor
  • 122 Views
  • 3 replies
  • 0 kudos

Serverless notebook DNS failure (gai error / name resolution)

I’m using a Databricks workspace on AWS (region: us-west-2). My Serverless notebook (CPU) cannot access any external URL; every outbound request fails at DNS resolution. Minimal test in a notebook:
import urllib.request
urllib.request.urlopen("https://...

Latest Reply
emma_s
Databricks Employee
  • 0 kudos

Hi, here are some troubleshooting steps: 1. Network Connectivity Configuration (NCC): Confirm that the correct NCC (such as ncc_public_internet) is attached specifically to Serverless compute, not just to SQL Warehouses or other resources. After making...
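Before changing any config, a quick in-notebook check (a sketch; pypi.org is just an example host) can confirm the failure really is at the DNS layer rather than at the HTTP layer:

# Sketch only: isolate DNS resolution from the HTTP layer
import socket
try:
    print(socket.getaddrinfo("pypi.org", 443))
except socket.gaierror as e:
    print("DNS resolution failed:", e)  # points at NCC/egress configuration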

2 More Replies
confused_dev
by New Contributor II
  • 43585 Views
  • 8 replies
  • 5 kudos

Python mocking dbutils in unittests

I am trying to write some unit tests using pytest, but I am coming across the problem of how to mock my dbutils method when dbutils isn't defined in my notebook. Is there a way to do this so that I can unit test individual functions that are uti...

Latest Reply
kenmyers-8451
Contributor
  • 5 kudos

If this helps anyone, here is how we do this: we rely on databricks_test for injecting dbutils into the notebooks that we're testing (which is a third-party package, mind you, and hasn't been updated in a while, but still works). And in our notebooks we put...
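For anyone who prefers to avoid a third-party package, a common alternative sketch is to pass dbutils in as a parameter and hand the test a MagicMock (the function and names below are hypothetical):

# Sketch only: dependency-injected dbutils mocked with unittest.mock
from unittest.mock import MagicMock

def read_secret(dbutils, scope, key):
    # function under test; receives dbutils instead of relying on the notebook global
    return dbutils.secrets.get(scope=scope, key=key)

def test_read_secret():
    fake_dbutils = MagicMock()
    fake_dbutils.secrets.get.return_value = "s3kr3t"
    assert read_secret(fake_dbutils, "my-scope", "my-key") == "s3kr3t"
    fake_dbutils.secrets.get.assert_called_once_with(scope="my-scope", key="my-key")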

7 More Replies
fundat
by New Contributor III
  • 123 Views
  • 4 replies
  • 2 kudos

Resolved! st_point is disabled or unsupported.

On my DLT pipeline:
  • I installed the Databricks Mosaic library
  • Photon is activated
  • I'm using a workspace premium tier
SELECT id, city_name, st_point(latitude, longitude) AS city_point FROM city_data;
st_point is disabled or unsupported. Co...

Latest Reply
emma_s
Databricks Employee
  • 2 kudos

DLT pipelines are still on runtime 16.4, which doesn't support st_point yet. See details here: https://learn.microsoft.com/en-us/azure/databricks/release-notes/dlt/ You should be able to use st_point in the normal SQL editor as long as the cluster...
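To verify, the same query can be run outside DLT on a runtime that already ships the ST functions (a sketch; the table name comes from the original post):

# Sketch only: st_point outside DLT, on a supported runtime
spark.sql("""
SELECT id, city_name, st_point(latitude, longitude) AS city_point
FROM city_data
""").show()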

3 More Replies
ismaelhenzel
by Contributor III
  • 143 Views
  • 1 reply
  • 0 kudos

Declarative Pipelines - Dynamic Overwrite

Regarding the limitations of declarative pipelines—specifically the inability to use replaceWhere—I discovered through testing that materialized views actually support dynamic overwrites. This handles several scenarios where replaceWhere would typica...

Latest Reply
omsingh
New Contributor III
  • 0 kudos

This is a really interesting find, and honestly not something most people expect from materialized views. Under the hood, MVs in Databricks declarative pipelines are still Delta tables. So when you set partitionOverwriteMode=dynamic and partition by a...
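For comparison, the classic dynamic partition overwrite on a plain Delta table looks like this sketch (table and column names are placeholders), which is effectively what the MV refresh appears to be doing here:

# Sketch only: dynamic partition overwrite on a Delta table
spark.conf.set("spark.sql.sources.partitionOverwriteMode", "dynamic")

(df.write
   .format("delta")
   .mode("overwrite")
   .option("partitionOverwriteMode", "dynamic")  # only touched partitions are replaced
   .saveAsTable("main.silver.events"))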

Ved88
by New Contributor
  • 79 Views
  • 2 replies
  • 0 kudos

Power BI VNet data gateway to Databricks using import mode

We are using the Power BI VNet data gateway with a Databricks data source connection in import mode. Databricks is behind a VNet. Refreshing the model works fine for 400 records, but larger volumes throw errors. I tried different ways, kind of increm...

Latest Reply
Ved88
New Contributor
  • 0 kudos

Hi @szymon_dybczak, thanks, but that is what we set when we make the Power BI Desktop model. I used this query only and made the semantic model in Power BI Desktop, then we published it to the Power BI service and ran the refresh in the web UI; there it is f...

1 More Replies
fkseki
by New Contributor III
  • 932 Views
  • 7 replies
  • 7 kudos

Resolved! List budget policies applying filter_by

I'm trying to list budget policies using the parameter "filter_by" to filter policies that start with "aaaa", but I'm getting an error "400 Bad Request": {'error_code': 'MALFORMED_REQUEST', 'message': "Could not parse request object: Expected 'START_OB...

Latest Reply
fkseki
New Contributor III
  • 7 kudos

Thanks for the reply, @szymon_dybczak and @lingareddy_Alva. I tried both approaches but neither was successful.
url = f'{account_url}/api/2.1/accounts/{account_id}/budget-policies'
filter_by_json = json.dumps({"policy_name": "aaaa"})
params = {"filter_by": ...

6 More Replies
ss_data_eng
by New Contributor
  • 1409 Views
  • 4 replies
  • 0 kudos

Using Lakehouse Federation for SQL Server with Serverless Compute

Hi, my team was able to create a foreign catalog that connects to a SQL Server instance hosted on an Azure VM; however, when trying to query the catalog, we cannot access it using serverless compute (or a serverless SQL warehouse). We have tried lookin...

Latest Reply
Ralf
New Contributor II
  • 0 kudos

I'm trying to get something similar to work: Lakehouse Federation for Oracle with SQL warehouse serverless. We are using Azure Databricks and our Oracle DB runs on-prem. I've been able to use classic compute to query the database, but now I'd like to...

3 More Replies
