Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

CHorton
by Visitor
  • 25 Views
  • 2 replies
  • 1 kudos

Calling a function with parameters via Spark ODBC driver

Hi All, I am having an issue with calling a Databricks SQL user-defined function with parameters from my client application using the Spark ODBC driver. I have been able to execute a straight SQL statement using parameters, e.g. SELECT * FROM Customer W...

Latest Reply
iyashk-DB
Databricks Employee
  • 1 kudos

Hi @CHorton, the Databricks SQL engine does not support positional (?) parameters inside SQL UDF calls. When Spark SQL parses GetCustomerData(?), the parameter is unresolved at analysis time, so you get [UNBOUND_SQL_PARAMETER]. This is not an ODBC bu...
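A minimal client-side sketch of the workaround, assuming the client talks to the warehouse through pyodbc and the UDF takes a single integer argument (the DSN name and the validation logic are placeholders): bind ? markers only in plain statements, and inline a validated literal for the UDF call.

import pyodbc

# Hypothetical DSN for a Databricks SQL warehouse via the Spark ODBC driver.
conn = pyodbc.connect("DSN=Databricks", autocommit=True)
cur = conn.cursor()

# Works: ? markers bind in a plain parameterized statement.
cur.execute("SELECT * FROM Customer WHERE CustomerId = ?", (42,))

# Fails with [UNBOUND_SQL_PARAMETER]: ? inside a SQL UDF call.
# cur.execute("SELECT GetCustomerData(?)", (42,))

# Workaround sketch: validate the value client-side, then inline it as a literal.
customer_id = 42
if not isinstance(customer_id, int):
    raise ValueError("customer_id must be an int")  # guards against SQL injection
cur.execute(f"SELECT GetCustomerData({customer_id})")
rows = cur.fetchall()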

1 More Replies
mukul1409
by Visitor
  • 78 Views
  • 2 replies
  • 1 kudos

Resolved! Iceberg interoperability between Databricks and external catalogs

I would like to understand the current approach for Iceberg interoperability in Databricks. Databricks supports Iceberg using Unity Catalog, but many teams also use Iceberg tables managed outside Databricks. Are there recommended patterns today for s...

Latest Reply
emma_s
Databricks Employee
  • 1 kudos

Hi, Databricks has just released into Public Preview the ability to federate connections to external Iceberg tables. This would be the recommended way to access data in these tables and treat them as foreign tables. See docs here https://docs.databr...

1 More Replies
hnnhhnnh
by New Contributor II
  • 40 Views
  • 1 replies
  • 0 kudos

How to handle type widening (int→bigint) in DLT streaming tables without dropping the table

Setup
Bronze source table (external to DLT, CDF & type widening enabled):
# Source table properties:
# delta.enableChangeDataFeed: "true"
# delta.enableDeletionVectors: "true"
# delta.enableTypeWidening: "true"
# delta.minReaderVersion: "3"
# delta.minWrite...

Latest Reply
mukul1409
Visitor
  • 0 kudos

Hi @hnnhhnnh, DLT streaming tables that use apply changes do not support widening the data type of key columns, such as changing an integer to a bigint, after the table is created. Even though Delta and Unity Catalog support type widening in general, DL...
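One pattern sometimes used to sidestep this (a sketch, not an officially documented fix; the table, column, and target names are placeholders): cast the key column to bigint in the view that feeds apply changes, so the streaming table is created with the wider type from the start.

import dlt
from pyspark.sql.functions import col

# Read the CDF stream from the bronze source and widen the key column up front.
@dlt.view(name="bronze_widened")
def bronze_widened():
    return (
        spark.readStream  # `spark` is provided by the DLT runtime
        .option("readChangeFeed", "true")
        .table("catalog.schema.bronze_source")
        .withColumn("id", col("id").cast("bigint"))
    )

dlt.create_streaming_table("silver_target")

dlt.apply_changes(
    target="silver_target",
    source="bronze_widened",
    keys=["id"],
    sequence_by=col("_commit_version"),
)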

Sunil_Patidar
by New Contributor II
  • 283 Views
  • 3 replies
  • 1 kudos

Unable to read from or write to Snowflake Open Catalog via Databricks

I have Snowflake Iceberg tables whose metadata is stored in Snowflake Open Catalog. I am trying to read these tables from the Open Catalog and write back to the Open Catalog using Databricks. I have explored the available documentation but haven’t bee...

Latest Reply
mukul1409
Visitor
  • 1 kudos

Databricks does not currently provide official support to read from or write to Snowflake Open Catalog. Although Snowflake Open Catalog is compatible with the Iceberg REST catalog and open source Spark can work with it, this integration is not suppor...
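For comparison, this is roughly what the (unsupported from the Databricks side) open-source Spark route looks like: a sketch of an Iceberg REST catalog configuration, with the endpoint, credentials, and warehouse values as placeholders to be taken from your Open Catalog account.

from pyspark.sql import SparkSession

# Sketch: point open-source Spark at an Iceberg REST catalog.
spark = (
    SparkSession.builder
    .config("spark.jars.packages",
            "org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.6.1")
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.open_cat", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.open_cat.type", "rest")
    .config("spark.sql.catalog.open_cat.uri", "https://<open-catalog-endpoint>/api/catalog")
    .config("spark.sql.catalog.open_cat.credential", "<client_id>:<client_secret>")
    .config("spark.sql.catalog.open_cat.warehouse", "<catalog_name>")
    .getOrCreate()
)

spark.sql("SELECT * FROM open_cat.db.my_iceberg_table").show()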

2 More Replies
Loinguyen318
by New Contributor II
  • 2452 Views
  • 4 replies
  • 0 kudos

Resolved! Public DBFS root is disabled in Databricks free edition

I am using a notebook to run a sample Spark job that writes a Delta table to DBFS on the Free Edition. However, I face an issue: I cannot access the public DBFS after the code executes. The Spark code is:
data = spark.range(0, 5)
data.write.format("d...

Latest Reply
mukul1409
Visitor
  • 0 kudos

Yes, use UC Volumes instead of DBFS. As Databricks moves toward a serverless architecture, DBFS access is increasingly restricted and is not intended for long-term or production usage, so UC Volumes are the better choice.
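A minimal sketch of the alternatives, assuming a Unity Catalog workspace; the catalog, schema, and volume names are placeholders:

data = spark.range(0, 5)

# Option 1: register the Delta table in Unity Catalog instead of writing to DBFS.
data.write.format("delta").mode("overwrite").saveAsTable("main.default.demo_table")

# Option 2: write plain files under a UC Volume path (Volumes are for file workloads).
data.write.mode("overwrite").parquet("/Volumes/main/default/my_volume/demo_parquet")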

3 More Replies
Yogesh_Verma_
by Contributor II
  • 74 Views
  • 1 replies
  • 4 kudos

Meta Ads is now a native data source in Databricks Lakeflow Connect

Meta Ads is now a native data source in Databricks. Databricks just announced a Meta Ads connector (Beta) powered by Lakeflow Connect, making it easy to ingest advertising data directly into Databricks: no custom APIs, no CSV exports, no brittle scripts...

Latest Reply
mukul1409
Visitor
  • 4 kudos

Great update: this is a big win for analytics teams. The new Meta Ads native connector (Beta) in Databricks, powered by Lakeflow Connect, significantly reduces the operational burden that typically comes with ad-tech ingestion. Excited to see Lakeflo...

Phani1
by Databricks MVP
  • 2086 Views
  • 2 replies
  • 1 kudos

Resolved! Databricks - Calling a dashboard from another dashboard

Hi Team, can we call a dashboard from another dashboard? An example screenshot is attached. The main dashboard has 3 buttons that point to 3 different dashboards, and clicking any of the buttons should redirect to the respective dashboard.

Latest Reply
thains
New Contributor III
  • 1 kudos

I would also like to see this feature added.

1 More Replies
ciaran
by New Contributor
  • 111 Views
  • 1 replies
  • 0 kudos

Is GCP Workload Identity Federation supported for BigQuery connections in Azure Databricks?

I’m trying to set up a BigQuery connection in Azure Databricks (Unity Catalog / Lakehouse Federation) using GCP Workload Identity Federation (WIF) instead of a GCP service account key.
Environment:
  • Azure Databricks workspace
  • BigQuery query federation via...

Latest Reply
Hubert-Dudek
Databricks MVP
  • 0 kudos

I guess that is the only option accepted, as the docs say "Google service account key json".

pavelhym
by New Contributor
  • 115 Views
  • 1 replies
  • 1 kudos

Usage of MLFlow models inside Streamlit app in Databricks

I have an issue with loading a registered MLflow model into a Streamlit app inside Databricks. This is the sample code used for the model load:
import mlflow
from mlflow.tracking import MlflowClient
mlflow.set_tracking_uri("databricks")
mlflow.set_registry_uri...

Latest Reply
iyashk-DB
Databricks Employee
  • 1 kudos

Authentication context isn’t automatically available in Apps. Notebooks automatically inject workspace host and token for mlflow when you use mlflow.set_tracking_uri("databricks") and mlflow.set_registry_uri("databricks-uc"). In Databricks Apps, you ...
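A sketch of what explicit configuration can look like inside an App, assuming you supply the host and token yourself (for example via the App's environment or a secret); the env var and model names are placeholders:

import os
import mlflow

# No injected notebook context in an App, so set credentials explicitly.
os.environ["DATABRICKS_HOST"] = "https://<workspace-url>"
os.environ["DATABRICKS_TOKEN"] = os.environ["MY_APP_TOKEN"]  # placeholder secret

mlflow.set_tracking_uri("databricks")
mlflow.set_registry_uri("databricks-uc")

# Placeholder Unity Catalog model name and version.
model = mlflow.pyfunc.load_model("models:/main.default.my_model/1")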

JothyGanesan
by New Contributor III
  • 155 Views
  • 2 replies
  • 4 kudos

Resolved! Vacuum on DLT

We are currently using DLT tables as our target tables. The tables are loaded by continuous job pipelines. Liquid clustering is enabled on the tables. Will VACUUM work on these tables while they are being loaded in continuous mode? How to run t...

Latest Reply
iyashk-DB
Databricks Employee
  • 4 kudos

VACUUM works fine on DLT tables running in continuous mode. DLT does automatic maintenance (OPTIMIZE + VACUUM) roughly every 24 hours if the pipeline has a maintenance cluster configured. Q: The liquid cluster is enabled in the tables. Will Vacuum wo...
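If you also want to run it manually outside the automatic maintenance window, a sketch (the table name is a placeholder; the default retention is 7 days):

# Run VACUUM against the DLT target table from a separate notebook or job.
spark.sql("VACUUM main.default.my_dlt_table")  # default 7-day retention

# Or with an explicit retention window, in hours:
spark.sql("VACUUM main.default.my_dlt_table RETAIN 168 HOURS")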

1 More Replies
DylanStout
by Contributor
  • 248 Views
  • 4 replies
  • 1 kudos

Catalog tag filter error

When trying to filter in the catalog on "Tag", it throws an error that it failed to load values. The other filters do load. I have tried it with different computes, and I have a view that has a tag (as shown in the screenshot). I have the following privi...

Latest Reply
Advika
Community Manager
  • 1 kudos

Hello @DylanStout! Did the suggestions shared above help resolve your concern? If so, please consider marking the response as the accepted solution. If you found a different approach that worked, sharing it would be helpful for others in the community...

3 More Replies
TFV
by New Contributor
  • 140 Views
  • 1 replies
  • 1 kudos

Regression: Dashboard slicer paste now commits invalid filter values instead of searching

Hi Team, we appear to be experiencing a recent regression in the AI/BI dashboard filter slicer behaviour.
Steps to reproduce:
  • Open a dashboard containing a single-select or multi-select filter slicer.
  • Click into the slicer’s text input.
  • Paste text from the...

Latest Reply
emma_s
Databricks Employee
  • 1 kudos

Hi Tim, I can't find any mention of this internally, but I suspect it will be related to this change: "Multi-select filter paste: Viewers can now copy a column of values from a spreadsheet and paste them into a multi-select filter." My recommendation w...

sher_1222
by New Contributor
  • 132 Views
  • 3 replies
  • 0 kudos

Data ingestion errors

I was going to ingest data from a website to Databricks, but it is showing a "Public DBFS is not enabled" message. Is there any other way to automate data ingestion to Databricks?

Latest Reply
saurabh18cs
Honored Contributor II
  • 0 kudos

Hi @sher_1222, yes, you can upload to cloud storage and then connect using Unity Catalog (Connect to cloud object storage using Unity Catalog - Azure Databricks | Microsoft Learn), and then use What is Auto Loader? | Databricks on AWS to automatically ing...
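A minimal Auto Loader sketch, assuming JSON files landing in a cloud storage path already connected through Unity Catalog; the paths and table name are placeholders:

# Incrementally ingest new files as they arrive with Auto Loader.
(
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/Volumes/main/default/chk/schema")
    .load("s3://my-bucket/landing/")
    .writeStream
    .option("checkpointLocation", "/Volumes/main/default/chk/ingest")
    .trigger(availableNow=True)  # process available files, then stop
    .toTable("main.default.raw_events")
)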

2 More Replies
dj4
by New Contributor
  • 178 Views
  • 3 replies
  • 1 kudos

Azure Databricks UI consuming way too much memory & laggy

This especially happens when the notebook is large with many cells. Even if I clear all the outputs, scrolling the notebook is way too laggy. When I start running the code, the memory consumption is 3-4 GB minimum, even if I am not displaying any data/ta...

Latest Reply
emma_s
Databricks Employee
  • 1 kudos

Hi, these are the recommended troubleshooting steps we have:
Troubleshooting & Immediate Workarounds
Browser Recommendations:
  • Use an incognito/private window to avoid interference from browser extensions/ad blockers.
  • Monitor memory consumption; close...

2 More Replies
bek04
by New Contributor
  • 156 Views
  • 3 replies
  • 0 kudos

Serverless notebook DNS failure (gai error / name resolution)

I’m using a Databricks workspace on AWS (region: us-west-2). My Serverless notebook (CPU) cannot access any external URL; every outbound request fails at DNS resolution. Minimal test in a notebook:
import urllib.request
urllib.request.urlopen("https://...

Latest Reply
emma_s
Databricks Employee
  • 0 kudos

Hi, here are some troubleshooting steps:
1. Network Connectivity Configuration (NCC)
Confirm that the correct NCC (such as ncc_public_internet) is attached specifically to Serverless compute, not just to SQL Warehouses or other resources. After making...
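Once the NCC change has propagated, a quick check that name resolution works again (a sketch; pypi.org is just an arbitrary external host):

import socket

# Raises socket.gaierror while DNS is still broken; returns address tuples once fixed.
print(socket.getaddrinfo("pypi.org", 443))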

2 More Replies
