Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

kALYAN5
by Visitor
  • 53 Views
  • 4 replies
  • 2 kudos

Service Principal

Can two service principals have the same name but unique IDs?

Latest Reply
emma_s
Databricks Employee
  • 2 kudos

Hi @kALYAN5,  Here is an explanation of why service principals share a name but IDs are unique: Names Are for Human Readability: Organizations use human-friendly names like "automation-batch-job" or "databricks-ci-cd" to make it easy for admins to re...
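For illustration, a minimal sketch (assuming the databricks-sdk Python client and a hypothetical display name) that lists service principals sharing a display name and prints their distinct IDs:

```
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()  # credentials picked up from env vars or a config profile

# Several service principals may share displayName, but id / application_id stay unique
for sp in w.service_principals.list(filter='displayName eq "automation-batch-job"'):
    print(sp.display_name, sp.id, sp.application_id)
```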

3 More Replies
bsr
by New Contributor
  • 19 Views
  • 1 reply
  • 0 kudos

DBR 17.3.3 introduced unexpected DEBUG logs from ThreadMonitor – how to disable?

After upgrading from DBR 17.3.2 to DBR 17.3.3, we started seeing a flood of DEBUG logs like this in job outputs:```DEBUG:ThreadMonitor:Logging python thread stack frames for MainThread and py4j threads: DEBUG:ThreadMonitor:Logging Thread-8 (run) stac...

Latest Reply
iyashk-DB
Databricks Employee
  • 0 kudos

Hi @bsr, there was some internal discussion on this, and it turns out that these DEBUG thread-dump lines from the ThreadMonitor started leaking to stderr/job output due to a Python logger misconfiguration introduced in the 17.3.3 branch. Th...
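Until that fix ships, a minimal stop-gap sketch (assuming the emitting logger is actually named "ThreadMonitor", as the log prefix suggests) to silence it at the top of the job or notebook:

```
import logging

# Assumption: the logger name matches the "ThreadMonitor" prefix seen in the output
monitor_logger = logging.getLogger("ThreadMonitor")
monitor_logger.setLevel(logging.WARNING)   # drop DEBUG/INFO records
monitor_logger.propagate = False           # keep them out of the root handlers too
```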

Ligaya
by New Contributor II
  • 57264 Views
  • 7 replies
  • 2 kudos

ValueError: not enough values to unpack (expected 2, got 1)

Code: Writer.jdbc_writer("Economy", economy, conf=CONF.MSSQL.to_dict(), modified_by=JOB_ID['Economy']). The problem arises when I try to run the code in the specified Databricks notebook: an error of "ValueError: not enough values to unpack (expected 2, ...

Latest Reply
mukul1409
New Contributor
  • 2 kudos

The error happens because the function expects the table name to include both schema and table separated by a dot. Inside the function it splits the table name using a dot and tries to assign two values. When you pass only Economy, the split returns ...
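A minimal sketch of that failure mode (hypothetical schema/table names):

```
table_name = "Economy"
# schema, table = table_name.split(".")  # ValueError: not enough values to unpack (expected 2, got 1)

table_name = "dbo.Economy"               # schema-qualified name
schema, table = table_name.split(".")    # two values, unpacking succeeds
print(schema, table)                     # dbo Economy
```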

6 More Replies
ripa1
by New Contributor
  • 133 Views
  • 4 replies
  • 4 kudos

Has anyone gotten this working? Federating Snowflake-managed Iceberg tables into Azure Databricks

I'm federating Snowflake-managed Iceberg tables into Azure Databricks Unity Catalog to query the same data from both platforms without copying it. I am getting a weird error message when querying the table from Databricks, and I have tried to put it all nicely in...

Data Engineering
azure
Iceberg
snowflake
unity-catalog
Latest Reply
ripa1
New Contributor
  • 4 kudos

Thanks Hubert. I did check the Iceberg metadata location and Databricks can list the files, but the issue is that Snowflake’s Iceberg metadata.json contains paths like abfss://…@<acct>.blob.core.windows.net/..., and on UC Serverless Databricks then t...

3 More Replies
Askenm
by New Contributor
  • 1113 Views
  • 6 replies
  • 4 kudos

Docker tab missing in create compute

I am running Databricks Premium and looking to create a compute running conda. It seems that the best way to do this is to boot the compute from a Docker image. However, in ```create_compute > advanced``` I cannot see the Docker option, nor ca...

Data Engineering
conda
Docker
Latest Reply
mukul1409
New Contributor
  • 4 kudos

Hi @Askenm In Databricks Premium, the Docker option for custom images is not available on all compute types and is not controlled by user level permissions. Custom Docker images are only supported on Databricks clusters that use the legacy VM based c...
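For reference, a minimal sketch (assuming the databricks-sdk, Databricks Container Services enabled on the workspace, and hypothetical image/node-type values) of creating a classic VM-based cluster with a custom Docker image via the API instead of the UI:

```
from databricks.sdk import WorkspaceClient
from databricks.sdk.service.compute import DockerImage

w = WorkspaceClient()
cluster = w.clusters.create(
    cluster_name="conda-docker-demo",                 # hypothetical
    spark_version="15.4.x-scala2.12",                 # a classic (non-serverless) runtime
    node_type_id="Standard_DS3_v2",                   # hypothetical Azure node type
    num_workers=1,
    docker_image=DockerImage(url="myregistry.azurecr.io/conda-runtime:latest"),
).result()
print(cluster.cluster_id)
```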

5 More Replies
CHorton
by New Contributor
  • 127 Views
  • 3 replies
  • 2 kudos

Resolved! Calling a function with parameters via Spark ODBC driver

Hi All, I am having an issue with calling a Databricks SQL user-defined function with parameters from my client application using the Spark ODBC driver. I have been able to execute a straight SQL statement using parameters, e.g. SELECT * FROM Customer W...

Latest Reply
iyashk-DB
Databricks Employee
  • 2 kudos

Hi @CHorton The Databricks SQL engine does not support positional (?) parameters inside SQL UDF calls.  When Spark SQL parses GetCustomerData(?), the parameter is unresolved at analysis time, so you get [UNBOUND_SQL_PARAMETER]. This is not an ODBC bu...
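The sketch below (assuming pyodbc, a hypothetical DSN, and server-side parameter binding) contrasts the working and failing cases and shows one possible, unverified workaround via SQL session variables:

```
import pyodbc

conn = pyodbc.connect("DSN=Databricks", autocommit=True)  # hypothetical DSN
cur = conn.cursor()

# Works: positional parameter in an ordinary predicate
cur.execute("SELECT * FROM Customer WHERE CustomerID = ?", (42,))

# Fails with [UNBOUND_SQL_PARAMETER]: parameter inside the UDF call itself
# cur.execute("SELECT * FROM GetCustomerData(?)", (42,))

# Possible workaround (unverified): bind into a session variable, then call the UDF
cur.execute("DECLARE OR REPLACE VARIABLE cust_id INT")
cur.execute("SET VAR cust_id = ?", (42,))
cur.execute("SELECT * FROM GetCustomerData(cust_id)")
```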

2 More Replies
Harun
by Honored Contributor
  • 12650 Views
  • 2 replies
  • 3 kudos

How to change the number of executors instances in databricks

I know that Databricks runs one executor per worker node. Can I change the number of executors by adding the parameter (spark.executor.instances) in the cluster advanced options? And can I pass this parameter when I schedule a task, so that particular task wi...

Latest Reply
RandiMacGyver
New Contributor II
  • 3 kudos

In Databricks, the executor model is largely managed by the platform itself. On Databricks clusters, each worker node typically runs a single Spark executor, and this behavior is intentional.
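If you want to confirm what the platform has configured, a small sketch to run in a notebook (where spark is predefined) on a classic cluster:

```
sc = spark.sparkContext

# Usually unset on Databricks, because one executor per worker node is the default model
print(sc.getConf().get("spark.executor.instances", "not set"))

# Parallelism instead scales with worker count and cores per worker
print("defaultParallelism:", sc.defaultParallelism)
```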

1 More Replies
liquibricks
by Contributor
  • 105 Views
  • 3 replies
  • 3 kudos

Resolved! Spark version errors in "Build an ETL pipeline with Lakeflow Spark Declarative Pipelines"

I'm trying to define a job for a pipeline using the Asset Bundle Python SDK. I created the pipeline first (using the SDK) and i'm now trying to add the Job. The DAB validates and deploys successfully, but when I run the Job i get an error: UNAUTHORIZ...

Latest Reply
mukul1409
New Contributor
  • 3 kudos

This happens because the job is not actually linked to the deployed pipeline and the pipeline id is None at runtime. When using Asset Bundles, the pipeline id is only resolved after deployment, so referencing my_pipeline.id in code does not work. Ins...
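A minimal sketch of that idea, shown as the resource mapping the bundle expects (hypothetical names): reference the pipeline through bundle interpolation rather than a Python-side .id attribute:

```
# Hypothetical job resource: pipeline_id is resolved by the bundle at deploy time
job_resource = {
    "name": "run_my_pipeline",
    "tasks": [
        {
            "task_key": "refresh",
            "pipeline_task": {
                # Interpolated after deployment, instead of my_pipeline.id (None at build time)
                "pipeline_id": "${resources.pipelines.my_pipeline.id}",
            },
        }
    ],
}
```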

2 More Replies
mukul1409
by New Contributor
  • 149 Views
  • 3 replies
  • 1 kudos

Resolved! Iceberg interoperability between Databricks and external catalogs

I would like to understand the current approach for Iceberg interoperability in Databricks. Databricks supports Iceberg using Unity Catalog, but many teams also use Iceberg tables managed outside Databricks. Are there recommended patterns today for s...

Latest Reply
Yogesh_Verma_
Contributor II
  • 1 kudos

Great

2 More Replies
hnnhhnnh
by New Contributor II
  • 84 Views
  • 1 reply
  • 0 kudos

How to handle type widening (int→bigint) in DLT streaming tables without dropping the table

Setup: Bronze source table (external to DLT, CDF & type widening enabled): # Source table properties: # delta.enableChangeDataFeed: "true" # delta.enableDeletionVectors: "true" # delta.enableTypeWidening: "true" # delta.minReaderVersion: "3" # delta.minWrite...

Latest Reply
mukul1409
New Contributor
  • 0 kudos

Hi @hnnhhnnh DLT streaming tables that use apply changes do not support widening the data type of key columns such as changing an integer to a bigint after the table is created. Even though Delta and Unity Catalog support type widening in general, DL...
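One possible pattern, sketched with hypothetical table and column names (not an official recommendation): widen the key upstream of apply_changes by casting in the source view, then run a full refresh of the streaming table:

```
import dlt
from pyspark.sql import functions as F

@dlt.view
def bronze_cdf_widened():
    # Hypothetical source; cast the key to bigint before it reaches apply_changes
    return (
        spark.readStream
        .option("readChangeFeed", "true")
        .table("main.bronze.orders")
        .withColumn("order_id", F.col("order_id").cast("bigint"))
    )

dlt.create_streaming_table("silver_orders")

dlt.apply_changes(
    target="silver_orders",
    source="bronze_cdf_widened",
    keys=["order_id"],
    sequence_by=F.col("_commit_version"),
)
```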

Sunil_Patidar
by New Contributor II
  • 313 Views
  • 3 replies
  • 2 kudos

Unable to read from or write to Snowflake Open Catalog via Databricks

I have Snowflake Iceberg tables whose metadata is stored in Snowflake Open Catalog. I am trying to read these tables from the Open Catalog and write back to the Open Catalog using Databricks. I have explored the available documentation but haven’t bee...

Latest Reply
mukul1409
New Contributor
  • 2 kudos

Databricks does not currently provide official support to read from or write to Snowflake Open Catalog. Although Snowflake Open Catalog is compatible with the Iceberg REST catalog and open source Spark can work with it, this integration is not suppor...
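For context, the open-source Spark route mentioned above typically looks like this sketch (Iceberg Spark runtime jar on the classpath; the URI, credential, and warehouse values are placeholders for your Open Catalog setup):

```
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.open_catalog", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.open_catalog.type", "rest")
    .config("spark.sql.catalog.open_catalog.uri", "https://<account>.snowflakecomputing.com/polaris/api/catalog")
    .config("spark.sql.catalog.open_catalog.credential", "<client_id>:<client_secret>")
    .config("spark.sql.catalog.open_catalog.warehouse", "<open_catalog_name>")
    .getOrCreate()
)

spark.sql("SELECT * FROM open_catalog.my_schema.my_iceberg_table LIMIT 10").show()
```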

2 More Replies
Loinguyen318
by New Contributor II
  • 2479 Views
  • 4 replies
  • 0 kudos

Resolved! Public DBFS root is disabled in Databricks free edition

I am using a notebook to run a sample Spark job that writes a Delta table to DBFS on the Free edition. However, I face an issue: I cannot access the public DBFS after the code executes. The Spark code is: data = spark.range(0, 5); data.write.format("d...

Latest Reply
mukul1409
New Contributor
  • 0 kudos

Yes, use UC Volumes instead of DBFS. As Databricks moves toward a serverless architecture, DBFS access is being increasingly restricted and is not intended for long-term or production usage. UC Volumes are a better choice than DBFS.
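Building on that, a minimal sketch (assuming the main.default schema and a hypothetical volume are writable in your workspace): keep Delta data in a managed Unity Catalog table and use a volume path for plain files:

```
data = spark.range(0, 5)

# Delta data is best stored as a managed Unity Catalog table rather than a DBFS path
data.write.mode("overwrite").saveAsTable("main.default.sample_delta")
spark.table("main.default.sample_delta").show()

# Plain files (CSV, JSON, ...) can go to a UC volume path instead of dbfs:/
data.toPandas().to_csv("/Volumes/main/default/demo/sample.csv", index=False)  # hypothetical volume
```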

3 More Replies
Phani1
by Databricks MVP
  • 2087 Views
  • 2 replies
  • 1 kudos

Resolved! Databricks - Calling a dashboard from another dashboard

Hi Team, can we call a dashboard from another dashboard? An example screenshot is attached. The main dashboard has 3 buttons that point to 3 different dashboards, and clicking any of the buttons should redirect to the respective dashboard.

Latest Reply
thains
New Contributor III
  • 1 kudos

I would also like to see this feature added.

1 More Replies
ciaran
by New Contributor
  • 125 Views
  • 1 reply
  • 0 kudos

Is GCP Workload Identity Federation supported for BigQuery connections in Azure Databricks?

I’m trying to set up a BigQuery connection in Azure Databricks (Unity Catalog / Lakehouse Federation) using GCP Workload Identity Federation (WIF) instead of a GCP service account key. Environment: Azure Databricks workspace; BigQuery query federation via...

Latest Reply
Hubert-Dudek
Databricks MVP
  • 0 kudos

I guess that is the only one accepted, as the docs say "Google service account key json".
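For reference, the documented key-based setup looks roughly like this sketch (the option name is inferred from the "Google service account key json" wording and should be treated as an assumption; check the current docs):

```
# Sketch only: run from a notebook with privileges to create UC connections
spark.sql("""
  CREATE CONNECTION IF NOT EXISTS bq_conn
  TYPE bigquery
  OPTIONS (
    GoogleServiceAccountKeyJson '<contents of the service-account key JSON file>'
  )
""")
```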

