Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

hdu
by New Contributor II
  • 931 Views
  • 1 replies
  • 1 kudos

Resolved! Change cluster owner API call failed

I am trying to change a cluster's owner using an API call, but I get the following error. I am positive that host, cluster_id and owner_username are all correct. The error message says "No API found". Is this related to the compute I am using, or something else...

Latest Reply
Brahmareddy
Esteemed Contributor
  • 1 kudos

Hi hdu, how are you doing today? As per my understanding, it sounds like you're really close! That "No API found" error usually means either the wrong API endpoint is being used, or the cluster type doesn't support ownership changes—for example, shar...

  • 1 kudos
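For reference, a minimal sketch of the kind of call the thread describes, assuming the clusters/change-owner endpoint of the Clusters API 2.1 and token authentication; the workspace URL, token, cluster ID, and username below are placeholders:

import requests

# Placeholders -- substitute your own workspace URL, personal access token,
# cluster ID, and target user.
HOST = "https://adb-1234567890123456.7.azuredatabricks.net"
TOKEN = "dapi-..."

resp = requests.post(
    f"{HOST}/api/2.1/clusters/change-owner",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"cluster_id": "0123-456789-abcdef12", "owner_username": "new.owner@example.com"},
)
# A 404 "No API found" response here usually points at a wrong host or API path,
# which is what the reply above suggests checking first.
resp.raise_for_status()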
Shivap
by New Contributor III
  • 1665 Views
  • 4 replies
  • 3 kudos

What's the recommended way of creating tables in Databricks with unity catalog (External/Managed)

I have Databricks with Unity Catalog enabled and created an external ADLS location. When I create the catalog/schema it uses the external location. When I try to create the table it uses the external location, but they are managed tables. What's the r...

Latest Reply
Brahmareddy
Esteemed Contributor
  • 3 kudos

Hi Shivap, how are you doing today? As per my understanding, in Unity Catalog, if you want to create an external table, you just need to make sure the external location is registered and approved first. Even though you're specifying a path with LOCAT...

  • 3 kudos
3 More Replies
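A minimal sketch of the distinction the reply describes, run from a Databricks notebook where `spark` is the provided SparkSession; the catalog, schema, table names, and ABFSS path are placeholders, and the path is assumed to fall under an external location that is already registered in Unity Catalog:

# Specifying LOCATION registers the table as an external table.
spark.sql("""
    CREATE TABLE IF NOT EXISTS main.sales.orders_ext (
        order_id BIGINT,
        order_ts TIMESTAMP
    )
    USING DELTA
    LOCATION 'abfss://data@mystorageacct.dfs.core.windows.net/sales/orders'
""")

# Omitting LOCATION creates a managed table, stored under the schema's managed
# storage -- which can itself sit on an external location, matching the behavior
# described in the post.
spark.sql("""
    CREATE TABLE IF NOT EXISTS main.sales.orders_managed (
        order_id BIGINT,
        order_ts TIMESTAMP
    )
    USING DELTA
""")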
bidek56
by Contributor
  • 2627 Views
  • 8 replies
  • 2 kudos

Resolved! When will DB release runtime with Scala 2.13

When will DB release runtime with Scala 2.13? Thx

Latest Reply
JoseSoto
New Contributor III
  • 2 kudos

Spark 4 is coming and it's only going to support Scala 2.13, so a Databricks Runtime with Spark 3.5.x and Scala 2.13 should be released soonish.

  • 2 kudos
7 More Replies
samye760
by New Contributor II
  • 3262 Views
  • 1 replies
  • 1 kudos

Job Retry Wait Policy and Cluster Shutdown

Hi all, I have a Databricks Workflow job in which the final task makes an external API call. Sometimes this API will be overloaded and the call will fail. In the spirit of automation, I want this task to retry the call an hour later if it fails in the...

Data Engineering
clusters
jobs
retries
Workflows
Latest Reply
rmartinezdezaya
New Contributor II
  • 1 kudos

What about this? Any reply? Any alternative? I'm facing the same issue.

  • 1 kudos
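Since the thread never got an answer on the wait policy itself, here is a hedged sketch of how an hour-long retry wait can be expressed at the task level with Jobs API 2.1 retry settings; the task key and values are illustrative only, and whether the job cluster stays up between attempts is the part the thread leaves open:

# Jobs API 2.1 task fragment (as a Python dict) with a retry wait of roughly one hour.
task_settings = {
    "task_key": "call_external_api",
    "max_retries": 3,
    "min_retry_interval_millis": 60 * 60 * 1000,  # wait about one hour between attempts
    "retry_on_timeout": True,
}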
Jennifer
by New Contributor III
  • 1905 Views
  • 6 replies
  • 0 kudos

Can external tables be created backed by current cloud files without ingesting files in Databricks?

Hi, we have a huge amount of Parquet files in S3 with the path pattern <bucket>/<customer>/yyyy/mm/dd/hh/.*.parquet. The question is: can I create an external table in Unity Catalog from this external location without actually ingesting the files? Like wha...

Latest Reply
Data_Mavericks
New Contributor III
  • 0 kudos

I think the issue is that you are trying to create a DELTA table in Unity Catalog from a Parquet source without converting it to Delta format first, as Unity Catalog will not allow a Delta table to be created in a non-empty location. Since you want t...

  • 0 kudos
5 More Replies
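A minimal sketch of the approach the reply points to: register the existing Parquet files as a non-Delta external table so nothing is ingested or converted. Run from a Databricks notebook where `spark` is the provided SparkSession; the catalog, schema, table name, and S3 path are placeholders:

# Reads the Parquet files in place; no copy and no conversion to Delta is performed.
spark.sql("""
    CREATE TABLE IF NOT EXISTS main.raw.customer_events
    USING PARQUET
    LOCATION 's3://my-bucket/customer-a/'
""")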
Rakesh007
by New Contributor II
  • 1862 Views
  • 3 replies
  • 0 kudos

Maven library installation issue on 15.4 LTS

Recently I upgraded from the 10.4 LTS Databricks Runtime version to 15.4 LTS. While installing a Maven library I was facing an issue like: "Library installation attempted on the driver node of cluster 0415-115331-dune977 and failed. Library resolution fa...

Latest Reply
User16611530679
Databricks Employee
  • 0 kudos

Hi @Rakesh007, good day! This seems to be a compatibility issue with the Apache Spark version, as DBR 15.4 LTS runs Spark 3.5.0. Please try installing the version below and let us know how it goes. Version: com.crealytics:spark-excel_2.12:3.5.0_0.20.3...

  • 0 kudos
2 More Replies
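For completeness, a hedged sketch of installing a Maven library on a cluster through the Libraries API; the workspace URL and token are placeholders, and the Maven coordinate must be completed with the full version string from the reply (it is truncated above):

import requests

HOST = "https://adb-1234567890123456.7.azuredatabricks.net"  # placeholder workspace URL
TOKEN = "dapi-..."  # placeholder personal access token

requests.post(
    f"{HOST}/api/2.0/libraries/install",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "cluster_id": "0415-115331-dune977",
        "libraries": [{"maven": {"coordinates": "com.crealytics:spark-excel_2.12:<full-version>"}}],
    },
).raise_for_status()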
Brad
by Contributor II
  • 7259 Views
  • 2 replies
  • 0 kudos

Why "rror: Invalid access to Org: xxx"

Hi team, I installed the Databricks CLI and ran "databricks auth login --profile xxx" successfully. I can also connect from VS Code to Databricks. "databricks clusters list -p xxx" also works. But when I tried to run "databricks bundle validate" I got "Error:...

Latest Reply
swhite
New Contributor II
  • 0 kudos

I just ran into this issue (in Azure Databricks) and found that it was caused by an incorrect `host` value specified in my databricks.yml file:

targets:
  dev:
    default: true
    mode: production
    workspace:
      host: https://adb-<workspace-id...

  • 0 kudos
1 More Replies
afisl
by New Contributor II
  • 15628 Views
  • 8 replies
  • 5 kudos

Resolved! Apply unitycatalog tags programmatically

Hello, I'm interested in the "Tags" feature for columns/schemas/tables in Unity Catalog (described here: https://learn.microsoft.com/en-us/azure/databricks/data-governance/unity-catalog/tags). I've been able to play with them by hand and would now lik...

Data Engineering
tags
unitycatalog
Latest Reply
Jiri_Koutny
New Contributor III
  • 5 kudos

Hi, running ALTER TABLE SET TAGS works on views too!

  • 5 kudos
7 More Replies
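A minimal sketch of applying tags programmatically with SQL from a notebook where `spark` is the provided SparkSession; the three-level names, column, and tag values are placeholders:

# Table-level tags.
spark.sql("ALTER TABLE main.sales.orders SET TAGS ('owner' = 'data-eng', 'contains_pii' = 'false')")

# The same statement form works for views, as the latest reply notes.
spark.sql("ALTER VIEW main.sales.orders_v SET TAGS ('owner' = 'data-eng')")

# Column-level tags go through ALTER COLUMN.
spark.sql("ALTER TABLE main.sales.orders ALTER COLUMN customer_email SET TAGS ('contains_pii' = 'true')")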
Sadam97
by New Contributor III
  • 887 Views
  • 3 replies
  • 0 kudos

GCE cluster chokes the secret api server.

Hi, we upgraded the GKE cluster to a GCE cluster as per the Databricks documentation. It works fine with one or two notebooks in a job. Our production job has more than 40 notebooks, and each notebook accesses the secret API, and it seems like the secret API server ...

Latest Reply
Alberto_Umana
Databricks Employee
  • 0 kudos

Hi @Sadam97, this looks to be a known issue with NAT and GKE. I will share more details soon, and we'll follow up offline.

  • 0 kudos
2 More Replies
DataGeek_JT
by New Contributor II
  • 4014 Views
  • 4 replies
  • 4 kudos

Is it possible to use Liquid Clustering on Delta Live Tables / Materialised Views?

Is it possible to use Liquid Clustering on Delta Live Tables? If it is available what is the Python syntax for adding liquid clustering to a Delta Live Table / Materialised view please? 

Latest Reply
surajitDE
New Contributor III
  • 4 kudos

@dlt.table(
    name=table_name,
    comment="just_testing",
    table_properties={"quality": "gold", "mergeSchema": "true"},
    cluster_by=["test_id", "find_date"],  # Optimizes for queries filtering on these columns
)
def testing_table():
    return create_testing_table(df_fin...

  • 4 kudos
3 More Replies
IliaSinev
by New Contributor II
  • 1179 Views
  • 2 replies
  • 0 kudos

Access mode for pool compute

Is there a way to set Access Mode: Shared on pool instances, similar to All Purpose or Job clusters? We are getting an error reading from a table with masking set up on a column: "Failed to acquire a SAS token for list on /schema1/table1/_delta_log due...

Latest Reply
IliaSinev
New Contributor II
  • 0 kudos

Hi @Brahmareddy, thanks for the reply. It seems that a higher Runtime version could help: https://learn.microsoft.com/en-us/azure/databricks/compute/access-mode-limitations#fine-grained-access-control-limitations-for-unity-catalog-dedicated-access-mode I...

  • 0 kudos
1 More Replies
chris_y_1e
by New Contributor II
  • 4118 Views
  • 5 replies
  • 0 kudos

Self-joins are blocked on remote tables

In our production Databricks workflow, we have been getting this error since yesterday in one of the steps: "org.apache.spark.SparkException: Self-joins are blocked on remote tables". We haven't changed our workflow or made any configurations for the data...

Latest Reply
chris_y_1e
New Contributor II
  • 0 kudos

@TomRenish Yeah, we fixed it by changing it to use a shared compute. It is called "USER_ISOLATION" in the `job.yaml` file:

data_security_mode: USER_ISOLATION

  • 0 kudos
4 More Replies
Upendra_Dwivedi
by Contributor
  • 783 Views
  • 1 replies
  • 0 kudos

Databricks-Sql-Connector

Hi, I am connecting to a Databricks SQL warehouse using VS Code and I am running the following command:

import os
from databricks import sql
host = 'adb-xxxxxxxxxxx.xx.azuredatabricks.net'
http_path = '/sql/1.0/warehouses/xxxxxxxxxxxxxx'
access_token = 'dapib...

Latest Reply
User16502773013
Databricks Employee
  • 0 kudos

Hello @Upendra_Dwivedi, this is potentially a missing package in your local Python setup. Kindly check the troubleshooting steps here and let me know. If that didn't work, please share the output of the following commands: python ...

  • 0 kudos
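For context, a minimal end-to-end sketch of the databricks-sql-connector usage the post describes (installed with pip install databricks-sql-connector); the hostname, HTTP path, and token are placeholders:

from databricks import sql

with sql.connect(
    server_hostname="adb-xxxxxxxxxxx.xx.azuredatabricks.net",
    http_path="/sql/1.0/warehouses/xxxxxxxxxxxxxx",
    access_token="dapi-...",
) as connection:
    with connection.cursor() as cursor:
        cursor.execute("SELECT current_catalog(), current_schema()")
        print(cursor.fetchall())

If the import itself fails, that points at the local environment (the missing-package scenario the reply mentions) rather than at the warehouse.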
Abser786
by New Contributor II
  • 1346 Views
  • 1 replies
  • 0 kudos

enable dynamic resource allocation on job cluster

I have a Databricks job with two tasks that will run either alone or both in parallel (controlled by an if/else conditional task). When they run in parallel, one task runs for a long time, but the same task finishes quickly when it runs alone. Particularly ...

Latest Reply
User16502773013
Databricks Employee
  • 0 kudos

Hello @Abser786, there is a difference between dynamic resource allocation and the scheduler policy. Dynamic resource allocation means getting more compute as needed if the current compute is fully consumed; this can be achieved by the autoscaling feature/c...

  • 0 kudos
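As a hedged illustration of the autoscaling route the reply mentions, here is a job cluster spec expressed as a Jobs API 2.1 fragment; the DBR version, node type, and worker counts are illustrative placeholders:

# A shared autoscaling job cluster that both tasks can reference by job_cluster_key.
job_cluster = {
    "job_cluster_key": "shared_autoscaling_cluster",
    "new_cluster": {
        "spark_version": "15.4.x-scala2.12",  # placeholder DBR version
        "node_type_id": "Standard_DS3_v2",  # placeholder node type
        "autoscale": {"min_workers": 2, "max_workers": 8},  # scale out when both tasks run
    },
}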
filipniziol
by Esteemed Contributor
  • 5419 Views
  • 3 replies
  • 0 kudos

Any known issue with interactive Shared Cluster Driver Memory Cleanup

I am experiencing memory leaks on a Standard (formerly shared) interactive cluster:
1. We run jobs regularly on the cluster
2. After each job completes, driver memory usage continues to increase, suggesting resources aren't fully released
3. Eventually...

Latest Reply
Alberto_Umana
Databricks Employee
  • 0 kudos

Hello team, I'll check internally if any known issue has been reported.

  • 0 kudos
2 More Replies
