- 21393 Views
- 9 replies
- 9 kudos
I have a notebook that runs daily and occasionally fails with the error: "Run result unavailable: job failed with error message Unexpected failure while waiting for the cluster Some((xxxxxxxxxxxxxxx)) to be ready Some(: Cluster xxxxxxxxxxxxxxxx is in un...
Latest Reply
Cluster 'xxxxxxx' was terminated. Reason: WORKER_SETUP_FAILURE (SERVICE_FAULT). Parameters: databricks_error_message: DBFS Daemon is not reachable., gcp_error_message: Unable to reach the colocated DBFS Daemon. Can anyone help with how we can resolve thi...
8 More Replies
by tanjil • New Contributor III
- 16633 Views
- 9 replies
- 6 kudos
Hello, I am trying to download lists from SharePoint into a pandas dataframe, but I cannot retrieve any information successfully. I have attempted many solutions mentioned on Stack Overflow. Below is one of those attempts: # https://pypi.org/project/sha...
Latest Reply
The error "<urlopen error [Errno -2] Name or service not known>" suggests that there's an issue with the server URL or network connectivity. Double-check the server URL to ensure it's correct and accessible. Also, verify that your network connection ...
8 More Replies
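As the reply notes, `Errno -2` ("Name or service not known") is a DNS resolution failure that happens before any SharePoint authentication. A minimal pre-flight check, independent of any SharePoint library (the URLs below are deliberately unresolvable placeholders):

```python
# Check whether a URL's hostname resolves via DNS before attempting the
# SharePoint call. A failure here means the server URL or network/DNS
# configuration is the problem, not credentials or the library.
import socket
from urllib.parse import urlparse

def host_resolves(url: str) -> bool:
    """Return True if the URL's hostname resolves via DNS on port 443."""
    host = urlparse(url).hostname
    if not host:
        return False
    try:
        socket.getaddrinfo(host, 443)
        return True
    except socket.gaierror:
        return False

# The .invalid TLD is reserved and never resolves, so this reproduces the error:
print(host_resolves("https://nonexistent-tenant.invalid/sites/x"))
```

If this returns `False` for your real tenant URL, fix the hostname or the cluster's network/DNS setup before debugging the SharePoint client itself.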
- 3361 Views
- 5 replies
- 0 kudos
I get the following error message on the attempt to use SQL Warehouse (Serverless) compute with Materialized Views (a simple interaction, e.g. DML, UI sample lookup). The MVs are created off the back of Federated Tables (Postgresql), MVs are created ...
Latest Reply
Hey, to clarify, as I think I'm potentially hitting unintended Databricks "functionality": the Materialised Views are managed by a DLT pipeline, which was deployed with DABs off a CI/CD pipeline. The DLT pipeline runs a notebook with Python code creating MVs dynami...
4 More Replies
- 2211 Views
- 2 replies
- 0 kudos
I'm trying to create an ETL framework on Delta Live Tables, basically using the same pipeline for all the transformations from bronze to silver to gold. This works absolutely fine when I hard-code the tables and the SQL transformations as an array wi...
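One common reason the dynamic version of this pattern breaks while the hard-coded array works is Python's late binding in loops: every generated table function ends up seeing the last config entry. A minimal pure-Python sketch of the pitfall and the fix (no DLT dependency; the config names are illustrative):

```python
# Classic pitfall when generating pipeline functions in a loop: the loop
# variable is looked up when the lambda RUNS, not when it is defined, so
# every generated function sees the final value of `cfg`.
configs = [
    {"name": "bronze_orders", "sql": "SELECT * FROM raw.orders"},
    {"name": "bronze_users",  "sql": "SELECT * FROM raw.users"},
]

# Buggy: all entries end up pointing at the last config.
buggy = {}
for cfg in configs:
    buggy[cfg["name"]] = lambda: cfg["sql"]

# Fixed: bind the current config at definition time via a default argument.
fixed = {}
for cfg in configs:
    fixed[cfg["name"]] = lambda cfg=cfg: cfg["sql"]

print(buggy["bronze_orders"]())  # SELECT * FROM raw.users  (wrong)
print(fixed["bronze_orders"]())  # SELECT * FROM raw.orders (right)
```

The same binding fix applies when registering `@dlt.table` functions inside a loop over a metadata-driven config.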
- 725 Views
- 1 replies
- 0 kudos
Hey everyone, I have been facing a weird error since upgrading to Unity Catalog: org.apache.spark.SparkRuntimeException: [UDF_ERROR.ENV_LOST] Execution of function line_string_linear_interp(geometry#1432) failed - the execution environment was lost ...
Latest Reply
Hi @calvinchan_iot, how are you doing today? As I understand it, the error may be due to environment instability when running the UDF after enabling Unity Catalog. The [UDF_ERROR.ENV_LOST] error often points to the UDF execution en...
- 567 Views
- 2 replies
- 0 kudos
As part of the data governance team, we're trying to enforce table-level tagging when users create tables in a Databricks environment where metadata is managed by AWS Glue Catalog (non-Unity Catalog). Is there a way to require tagging at table creati...
Latest Reply
You can use lakeFS pre-merge hooks to force this. Works great with this stack -> https://lakefs.io/blog/lakefs-hooks/
1 More Replies
- 431 Views
- 1 replies
- 0 kudos
Hello everyone, I'm currently exploring the best setup for my data engineering tasks in Databricks and have been considering the benefits of using an ultrawide curved monitor compared to a standard dual-monitor setup. I'd love to hear from the communit...
Latest Reply
I don't use a curved monitor; instead, I have two 35" monitors, which work perfectly for my Databricks work. I chose two large monitors over one extra-large one because I frequently share screens, and it's easier to share an ...
by Jana • New Contributor III
- 8743 Views
- 9 replies
- 4 kudos
I was creating a delta table from an ADLS JSON input file, but the job ran for a long time while creating the delta table from the JSON. Below is my cluster configuration. Is the issue related to the cluster config? Do I need to upgrade it? The cluster ...
Latest Reply
With multiline = true, the JSON is read as a whole and processed as such. I'd try a beefier cluster.
8 More Replies
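Expanding on the reply: with `multiline = true`, Spark cannot split the file, so each JSON file is parsed whole by a single task. Besides a beefier cluster, one hedged workaround is to pre-convert a JSON-array file into JSON Lines, which Spark reads in parallel with the default `multiline = false`. A pure-Python sketch of that conversion (suitable as a one-off step for files that fit in memory):

```python
# Convert a file containing one pretty-printed JSON array into JSON Lines
# (one record per line), a format Spark can split across tasks.
import io
import json

def to_json_lines(src) -> str:
    """Read a file-like object holding a JSON array; return JSON Lines text."""
    records = json.load(src)  # whole-file parse, done once up front
    return "\n".join(json.dumps(r) for r in records)

pretty = io.StringIO('[\n  {"id": 1},\n  {"id": 2}\n]')
print(to_json_lines(pretty))  # {"id": 1}\n{"id": 2}
```

After the conversion, `spark.read.json(path)` (without the multiline option) can distribute the parse instead of serializing it on one executor.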
by 1npo • New Contributor II
- 585 Views
- 2 replies
- 2 kudos
Hello, I have the interface theme set to "Prefer dark" in Databricks. I just got a popup on the Workflow page while reviewing a job run that said something like "A new version of this app is available, click to refresh". I clicked refresh, and now my...
Latest Reply
I just got another "New version of this app is available" popup, and clicking "Refresh" fixed the dark mode issue. Thanks for the quick response to whichever engineer at Databricks just pushed a hotfix.
1 More Replies
by merca • Valued Contributor II
- 10092 Views
- 12 replies
- 7 kudos
Please include in the documentation an example of how to incorporate the `QUERY_RESULT_ROWS` variable in a custom template.
Latest Reply
Databricks confirmed this was an issue on their end and it should be resolved now. It is working for me.
11 More Replies
- 4544 Views
- 3 replies
- 1 kudos
I have a sample set of Power BI (.pbix) reports with all the dropdowns, tables, filters, etc. Now I would like to migrate these reports to Databricks. Whatever visuals were created in Power BI, I would like to recreate in Databricks from scratch. I wou...
Latest Reply
First, make sure that the Databricks cluster is operational. These are the steps needed to integrate Azure Databricks with Power BI Desktop. 1. Constructing the URL for the connection: connect to the cluster and click the Advanced...
2 More Replies
- 1090 Views
- 2 replies
- 0 kudos
Currently, it appears that the Dashboards functionality does not support linking a 'Multiple Values' widget to a query parameter, nor does it allow the creation of a line plot with multiple lines. We are developing a dashboard where users need to visu...
Latest Reply
Hello, thank you for your reply. Notebooks are not really a workaround for me here, but thank you for the walkthrough. I think this feature is very important, so I hope this reaches your backlog somehow, as this is something Grafana, for example, is able...
1 More Replies
- 473 Views
- 1 replies
- 0 kudos
Hi all, I was trying to deep clone one of the sample tables provided with a parametrized query: create table if not exists IDENTIFIER(:target_catalog || :target_schema || :table_name) DEEP CLONE IDENTIFIER('samples.tpch.' || :table_name) But Databricks ...
Latest Reply
Deep clone will also clone all the metadata (e.g. indexes, properties, history, etc.), while SELECT * will create a new, fresh Delta table with its own history and properties.
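For reference, a hedged sketch of the parameterized clone statement, assuming the three name parts need explicit '.' separators when concatenated inside IDENTIFIER() (parameter markers follow the post):

```sql
-- Sketch only: note the '.' separators between catalog, schema, and table.
CREATE TABLE IF NOT EXISTS
  IDENTIFIER(:target_catalog || '.' || :target_schema || '.' || :table_name)
DEEP CLONE IDENTIFIER('samples.tpch.' || :table_name);
```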
- 689 Views
- 1 replies
- 0 kudos
Hi, I have a requirement to set up Athena tables. We have a Unity Catalog setup in a Databricks workspace, and I would like to know whether Athena can be pointed at Unity Catalog so that all the tables are available in Athena. Whenever we...
Latest Reply
Unfortunately, as of now, there isn't a direct, seamless integration between Unity Catalog and Athena to automatically synchronize table updates. However, here are a few potential approaches to achieve your desired outcome: 1. AWS Glue Data Catalog: Man...
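Extending the Glue Data Catalog suggestion: one manual approach is to register each table's storage location as an external Glue table that Athena can query. A rough sketch using boto3's `create_table` API, with illustrative names and an assumed Parquet layout (Delta-managed tables would additionally need a symlink manifest or Athena's Delta support):

```python
# Build a Glue TableInput describing an external Parquet table so Athena can
# query the same storage location. All names and the S3 path are placeholders.
def uc_table_to_glue_input(table_name, location, columns):
    """Return a boto3 Glue TableInput dict for an external Parquet table."""
    return {
        "Name": table_name,
        "TableType": "EXTERNAL_TABLE",
        "Parameters": {"classification": "parquet"},
        "StorageDescriptor": {
            "Columns": [{"Name": n, "Type": t} for n, t in columns],
            "Location": location,
            "InputFormat": "org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat",
            "OutputFormat": "org.apache.hadoop.hive.ql.io.parquet.HiveIgnoreKeyTextOutputFormat",
            "SerdeInfo": {
                "SerializationLibrary": "org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe"
            },
        },
    }

table_input = uc_table_to_glue_input(
    "orders",
    "s3://my-bucket/warehouse/orders/",  # assumed S3 path
    [("order_id", "bigint"), ("amount", "double")],
)
# import boto3
# boto3.client("glue").create_table(DatabaseName="analytics", TableInput=table_input)
```

Keeping this in sync would still require re-running the registration whenever the Unity Catalog schema changes, which is the gap the reply describes.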
- 802 Views
- 1 replies
- 1 kudos
I built a script about 6 months ago to make our Delta tables accessible in Redshift for another team, but it's a bit nasty:
- Generate a Delta Lake manifest each time the Databricks Delta table is updated
- Recreate the Redshift external table (in case th...
Latest Reply
Still searching for a fix for the same pain point... maybe something will appear in the Marketplace to integrate Unity Catalog and Redshift.
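The manifest step the original post describes can at least be automated from the Delta side; a hedged sketch (the table name and S3 path are placeholders):

```sql
-- One-off: generate the symlink manifest that Redshift Spectrum reads.
GENERATE symlink_format_manifest FOR TABLE delta.`s3://my-bucket/path/to/table`;

-- Or have Delta regenerate the manifest automatically on every update,
-- removing the "run the script after each write" step:
ALTER TABLE my_delta_table
  SET TBLPROPERTIES (delta.compatibility.symlinkFormatManifest.enabled = true);
```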