I have a Databricks function that returns a table name:

CREATE OR REPLACE FUNCTION test_func() RETURNS STRING READS SQL DATA RETURN 'table_name'

I want to select from the table that is returned by this function. How can I make it work in SQL, some...
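A minimal sketch of one way to do this from a notebook, assuming the SQL UDF above already exists: evaluate the function first, then interpolate the returned name into a second query.

```python
# Sketch: evaluate the function, then query the table it names.
# Assumes test_func() exists and returns a valid table name.
table_name = spark.sql("SELECT test_func()").collect()[0][0]
df = spark.sql(f"SELECT * FROM {table_name}")  # beware injection if the name is untrusted
df.show()
```

On newer runtimes, the SQL IDENTIFIER() clause is another option for parameterizing a table name in pure SQL, though whether it accepts a UDF call directly is version-dependent.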
I am trying to evaluate options for monitoring and alerting tools like New Relic, Datadog, and Grafana with Databricks on Azure. None of them confirmed support when we reached out. I would like to hear from the Databricks team on the recommended tool / framework ...
I'd recommend this new tool we've been trying out. It's really helpful for monitoring and provides good insights on how Azure Databricks clusters, pools & jobs are doing – like if they're healthy or having issues. It brings everything together, makin...
Problem Statement: Cluster 1 (Shared Cluster) is not able to read the file location at "dbfs:/mnt/landingzone/landingzonecontainer/Inbound/", and hence we are not able to create an external table in a schema inside the Enterprise Catalog. Cluster 2 (No Isola...
Hi @Meshynix, Can you provide the code snippet you execute to create your tables? This would give us better insight into both use cases. Also, can you provide the error that is returned in the first use case? That would help a lot.
Do Databricks Workflows support creating a workflow with a dynamic number of tasks? For example, let's say we have a DAG like this:

T1 -> T2(1) -> T2(2) -> ... -> T2(n-1) -> T2(n) -> T3

...
@FlexException The Databricks API supports job creation and execution; see Task Parameters and Values in Databricks Workflows | by Ryan Chynoweth | Medium. One possibility is, after running the earlier job, to process the output to create a dynamic number of tasks in s...
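A hedged sketch of that idea using the Jobs 2.1 runs/submit endpoint: build the task list in a loop (a chain of n T2 tasks between T1 and T3) and submit it as a one-time run. The host, token, notebook paths, and cluster ID below are placeholders.

```python
# Sketch: submit a one-time run whose task list is built dynamically.
import os
import requests

host = os.environ["DATABRICKS_HOST"]      # placeholder env vars
token = os.environ["DATABRICKS_TOKEN"]
cluster_id = "<existing-cluster-id>"      # placeholder

n = 4  # number of T2 tasks, decided at runtime (e.g. from an earlier job's output)
tasks = [{"task_key": "T1",
          "existing_cluster_id": cluster_id,
          "notebook_task": {"notebook_path": "/Workspace/jobs/T1"}}]
prev = "T1"
for i in range(1, n + 1):
    key = f"T2_{i}"
    tasks.append({"task_key": key,
                  "depends_on": [{"task_key": prev}],
                  "existing_cluster_id": cluster_id,
                  "notebook_task": {"notebook_path": "/Workspace/jobs/T2"}})
    prev = key
tasks.append({"task_key": "T3",
              "depends_on": [{"task_key": prev}],
              "existing_cluster_id": cluster_id,
              "notebook_task": {"notebook_path": "/Workspace/jobs/T3"}})

resp = requests.post(f"{host}/api/2.1/jobs/runs/submit",
                     headers={"Authorization": f"Bearer {token}"},
                     json={"run_name": "dynamic-chain", "tasks": tasks})
resp.raise_for_status()
print(resp.json()["run_id"])
```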
Hi, We have been planning to migrate the Synapse Databricks activity executions from 'All-purpose cluster' to 'New job cluster' to reduce overall cost. We are using Standard_D3_v2 as the cluster node type, which has 4 CPU cores in total. The current quota ...
I am doing some automated testing and would ultimately like to access per-job/stage/task metrics as shown in the UI (e.g. Spark UI -> SQL / DataFrame -> plan visualization) in an automated way (an API is ideal, but some ad-hoc metrics pipelines from loca...
Thanks for the response. This enables the event logs, but the event logs seem to be empty. Would you know where I can get the Spark metrics as seen in the Spark UI?
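For open-source Spark, the driver UI exposes a monitoring REST API that returns the same per-stage metrics the UI renders; on Databricks the UI sits behind a proxy, so the base URL below (the open-source default) is an assumption to adapt.

```python
# Sketch: read per-stage metrics from Spark's monitoring REST API.
# http://localhost:4040 is the open-source default driver UI address;
# on Databricks the UI is proxied, so the URL will differ.
import requests

base = "http://localhost:4040/api/v1"
app_id = requests.get(f"{base}/applications").json()[0]["id"]

for stage in requests.get(f"{base}/applications/{app_id}/stages").json():
    print(stage["stageId"], stage["name"], stage["executorRunTime"])
```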
Hello, For the purposes of testing I'm interested in creating a new workspace with Unity Catalog enabled, and from there I'd like to access (external - S3) tables on an existing legacy Hive metastore workspace (not UC enabled). The goal is for both wo...
@Kaniz From looking at the documentation, none of it addresses my particular use case, which I illustrated (two workspaces in one account, one with UC and the other without). Was there a particular part of any of the docs you're suggesting that can help here?
I used the same access method shown in https://community.databricks.com/t5/data-engineering/to-read-data-from-azure-storage/td-p/32230 but kept getting the error below:

org.apache.spark.SparkSecurityException: [INSUFFICIENT_PERMISSIONS] Insufficient pr...
Hi, you can find the storage account firewall information by accessing the resource in the Azure portal. Please note that if you are using Unity Catalog you should NOT mount the storage account; you should instead use the abstraction of Storage Credentials and External Lo...
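A minimal sketch of the Unity Catalog pattern this reply describes, assuming an admin has already created a storage credential; the location name, credential name, and abfss URL are placeholders.

```python
# Sketch: register an external location over the ADLS container, then read
# from it directly (no mount). All names and URLs below are placeholders.
spark.sql("""
    CREATE EXTERNAL LOCATION IF NOT EXISTS landing_zone
    URL 'abfss://landingzonecontainer@mystorageacct.dfs.core.windows.net/Inbound'
    WITH (STORAGE CREDENTIAL my_storage_credential)
""")

df = (spark.read
          .format("csv")
          .option("header", True)
          .load("abfss://landingzonecontainer@mystorageacct.dfs.core.windows.net/Inbound/"))
df.show()
```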
Hi all, I am trying the new "Git folder" feature, with a repo that works fine from "Repos". In the new folder location, my imports from my own repo don't work anymore. Has anyone faced something similar? Thanks in advance for sharing your experience.
Hi @Kibour,
- Ensure that the Databricks Repos feature is enabled; sometimes this issue arises when Repos are not properly activated.
- Verify that Repos are allowed for DBR 8.4+ versions.
- Confirm that your folder structure adheres to the c...
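A common workaround when imports that resolved under /Repos break in a Git folder is to put the repo root on sys.path explicitly; the relative layout assumed below (notebook one level under the repo root) is a placeholder to adjust.

```python
# Sketch: make sibling modules importable from a notebook in a Git folder.
# Assumes the notebook lives one level below the repo root.
import os
import sys

repo_root = os.path.dirname(os.getcwd())
if repo_root not in sys.path:
    sys.path.append(repo_root)

# import my_module  # hypothetical module at the repo root
```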
Hi, I'm using Azure Databricks 10.4 LTS (includes Apache Spark 3.2.1, Scala 2.12). I'm trying to convert 2 columns from string data type to timestamp data type. My date columns are in the format below:

2/18/2021 7:20:12 PM

So I wrote the following command:

from py...
Hi @Marinagomes,
Try using try_to_timestamp: instead of to_timestamp, consider using try_to_timestamp. It returns null for malformed expressions, which can help identify problematic rows.
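A minimal sketch of the parse itself; the column name is a placeholder, and note that try_to_timestamp needs a newer runtime than 10.4 LTS, where plain to_timestamp with this pattern is the fallback.

```python
# Sketch: parse "2/18/2021 7:20:12 PM"-style strings into timestamps.
# "event_time_str" is a placeholder column name.
from pyspark.sql import functions as F

df = spark.createDataFrame([("2/18/2021 7:20:12 PM",)], ["event_time_str"])
df = df.withColumn("event_time",
                   F.to_timestamp("event_time_str", "M/d/yyyy h:mm:ss a"))
df.show(truncate=False)
```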
When I was executing the code, I was getting this error:

"Notebook detached. Exception when creating execution context: java.net.SocketTimeoutException: Connect Timeout"

Can someone help me?
Hi, I am trying to get query history data from my SQL warehouse. Following previous examples is not working.

databricks_workspace_url = "xxx"
token = "xxx"
start_time = 1707091200
end_time = 1707174000
api_endpoint = f"{databricks_workspace_url}/api/2.0/s...
@Cheryl - you can use query_start_time=2023-01-01T00:00:00Z as a parameter to filter for the time frame. The available filter criteria are given here: https://docs.databricks.com/api/workspace/queryhistory/list#filter_by-query_start_time_range
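A hedged sketch of the call that the reply points to, against the /api/2.0/sql/history/queries endpoint; the host and token are placeholders, and note the time range is expressed in epoch milliseconds rather than seconds.

```python
# Sketch: list warehouse query history filtered by start-time range.
# Host/token are placeholders; start/end are epoch *milliseconds*.
import requests

host = "https://<workspace-url>"
token = "<personal-access-token>"

payload = {
    "filter_by": {
        "query_start_time_range": {
            "start_time_ms": 1707091200 * 1000,
            "end_time_ms": 1707174000 * 1000,
        }
    },
    "max_results": 100,
}
resp = requests.get(f"{host}/api/2.0/sql/history/queries",
                    headers={"Authorization": f"Bearer {token}"},
                    json=payload)
resp.raise_for_status()
for q in resp.json().get("res", []):
    print(q["query_id"], q["status"])
```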
Hello, I have a question. Context: I have a Unity Catalog organized with three schemas (bronze, silver and gold). Logically, I would like to create tables in each schema. I tried to organize my pipelines by layer, which means that I would like to ...
Hey @AxelBrsn, Unfortunately this is a limitation of DLT as far as my experience goes. You should organize the pipelines in a way that they encompass the full Bronze/Silver/Gold flow, since you don't have control over the schema if you want to make ...
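A minimal sketch of the single-pipeline layout the reply suggests, where consecutive layers live in one DLT pipeline (the target schema is set in the pipeline settings); the path, table names, and columns are placeholders.

```python
# Sketch: one DLT pipeline covering consecutive layers; the target schema
# comes from the pipeline settings. Path and column names are placeholders.
import dlt
from pyspark.sql import functions as F

@dlt.table(name="bronze_orders", comment="Raw ingested orders")
def bronze_orders():
    return spark.read.format("json").load("/mnt/landingzone/orders")

@dlt.table(name="silver_orders", comment="Cleaned orders")
def silver_orders():
    return dlt.read("bronze_orders").where(F.col("order_id").isNotNull())
```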
Hello everybody. Whenever I try to run a simple cell, I now receive the following error message:

Notebook detached. Exception when creating execution context: java.net.SocketTimeoutException: Connect Timeout.

After that error message the cluster ...
Hello, I'm having a hard time adding secrets since the change from the legacy CLI. Executing "databricks configure", as referenced in the personal access token section of this document, does nothing. I have verified that the CLI is installed as v0.216.0.
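As a possible workaround while the CLI is misbehaving, the Databricks Python SDK can create scopes and secrets directly; the host, token, and names below are placeholders.

```python
# Sketch: manage secrets via the Databricks Python SDK
# (pip install databricks-sdk) instead of the CLI.
from databricks.sdk import WorkspaceClient

w = WorkspaceClient(host="https://<workspace-url>", token="<personal-access-token>")
w.secrets.create_scope(scope="my-scope")
w.secrets.put_secret(scope="my-scope", key="my-key", string_value="s3cr3t")
```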