Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

rajalakshmi9394
by New Contributor II
  • 3359 Views
  • 3 replies
  • 4 kudos

Resolved! QUERY_RESULT_ROWS without first row as trigger in SQL Alerts of databricks

Hi Team - In Azure Databricks SQL alerts, I am able to use QUERY_RESULT_ROWS only if I select the trigger as the first row. Is there a way to get the count of rows and also the query result (both rows and columns to display the data a...

Latest Reply
Anonymous
Not applicable
  • 4 kudos

Hi @Rajalakshmi Amara, hope everything is going great. Just wanted to check in to see if you were able to resolve your issue. If yes, would you be happy to mark an answer as best so that other members can find the solution more quickly? If not, please tell u...

2 More Replies
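One possible workaround (a sketch only, not the thread's marked answer; the table name my_alert_source is a placeholder): attach the total row count to every row with a window function, so an alert that triggers on the first row still sees the overall count alongside the data.

```python
# Hypothetical source table; COUNT(*) OVER () repeats the total row count on
# every row, so an alert evaluated against the first row can still read it.
spark.sql("""
    SELECT *,
           COUNT(*) OVER () AS total_rows
    FROM my_alert_source
""").show()
```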
Ajay-Pandey
by Esteemed Contributor III
  • 1466 Views
  • 1 reply
  • 6 kudos

Cluster policies now support limiting the max number of clusters a user can create

Policy permissions allow you to set a max number of clusters per user. This determines how many clusters a user can create using that policy. If the user exceeds the ...

Latest Reply
jose_gonzalez
Databricks Employee
  • 6 kudos

Thank you for sharing

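For reference, a sketch of setting this limit through the Cluster Policies REST API; the workspace URL, token, and policy definition below are placeholders, not values from the post.

```python
import requests

# Placeholders: substitute your workspace URL and a valid personal access token.
host = "https://<workspace>.azuredatabricks.net"
headers = {"Authorization": "Bearer <personal-access-token>"}

resp = requests.post(
    f"{host}/api/2.0/policies/clusters/create",
    headers=headers,
    json={
        "name": "limited-clusters-policy",
        # The definition is a JSON string of policy rules; this one is minimal.
        "definition": "{\"spark_version\": {\"type\": \"unlimited\"}}",
        # Each user may create at most 3 clusters with this policy.
        "max_clusters_per_user": 3,
    },
)
resp.raise_for_status()
print(resp.json())  # contains the new policy_id
```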
Ajay-Pandey
by Esteemed Contributor III
  • 2454 Views
  • 1 reply
  • 6 kudos

Variable explorer in Databricks

With Databricks Runtime 12.1 and above, you can directly observe current Python variables in the notebook UI. To open the variable explorer, click the variable explorer icon in the right sidebar. The variable explorer opens, showing the value and ...

Latest Reply
jose_gonzalez
Databricks Employee
  • 6 kudos

Thank you for sharing

mickniz
by Contributor
  • 2035 Views
  • 1 reply
  • 1 kudos

ErrorClass=DAC_DOES_NOT_EXIST

While creating an external table in a Unity Catalog-enabled catalog, I am getting the error below: "Data access configuration for metastore does not exist." I can see the data access configuration is there. Can anyone let me know if I am missing anything?

Latest Reply
jose_gonzalez
Databricks Employee
  • 1 kudos

Could you share the full error stack trace? How do you create the table? Please provide more details so we can help you find a solution.

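For context, a typical Unity Catalog external table statement of the kind that can raise DAC_DOES_NOT_EXIST when the metastore lacks a valid data access configuration; all names and the ABFSS path below are hypothetical.

```python
# Hypothetical catalog/schema/table names and storage path; this requires a
# storage credential / data access configuration on the metastore to succeed.
spark.sql("""
    CREATE TABLE main.default.my_external_table (id INT, name STRING)
    LOCATION 'abfss://container@storageaccount.dfs.core.windows.net/tables/my_external_table'
""")
```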
chanansh
by Contributor
  • 1501 Views
  • 1 reply
  • 0 kudos

QueryExecutionListener cannot be found in pyspark

According to the documentation, you can monitor a Spark Structured Streaming job using QueryExecutionListener. However, I cannot find it. https://docs.databricks.com/structured-streaming/stream-monitoring.html#language-python

Latest Reply
jose_gonzalez
Databricks Employee
  • 0 kudos

Which DBR version are you using? Also, can you share a code snippet showing how you are using the QueryExecutionListener?

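In Python, the Structured Streaming monitoring hook is StreamingQueryListener rather than QueryExecutionListener; a minimal sketch, assuming a recent PySpark/DBR version and a notebook where spark is predefined:

```python
from pyspark.sql.streaming import StreamingQueryListener

class ProgressLogger(StreamingQueryListener):
    def onQueryStarted(self, event):
        print(f"Query started: {event.id}")

    def onQueryProgress(self, event):
        # event.progress carries per-batch metrics such as numInputRows.
        print(f"Batch {event.progress.batchId}: {event.progress.numInputRows} input rows")

    def onQueryTerminated(self, event):
        print(f"Query terminated: {event.id}")

# Register the listener for all streaming queries on this session.
spark.streams.addListener(ProgressLogger())
```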
diguid
by New Contributor III
  • 3312 Views
  • 1 reply
  • 13 kudos

Using foreachBatch within Delta Live Tables framework

Hey there! I was wondering if there's any way of declaring a Delta Live Table where we use foreachBatch to process the output of a streaming query. Here's a simplification of my code: def join_data(df_1, df_2): df_joined = ( df_1 ...

Latest Reply
JJ_LVS1
New Contributor III
  • 13 kudos

I was just going through this as well and require micro-batch operations. I can't see how this will work with DLT right now, so I've switched back to Structured Streaming. I hope they add this functionality; otherwise it limits DLT to more basic strea...

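For comparison, the plain Structured Streaming foreachBatch pattern the reply falls back to; the table names and checkpoint path below are placeholders.

```python
def process_batch(micro_batch_df, batch_id):
    # Arbitrary per-batch logic (joins, MERGE statements, multiple sinks) goes here.
    micro_batch_df.write.format("delta").mode("append").saveAsTable("target_table")

# Read a streaming source and hand each micro-batch to the function above.
(spark.readStream.table("source_table")
      .writeStream
      .foreachBatch(process_batch)
      .option("checkpointLocation", "/tmp/checkpoints/foreach_batch_demo")
      .start())
```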
ackerman_chris
by New Contributor III
  • 3008 Views
  • 0 replies
  • 0 kudos

Azure Devops Git sync failed in Azure Databricks

Hello, I am currently attempting to set up a Git repo within Azure DevOps to use in my Azure Databricks workspace environment for various notebooks. I went through the process of creating a Personal Access Token (PAT) on DevOps, and have inputted the t...

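One way to register an Azure DevOps PAT programmatically is the Git Credentials REST API; a sketch with placeholder values (the git_provider string for Azure DevOps is azureDevOpsServices to the best of my knowledge, but verify against your workspace's API docs):

```python
import requests

# Placeholders: workspace URL, a Databricks PAT, and the Azure DevOps PAT to store.
host = "https://<workspace>.azuredatabricks.net"
resp = requests.post(
    f"{host}/api/2.0/git-credentials",
    headers={"Authorization": "Bearer <databricks-pat>"},
    json={
        "git_provider": "azureDevOpsServices",
        "git_username": "user@example.com",
        "personal_access_token": "<azure-devops-pat>",
    },
)
resp.raise_for_status()
```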
Mado
by Valued Contributor II
  • 12434 Views
  • 4 replies
  • 3 kudos

Resolved! Databricks Audit Logs, What is "dataSourceId"?

Hi, I want to access the Databricks audit logs to check user activity. I created a Databricks workspace on the premium pricing tier and configured audit logs to be sent via Azure diagnostic log delivery. What I got in the Log Analytics workspace: I hav...

Latest Reply
youssefmrini
Databricks Employee
  • 3 kudos

The data_source_id field specifies the id of the SQL warehouse against which this query will run. You can use the Data Sources API to see a complete list of available SQL warehouses.

3 More Replies
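A sketch of listing those SQL warehouses through the (preview) Data Sources API mentioned in the reply; the host and token are placeholders, and response field names may vary by API version:

```python
import requests

host = "https://<workspace>.azuredatabricks.net"
resp = requests.get(
    f"{host}/api/2.0/preview/sql/data-sources",
    headers={"Authorization": "Bearer <personal-access-token>"},
)
resp.raise_for_status()
for ds in resp.json():
    # Each entry maps a data_source_id ("id") to a SQL warehouse.
    print(ds["id"], ds.get("name"))
```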
Jkb
by New Contributor II
  • 3717 Views
  • 0 replies
  • 1 kudos

Workflow triggered by CLI shown as "manually" triggered

We trigger different Workflows from ADF. These workflows are shown as triggered "manually". Is this behaviour intentional? At least for users, this is confusing. [Screenshots: ADF-triggered run; Databricks Workflows view]

Twilight
by New Contributor III
  • 3315 Views
  • 2 replies
  • 0 kudos

How to make backreferences in regexp_replace repl string work correctly in Databricks SQL?

Both of these work in Spark SQL: regexp_replace('1234567890abc', '^(?<one>\\w)(?<two>\\w)(?<three>\\w)', '$1') and regexp_replace('1234567890abc', '^(?<one>\\w)(?<two>\\w)(?<three>\\w)', '${one}'). However, neither works in Databricks SQL. I found that this ...

Latest Reply
User16764241763
Honored Contributor
  • 0 kudos

Hello @Stephen Wilcoxon, could you please share the expected output in Spark SQL?

1 More Replies
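For anyone comparing the two environments, a minimal reproduction of the $1 form the poster reports working in Spark SQL; running the same statement in a notebook and against a SQL warehouse makes the difference easy to see.

```python
# Expected result in Spark SQL per the post: '14567890abc' (the first three
# characters matched by the pattern are replaced by capture group 1).
spark.sql(r"""
    SELECT regexp_replace('1234567890abc', '^(\\w)(\\w)(\\w)', '$1') AS result
""").show()
```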
Ria
by New Contributor
  • 1449 Views
  • 1 reply
  • 1 kudos

py4j.security.Py4JSecurityException

I am getting this error while loading data with Auto Loader. Although table access control is already disabled, I am still getting this error: "py4j.security.Py4JSecurityException: Method public org.apache.spark.sql.streaming.DataStreamReader org.apache.spark.sql...

Latest Reply
jose_gonzalez
Databricks Employee
  • 1 kudos

Hi, are you using a High Concurrency cluster? Which DBR version are you running?

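For context, a minimal Auto Loader stream of the kind the post describes (the format choice and paths are hypothetical); on some cluster configurations with table access control, a read like this is where the Py4JSecurityException surfaces.

```python
# Incremental ingestion with Auto Loader; schemaLocation persists inferred schema.
df = (spark.readStream
        .format("cloudFiles")
        .option("cloudFiles.format", "json")
        .option("cloudFiles.schemaLocation", "/tmp/autoloader/schema")
        .load("abfss://container@storageaccount.dfs.core.windows.net/landing/"))
```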
lurban
by New Contributor
  • 1846 Views
  • 1 reply
  • 0 kudos

Delta Live Tables Development Mode Resets Cluster On Each Trigger

I believe I have identified a bug: in the last few days, each time I trigger a test Delta Live Tables run in development mode, the associated cluster takes 5-7 minutes to spin up. The cluster does stay on as anticipated in the comp...

Latest Reply
jose_gonzalez
Databricks Employee
  • 0 kudos

Hi, can you share your cluster JSON settings? It will help us understand the settings and VMs you are using.

manasa
by Contributor
  • 4900 Views
  • 3 replies
  • 1 kudos

Need help inserting a large volume of data into Cosmos DB from Azure Data Lake Storage using Databricks

I am trying to insert 6 GB of data into Cosmos DB using the OLTP connector. Container RUs: 40000. Cluster config: cfg = { "spark.cosmos.accountEndpoint" : cosmosdbendpoint, "spark.cosmos.accountKey" : cosmosdbmasterkey, "spark.cosmos.database" : cosmosd...

Latest Reply
ImAbhishekTomar
New Contributor III
  • 1 kudos

Did anyone find a solution for this? I'm also using a similar cluster and RU setting, and data ingestion is taking a lot of time…

2 More Replies
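A sketch completing the truncated configuration from the post, using the Cosmos DB Spark OLTP connector's write path; the endpoint values, container name, and source DataFrame below are placeholders/assumptions.

```python
# Placeholder credentials; the variable names follow the post.
cosmosdbendpoint = "https://<account>.documents.azure.com:443/"
cosmosdbmasterkey = "<key>"
cosmosdbdatabase = "<database>"

cfg = {
    "spark.cosmos.accountEndpoint": cosmosdbendpoint,
    "spark.cosmos.accountKey": cosmosdbmasterkey,
    "spark.cosmos.database": cosmosdbdatabase,
    "spark.cosmos.container": "<container>",
}

# Hypothetical source data read from ADLS.
df = spark.read.parquet("abfss://container@storageaccount.dfs.core.windows.net/source/")

# "cosmos.oltp" is the OLTP connector format; the connector expects mode APPEND.
(df.write
   .format("cosmos.oltp")
   .options(**cfg)
   .mode("APPEND")
   .save())
```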
youssefmrini
by Databricks Employee
  • 1339 Views
  • 1 reply
  • 2 kudos
Latest Reply
Sivaprasad1
Valued Contributor II
  • 2 kudos

@Youssef Mrini: Please have a look at the link below, which lists the Databricks resource limits: https://docs.databricks.com/resources/limits.html

