Get Started Discussions
Start your journey with Databricks by joining discussions on getting started guides, tutorials, and introductory topics. Connect with beginners and experts alike to kickstart your Databricks experience.

Forum Posts

aonurdemir
by Contributor
  • 1109 Views
  • 1 replies
  • 1 kudos

Resolved! Is there a cluster option for dashboards?

Hi everyone, I do not want to use a 4 DBU/h XS warehouse, since I have very little data at my new startup. I want to create a minimal cluster and use it as the underlying SQL engine for my dashboard. Thanks.

Latest Reply
Walter_C
Databricks Employee
  • 1 kudos

Unfortunately, no. Dashboards are part of the SQL service on the platform, so they are designed to work with SQL warehouses only. You can create notebook dashboards that work with regular clusters, but functionality will be limited in ...
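If cost is the main concern, the usual workaround is the smallest warehouse with an aggressive auto-stop. A minimal sketch using the databricks-sdk for Python; the warehouse name, size, and auto-stop value are illustrative assumptions, not something stated in the reply above:

    from databricks.sdk import WorkspaceClient

    w = WorkspaceClient()

    # Create the smallest serverless warehouse and stop it after 5 idle minutes,
    # so it only accrues DBUs while the dashboard is actually being refreshed.
    w.warehouses.create(
        name="tiny-dashboard-wh",      # hypothetical name
        cluster_size="2X-Small",       # smallest T-shirt size
        max_num_clusters=1,
        auto_stop_mins=5,
        enable_serverless_compute=True,
    )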

h2p5cq8
by New Contributor III
  • 1921 Views
  • 5 replies
  • 1 kudos

Resolved! Databricks workflow with sequenced tasks

I have a continuous workflow. It is continuous because I would like it to run every minute, and if it has work to do, the first task will take several minutes. As I understand it, continuous workflows won't requeue while a job is currently running, where...

Latest Reply
Alberto_Umana
Databricks Employee
  • 1 kudos

Hi @h2p5cq8, no problem! You can disable the queue option to stop it: go to Advanced settings in the Job details side panel and toggle off the Queue option to prevent jobs from being queued.
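The same toggle can be flipped programmatically. A minimal sketch, assuming the databricks-sdk for Python and a placeholder job ID (to the best of my understanding, the UI toggle maps to the job's queue.enabled setting):

    from databricks.sdk import WorkspaceClient
    from databricks.sdk.service import jobs

    w = WorkspaceClient()

    # Partially update the job so new runs are no longer queued behind an active run.
    w.jobs.update(
        job_id=123456789,  # placeholder job ID
        new_settings=jobs.JobSettings(
            queue=jobs.QueueSettings(enabled=False),
        ),
    )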

4 More Replies
vicky403
by New Contributor
  • 1261 Views
  • 1 replies
  • 0 kudos

How does the development target work for multiple users?

Hi, I'm using the Databricks asset bundle to deploy my job to Azure Databricks. I want to configure the Databricks bundle so that when anyone runs the Azure pipeline, a job is created under their name in the format dev_username_job. Using a personal ac...

Latest Reply
zuzsad
New Contributor II
  • 0 kudos

Were you able to solve this?

ahsan_aj
by Contributor II
  • 7426 Views
  • 5 replies
  • 0 kudos

Azure Databricks Enterprise Application User Impersonation Token Group Claims Issue

Hi all, I am using the Azure Databricks Microsoft Managed Enterprise Application scope (2ff814a6-3304-4ab8-85cb-cd0e6f879c1d/user_impersonation) to fetch an access token on behalf of a user. The authentication process is successful; however, the acce...

Latest Reply
Alberto_Umana
Databricks Employee
  • 0 kudos

Hi @ahsan_aj, you can modify your token request by adding a claims parameter:

    const claimsRequest = {
        "access_token": {
            "groups": null
        }
    };

https://learn.microsoft.com/en-us/security/zero-trust/develop/configure-tokens-gro...

4 More Replies
cheerwthraj
by New Contributor
  • 2205 Views
  • 1 replies
  • 0 kudos

Best practices for Tableau to connect to Databricks

Having a problem connecting to Databricks with a service principal from Tableau. Wanted to know how Tableau extract refreshes connect to Databricks: is it via individual OAuth or a service principal?

Latest Reply
saikumar246
Databricks Employee
  • 0 kudos

Hi @cheerwthraj, to connect Tableau to Databricks and refresh extracts, you can use either OAuth or service principal authentication. For best practices, please refer to the link below: https://docs.databricks.com/en/partners/bi/tableau.html#best-pr...

AbhishekNegi
by New Contributor
  • 1762 Views
  • 1 replies
  • 1 kudos

New Cluster 90% memory already consumed

Hi, I am seeing this on all new clusters (single- or multi-node) that I create. As soon as the metrics start showing up, memory consumption shows 90% already consumed between Used and Cached (something like below). This is the case with higher or lower...

[attached screenshots of the cluster memory metrics]
Latest Reply
saikumar246
Databricks Employee
  • 1 kudos

Hi @AbhishekNegi, I understand your concern. The memory consumption you see before initiating any task, and the comment about it taking time to execute, come down to how Spark works internally. The memory consumption observed in a Spark clust...
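As a rough illustration of why a large share of memory appears reserved up front, here is a back-of-the-envelope sketch of Spark's unified on-heap memory split using its default fractions. The executor heap size is an assumed example value, and the Used/Cached numbers in the cluster UI also include OS-level buffers and caches, so this is only part of the picture:

    # Assumed example: an executor JVM heap of 8 GiB (spark.executor.memory)
    heap_gib = 8.0
    reserved_gib = 0.3                              # ~300 MiB reserved by Spark itself

    # Defaults: spark.memory.fraction = 0.6, spark.memory.storageFraction = 0.5
    unified_gib = (heap_gib - reserved_gib) * 0.6   # shared execution + storage pool
    storage_gib = unified_gib * 0.5                 # portion initially usable for caching
    user_gib = (heap_gib - reserved_gib) * 0.4      # user data structures, UDF objects, etc.

    print(f"Unified (execution+storage) pool: {unified_gib:.2f} GiB")
    print(f"  of which initially for storage: {storage_gib:.2f} GiB")
    print(f"User memory:                      {user_gib:.2f} GiB")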

RobsonNLPT
by Contributor III
  • 8519 Views
  • 15 replies
  • 3 kudos

Delta Live Tables Permissions

Hi all, I'm the owner of Delta Live Tables pipelines, but I don't see the option described in the documentation to grant permissions to different users. The only options available are "Settings" and "Delete". In the sidebar, click Delta Live Tables. Select the nam...

Latest Reply
Walter_C
Databricks Employee
  • 3 kudos

OK, it might be that the workspace versions differ and the new patch will be rolled out soon.
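Until the UI option shows up, pipeline permissions can also be managed through the Permissions API. A minimal sketch, assuming the databricks-sdk for Python; the pipeline ID, user, and permission level are placeholders:

    from databricks.sdk import WorkspaceClient
    from databricks.sdk.service import iam

    w = WorkspaceClient()

    # Grant CAN_MANAGE on a DLT pipeline to another user (all values are placeholders).
    w.permissions.update(
        request_object_type="pipelines",
        request_object_id="<pipeline-id>",
        access_control_list=[
            iam.AccessControlRequest(
                user_name="colleague@example.com",
                permission_level=iam.PermissionLevel.CAN_MANAGE,
            )
        ],
    )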

14 More Replies
Nandhini_Kumar
by New Contributor III
  • 3860 Views
  • 1 replies
  • 0 kudos

How is the scale-up process done in a Databricks cluster?

For my AWS Databricks cluster, I configured shared compute with 1 min worker node and 3 max worker nodes. Initially, only one worker node and the driver node instance are created in the AWS console. Is there any rule set by Databricks for scaling up the next ...

Latest Reply
NandiniN
Databricks Employee
  • 0 kudos

Databricks uses autoscaling to manage the number of worker nodes in a cluster based on the workload. When you configure a cluster with a minimum and maximum number of worker nodes, Databricks automatically adjusts the number of workers within this ra...
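For reference, this is roughly what such an autoscaling range looks like when creating a cluster programmatically. A minimal sketch, assuming the databricks-sdk for Python; the runtime version and instance type are placeholder assumptions:

    from databricks.sdk import WorkspaceClient
    from databricks.sdk.service import compute

    w = WorkspaceClient()

    # Cluster that starts at 1 worker and scales up to 3 based on workload.
    w.clusters.create(
        cluster_name="autoscale-demo",
        spark_version="15.4.x-scala2.12",   # placeholder runtime version
        node_type_id="i3.xlarge",           # placeholder AWS instance type
        autoscale=compute.AutoScale(min_workers=1, max_workers=3),
    )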

gb
by New Contributor
  • 3269 Views
  • 2 replies
  • 0 kudos

Write stream to Kafka topic with DLT

Hi, is it possible to write a stream to a Kafka topic with Delta Live Tables? I would like to do something like this:

    @dlt.view(name="kafka_pub", comment="Publish to kafka")
    def kafka_pub():
        return (dlt.readStream("source_table").selectExpr("to_json (struct (*)...

Latest Reply
dmytro
New Contributor III
  • 0 kudos

@shashas, is a Kafka sink now available? If so, where can we find information on setting it up?
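If a DLT-native sink isn't available in your workspace, publishing to Kafka from a regular Structured Streaming job is a well-trodden alternative. A minimal sketch, assuming it runs in a Databricks notebook (where spark is predefined), the source_table from the question, and placeholder broker, topic, and checkpoint path:

    from pyspark.sql.functions import to_json, struct

    (
        spark.readStream.table("source_table")
        # Kafka expects a string or binary "value" column; serialize each row as JSON.
        .select(to_json(struct("*")).alias("value"))
        .writeStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "broker1:9092")            # placeholder broker
        .option("topic", "my_topic")                                   # placeholder topic
        .option("checkpointLocation", "/tmp/checkpoints/kafka_pub")    # placeholder path
        .start()
    )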

1 More Replies
tt_mmo
by New Contributor II
  • 1748 Views
  • 1 replies
  • 0 kudos

SQL table convert to R dataframe

I have a table with ~6 million rows. I am attempting to convert this from a SQL table in my catalog to an R dataframe to use the tableone package. I separated my table into 3 tables, each containing about 2 million rows, then ran it through tbl() and as...

Latest Reply
Louis_Frolio
Databricks Employee
  • 0 kudos

To handle a large SQL table (~6 million rows) and convert it into an R dataframe without splitting it into smaller subsets, you can use more efficient strategies and tools that are optimized for large datasets. Here are some recommendations: 1. Use `...

hari-prasad
by Valued Contributor II
  • 932 Views
  • 2 replies
  • 1 kudos

How to merge stats from my customer-academy to partner-academy Databricks

Hi, I have been using my customer-academy account for a long time, and I recently got a partner-academy account to which I want to sync my stats. Is it possible?

Latest Reply
hari-prasad
Valued Contributor II
  • 1 kudos

I have emailed training-support, but no response yet. I just received a confirmation email.

1 More Replies
hari-prasad
by Valued Contributor II
  • 1152 Views
  • 1 replies
  • 1 kudos

When Is Databricks Enabling Support for Rust and Go in Notebooks?

#Rust and #GoLang are now trending for their efficiency and speed. When can Databricks enthusiasts leverage the power of Rust and Go in Databricks notebooks to create data/ETL pipelines? Any plans at #databricks?

Latest Reply
Walter_C
Databricks Employee
  • 1 kudos

Rust is an allowed language at Databricks if you must avoid a JVM process. I can see that the teams are working to provide additional support for Rust which might be available in the near future.

shwetamagar
by New Contributor II
  • 1807 Views
  • 1 replies
  • 1 kudos

UC migration : Mount Points in Unity Catalog

Hi all, in my existing notebooks we have used mount point URLs such as /mnt/, and we have notebooks where we use these URLs to fetch data/files from the container. Now, as we are upgrading to Unity Catalog, these URLs will no longer be supporting a...

Latest Reply
Walter_C
Databricks Employee
  • 1 kudos

Unfortunately no, mount points are no longer supported with UC, so you will need to modify the URLs manually in your notebooks.
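In practice the change is usually a straight path swap. A small sketch with made-up storage account, container, catalog, and volume names showing the typical before and after:

    # Before: reading through a DBFS mount point (not supported with Unity Catalog)
    df = spark.read.format("parquet").load("/mnt/landing/sales/2024/")

    # After, option 1: the cloud URI of an external location (placeholder names)
    df = spark.read.format("parquet").load(
        "abfss://landing@mystorageaccount.dfs.core.windows.net/sales/2024/"
    )

    # After, option 2: a Unity Catalog volume path (placeholder catalog/schema/volume)
    df = spark.read.format("parquet").load("/Volumes/main/raw/landing/sales/2024/")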

Thekenal
by New Contributor II
  • 501 Views
  • 1 replies
  • 0 kudos
Latest Reply
hari-prasad
Valued Contributor II
  • 0 kudos

Hi @Thekenal, you can use the following link to first connect to Azure SQL Server from Databricks: https://learn.microsoft.com/en-us/azure/databricks/connect/external-systems/sql-server. Then follow dashboard creation within Databricks: https://docs.databricks...
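For the first step, a generic JDBC read is often the quickest way to confirm connectivity before building the dashboard. A minimal sketch, assuming a Databricks notebook and placeholder server, database, table, and secret-scope names:

    # Read a table from Azure SQL Server over JDBC (all identifiers are placeholders).
    df = (
        spark.read.format("jdbc")
        .option("url", "jdbc:sqlserver://myserver.database.windows.net:1433;database=mydb")
        .option("dbtable", "dbo.sales")
        .option("user", dbutils.secrets.get("my-scope", "sql-user"))
        .option("password", dbutils.secrets.get("my-scope", "sql-password"))
        .load()
    )
    display(df)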

