Community Platform Discussions
Connect with fellow community members to discuss general topics related to the Databricks platform, industry trends, and best practices. Share experiences, ask questions, and foster collaboration within the community.

Forum Posts

alluarjun
by New Contributor
  • 713 Views
  • 2 replies
  • 0 kudos

Databricks asset bundle error: "terraform.exe": file does not exist

Hi, I am getting the below error while deploying a Databricks bundle using an Azure DevOps release: 2024-07-07T03:55:51.1199594Z Error: terraform init: exec: "xxxx\\.databricks\\bundle\\dev\\terraform\\xxxx\\.databricks\\bundle\\dev\\bin\\terraform.exe": ...

Latest Reply
BNG_FGA
New Contributor II
  • 0 kudos

Hi Databricks team, any update on this issue? I'm experiencing the same issue. Our development VDI is isolated, so the Databricks CLI cannot download the necessary Terraform files. We are therefore forced to download and install them manually. W...
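
For anyone hitting this in an isolated environment, below is a minimal sketch of pointing the CLI at a manually installed Terraform binary. It assumes the Databricks CLI's documented environment variables for air-gapped use (DATABRICKS_TF_VERSION and DATABRICKS_TF_EXEC_PATH); verify the names against your CLI version, and note the version and paths here are placeholders.

    import os
    import subprocess

    # Assumption: the Databricks CLI reads these variables and skips its own
    # Terraform download, using the pre-installed binary instead (variable
    # names per the CLI's air-gapped docs; confirm for your CLI version).
    os.environ["DATABRICKS_TF_VERSION"] = "1.5.5"  # placeholder pinned version
    os.environ["DATABRICKS_TF_EXEC_PATH"] = r"C:\tools\terraform\terraform.exe"  # placeholder path

    # Deploy the bundle to the dev target using the manually installed binary.
    subprocess.run(["databricks", "bundle", "deploy", "-t", "dev"], check=True)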

1 More Replies
Arch_dbxlearner
by New Contributor III
  • 2994 Views
  • 5 replies
  • 1 kudos

How to get data from Splunk on a daily basis?

I am exploring ways to get data into Databricks from Splunk (similar to other data sources like S3, Kafka, etc.). I have received a suggestion to use the Databricks add-on to get/put data from/to Splunk. To pull the data from Databricks to S...
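
Besides the add-on, one pull-based pattern is to query Splunk's REST search API on a schedule (e.g. a daily job) and land the results in Delta. A minimal sketch follows; it uses Splunk's documented search/jobs/export endpoint, but the host, credentials, and query are placeholders, so treat it as a starting point rather than a tested integration.

    import requests

    # Placeholder Splunk host and credentials; 8089 is Splunk's default
    # management port for the REST API.
    resp = requests.post(
        "https://splunk.example.com:8089/services/search/jobs/export",
        auth=("splunk_user", "splunk_password"),
        data={
            "search": "search index=main earliest=-1d@d latest=@d",
            "output_mode": "json",
        },
        stream=True,
        timeout=300,
    )
    resp.raise_for_status()

    # Each non-empty line is one JSON-encoded event; collect these and
    # write them to a Delta table in a daily job.
    events = [line for line in resp.iter_lines() if line]
    print(f"Pulled {len(events)} events from Splunk")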

Tags: Databricks add-on, Splunk
Latest Reply
shan_chandra
Databricks Employee
  • 1 kudos

@Arch_dbxlearner - could you please refer to the following post for more details: https://community.databricks.com/t5/data-engineering/does-databricks-integrate-with-splunk-what-are-some-ways-to-send/td-p/22048

4 More Replies
ksenija
by Contributor
  • 1248 Views
  • 1 reply
  • 0 kudos

Foreign table to Delta streaming table

I want to copy a table from a foreign catalog as my streaming table. This is the code I used, but I am getting an error: "Table table_name does not support either micro-batch or continuous scan."

    spark.readStream
        .table(table_name)
        ...
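
Until an answer lands, one workaround sketch: foreign-catalog tables generally do not support micro-batch or continuous scans, so a common pattern is a periodic batch copy into a managed Delta table and streaming from the copy instead. Table names below are illustrative, and this assumes a Databricks notebook where spark is already defined.

    # Batch-read the foreign table (no streaming scan support) and land it as Delta.
    snapshot = spark.read.table("foreign_catalog.source_schema.source_table")
    (snapshot.write
        .mode("overwrite")
        .saveAsTable("main.target_schema.source_table_delta"))

    # Downstream consumers can stream from the Delta copy.
    stream_df = spark.readStream.table("main.target_schema.source_table_delta")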

Latest Reply
MLE123
New Contributor II
  • 0 kudos

Bumping this thread because I have the same question and this is still the first result on Google (c. October 2024). Many thanks to anyone who is able to assist!

dipali_globant
by New Contributor II
  • 233 Views
  • 1 reply
  • 0 kudos

Duplicate data published in Kafka offsets

We have 25k records, which we publish in batches of 5k. We number the records with the row_number window function and create batches from that numbering. We have observed that some records (around 10-20) are getting published as duplicates in 2 offsets. Ca...

Latest Reply
agallardrivilla
New Contributor II
  • 0 kudos

Hi @dipali_globant, duplicate data in Kafka can arise in a batch processing scenario for a few reasons. Here's an example of ensuring unique and consistent row numbering: from pyspark.sql import Window from pyspark.sql.functions import row_number wind...
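
The snippet above is cut off by the preview; here is a minimal self-contained sketch of the same idea, assuming a unique ordering key exists (without one, row_number is non-deterministic across reruns and can shuffle records between batches, which shows up as duplicates in Kafka). The table and column names are illustrative.

    from pyspark.sql import SparkSession, Window
    from pyspark.sql.functions import col, row_number

    spark = SparkSession.builder.getOrCreate()
    df = spark.read.table("source_table")  # illustrative source

    # Order by a unique key so the numbering is stable across retries.
    window = Window.orderBy(col("primary_key"))
    numbered = df.withColumn("rn", row_number().over(window))

    # Batches of 5k as described in the post; batch N covers
    # rows (N-1)*5000+1 .. N*5000.
    first_batch = numbered.filter((col("rn") >= 1) & (col("rn") <= 5000))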

bharathjs
by New Contributor II
  • 526 Views
  • 4 replies
  • 1 kudos

Alter table to add/update multiple column comments

I was wondering if there's a way to alter a table and add/update comments for multiple columns at once using SQL or API calls. For instance: ALTER TABLE <table_name> CHANGE COLUMN <col1> COMMENT '<comment1>', CHANGE COLUMN <col2> COMMENT 'comment2'; ...

Latest Reply
filipniziol
Contributor
  • 1 kudos

Hi, assuming you have a dictionary with a column-name-to-comment mapping, you can do this using PySpark like this: columns_comments = { "col1": "comment1", "col2": "comment2", # Add all your columns and comments here } for col, comment in c...
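
The preview truncates the loop; a minimal completion under the same assumption (one ALTER TABLE statement per column, since a single statement with multiple CHANGE COLUMN clauses is what errored for the original poster). The table name is illustrative and spark is the notebook session.

    columns_comments = {
        "col1": "comment1",
        "col2": "comment2",
        # Add all your columns and comments here
    }

    for col_name, comment in columns_comments.items():
        escaped = comment.replace("'", "\\'")  # naive quoting for the SQL literal
        spark.sql(
            f"ALTER TABLE my_catalog.my_schema.my_table "
            f"CHANGE COLUMN {col_name} COMMENT '{escaped}'"
        )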

3 More Replies
prabbalagilead
by New Contributor II
  • 243 Views
  • 1 reply
  • 0 kudos

How do I find the total number of input tokens to Genie?

I am calculating usage analytics for my work, where they use Genie. I have given the following to my Genie as a definition: (1) instructions, (2) example SQL queries, (3) within the catalog, I went to the relevant table schemas and added comments, descriptio...

Latest Reply
prabbalagilead
New Contributor II
  • 0 kudos

Or is there any set of tables and functions to determine the number of input and output tokens per query?

JCamiloCS
by New Contributor
  • 6223 Views
  • 1 reply
  • 0 kudos

How to use R in Databricks

Hello everyone. I am a new user of Databricks; it was implemented at the company where I work. I am a business analyst and I know some R, though not much. When I saw that Databricks could use R I was very excited, because I thought that the...

Latest Reply
karine
New Contributor II
  • 0 kudos

Hello JCamiloCS, did you figure it out? We have had the same question, so I just wonder if you found any good guidance.

fridthoy
by New Contributor II
  • 474 Views
  • 7 replies
  • 0 kudos

Cluster logs folder

Hi, I can't seem to find the cluster_logs folder. Can anyone help me find where the cluster logs are stored? Best regards

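Worth noting while waiting for replies: cluster logs are only delivered to a folder if the cluster has a log destination configured. Below is a minimal sketch of the relevant fragment of a Clusters API create/edit payload; the destination path is a placeholder.

    # With cluster_log_conf set, driver and executor logs are delivered
    # under <destination>/<cluster-id>/ every few minutes.
    cluster_spec = {
        "cluster_name": "my-cluster",
        "cluster_log_conf": {
            "dbfs": {"destination": "dbfs:/cluster-logs"}  # placeholder path
        },
    }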
Latest Reply
fiff
New Contributor II
  • 0 kudos

Thank you for the help! I have enabled predictive optimization for Unity Catalog, thinking it would automatically perform VACUUM on the tables I have in my delta lake. With that in mind, I assumed VACUUM wouldn't require further attention. Would it be...

6 More Replies
sujan1
by New Contributor
  • 2320 Views
  • 1 reply
  • 0 kudos

requirements.txt with cluster libraries

Cluster libraries are supported from version 15.0 - Databricks Runtime 15.0 | Databricks on AWS. How can I specify the requirements.txt file path in the libraries of a job cluster in my workflow? Can I use a relative path? Is it relative to the root of th...

Latest Reply
462098
New Contributor II
  • 0 kudos

To use the new "requirements.txt" feature in your cluster, do the following: change your cluster's "Databricks Runtime Version" to 15.0 or greater (example: "15.4 LTS ML (includes Apache Spark 3.5.0, Scala 2.12)"). Navigate to the cluster's "Libraries...
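
For a job cluster specifically, the libraries array accepts a requirements-file entry on DBR 15.0+. Below is a minimal sketch of a Jobs API task payload; the workspace path and node type are placeholders, and an absolute workspace path is the safer choice given the original question about relative paths.

    # Fragment of a Jobs API task: attach a requirements.txt to the job cluster.
    task = {
        "task_key": "my_task",
        "new_cluster": {
            "spark_version": "15.4.x-scala2.12",
            "node_type_id": "i3.xlarge",  # placeholder node type
            "num_workers": 2,
        },
        "libraries": [
            {"requirements": "/Workspace/Users/me@example.com/project/requirements.txt"}
        ],
    }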

ChristopherQ1
by New Contributor
  • 848 Views
  • 1 reply
  • 0 kudos

Can we share Delta table data with Salesforce using OData?

Hello! I'm seeking recommendations for streaming on-demand data from Databricks Delta tables to Salesforce. Is OData a viable choice? Thanks.

Latest Reply
matthew_m
Databricks Employee
  • 0 kudos

Hi @ChristopherQ1, Salesforce has released a zero-copy connection that relies on the SQL Warehouse to ingest data when needed. I suggest you consider that instead of OData. Matthew

nickneoners
by New Contributor II
  • 1298 Views
  • 5 replies
  • 1 kudos

Variables in databricks.yml "include:" - Asset Bundles

Hi, we've got an app that we deploy to multiple customers' workspaces. We're looking to transition to asset bundles. We would like to structure our resources like:

    src/
    resources/
    |-- customer_1/
    |   |-- job_1
    |   |-- job_2
    |-- customer_2/
    |-- job_...

Latest Reply
Breno_Ribeiro
New Contributor II
  • 1 kudos

I have a similar use case. We have two different Databricks hosts, EU and NA. In some cases we need to deploy a similar job to both hosts. To handle that, here is how I did it: in the job folder I created different job files, one for each host. In additio...

4 More Replies
Sourav-Kundu
by New Contributor III
  • 309 Views
  • 1 reply
  • 0 kudos

Databricks Apps is now available in Public Preview

Databricks Apps, a new way to build and deploy internal data and AI applications, is now available in Public Preview. Databricks Apps let developers build native apps using frameworks like Dash, Shiny, and Streamlit, enabling data applications for non...
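
For a sense of what these apps look like, here is a generic example using Streamlit, one of the frameworks named above; it is plain Streamlit code, not Databricks-specific API usage.

    import streamlit as st

    st.title("Hello from a Databricks App")

    name = st.text_input("Your name")
    if name:
        st.write(f"Welcome, {name}!")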

Latest Reply
Ranjith_Ajit
New Contributor II
  • 0 kudos

How do I enable this preview in the account console?

trevormccormick
by New Contributor III
  • 308 Views
  • 3 replies
  • 0 kudos

Embed Dashboard - GraphQL Operation Not Authentic

I have added a domain to my list of approved domains for embedding dashboards from my Databricks instance. This domain hosts my Docusaurus site. When the page with the embedded dashboard loads, it makes some network requests to Databricks that are fa...

Latest Reply
trevormccormick
New Contributor III
  • 0 kudos

Is it possible that this is happening because the website is not HTTPS?

2 More Replies
RobsonNLPT
by Contributor
  • 12270 Views
  • 4 replies
  • 3 kudos

Permissions on Unity Catalog Table Constraints

Hi all. I've used the new options to add constraints to UC tables. Even after granting a user permissions (ALL PRIVILEGES) on the particular schema, we get errors when trying to add PKs. The message doesn't make sense (PERMISSION_DENIED: User is not an owner of T...

Latest Reply
dmart
New Contributor III
  • 3 kudos

So how does one grant these permissions to non-owners?
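
One answer sketch, assuming the PERMISSION_DENIED comes from constraint DDL requiring table ownership rather than schema privileges: transfer ownership of the table to the user, or to a group they belong to, before adding the PK. Names below are illustrative, spark is the notebook session, and the PK column must already be NOT NULL.

    # Transfer table ownership to a group the user belongs to.
    spark.sql("ALTER TABLE my_catalog.my_schema.my_table OWNER TO `data-engineers`")

    # The constraint DDL should then succeed for members of that group.
    spark.sql("""
        ALTER TABLE my_catalog.my_schema.my_table
        ADD CONSTRAINT pk_my_table PRIMARY KEY (id)
    """)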

3 More Replies
