Get Started Discussions
Start your journey with Databricks by joining discussions on getting started guides, tutorials, and introductory topics. Connect with beginners and experts alike to kickstart your Databricks experience.

Forum Posts

alluarjun
by New Contributor
  • 2638 Views
  • 6 replies
  • 0 kudos

databricks asset bundle error-terraform.exe": file does not exist

Hi, I am getting the below error while deploying a Databricks bundle using an Azure DevOps release:  2024-07-07T03:55:51.1199594Z Error: terraform init: exec: "xxxx\\.databricks\\bundle\\dev\\terraform\\xxxx\\.databricks\\bundle\\dev\\bin\\terraform.exe": ...

Latest Reply
BNG_FGA
New Contributor II
  • 0 kudos

We used Git Bash for bash execution, and we set up the variables by going to Control Panel -> System -> Environment Variables.

5 More Replies
ksenija
by Contributor
  • 2467 Views
  • 3 replies
  • 0 kudos

Foreign table to delta streaming table

I want to copy a table from a foreign catalog as my streaming table. This is the code I used, but I am getting an error: Table table_name does not support either micro-batch or continuous scan.; spark.readStream.table(table_name) ...

Latest Reply
cgrant
Databricks Employee
  • 0 kudos

What is the underlying type of the table you are trying to stream from? Structured Streaming does not currently support streaming reads via JDBC, so reading from MySQL, Postgres, etc. is not supported. If you are trying to perform stream ingestion fr...

2 More Replies
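
Since Structured Streaming cannot read JDBC-backed foreign tables directly, one common workaround is to snapshot the foreign table into Delta and stream from the copy. A minimal sketch, assuming a Databricks runtime where `spark` is predefined; all table names are placeholders:

```python
# Batch-read the JDBC/federated table (streaming reads are unsupported there),
# materialize it as a Delta table, then stream from the Delta copy.
# `foreign_catalog.schema.src` and `main.schema.dst` are placeholder names.
df = spark.read.table("foreign_catalog.schema.src")   # batch snapshot
df.write.mode("overwrite").saveAsTable("main.schema.dst")

# Delta tables do support streaming reads:
stream = spark.readStream.table("main.schema.dst")
```

Scheduling the batch copy as a periodic job approximates incremental ingestion when true change capture from the source is unavailable.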
Ramacloudworld
by New Contributor
  • 1835 Views
  • 1 replies
  • 0 kudos

cross join issue generating surrogate keys in delta table

I used the below code to populate the target table. It is working as expected, except for the surrogatekey column. After I inserted a dummy entry (-1) and ran the merge code, it generated the numbers in the surrogatekey column as odd numbers (1, 3, 5, 7, 9). It ...

Latest Reply
cgrant
Databricks Employee
  • 0 kudos

This is expected. Identity columns can have gaps, or per the documentation:  Values assigned by identity columns are unique and increment in the direction of the specified step, and in multiples of the specified step size, but are not guaranteed to b...

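
The guarantee described in the reply can be illustrated with a small self-contained check (toy code, not a Databricks API): identity values stay unique, move in the direction of the step in multiples of the step size, but may have gaps, so 1, 3, 5, 7, 9 is perfectly valid output.

```python
def consistent_with_identity(values, start=1, step=1):
    """True if `values` could have come from an identity column:
    unique, increasing, and each offset from `start` a multiple of `step`.
    Gaps between values are allowed."""
    if sorted(set(values)) != list(values):   # must be unique and increasing
        return False
    return all((v - start) % step == 0 for v in values)

print(consistent_with_identity([1, 3, 5, 7, 9]))  # True: gaps are fine
print(consistent_with_identity([1, 2, 2, 3]))     # False: duplicates
```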
joaquin12beard
by New Contributor II
  • 1197 Views
  • 4 replies
  • 2 kudos

Expose Databricks Cluster Logs to Grafana

I am trying to send Databricks cluster logs to Grafana using an init_script where I define the system journals to consume. The issue I am facing is that I cannot get the driver logs, standard error, and output to reach Grafana. Is there something spe...

Latest Reply
joaquin12beard
New Contributor II
  • 2 kudos

@Walter_C This is the log that is arriving.  

3 More Replies
varunep
by New Contributor II
  • 845 Views
  • 3 replies
  • 0 kudos

Certification exam got suspended

Hello Team, my Data Engineer Associate exam got suspended within seconds of starting, without any reason. Just 10-20 seconds after starting, the exam screen got paused and there was a notice that someone would contact me, but no one contacted till the ...

Latest Reply
Louis_Frolio
Databricks Employee
  • 0 kudos

You need to raise a ticket with Databricks to get to a resolution: https://help.databricks.com/s/contact-us?ReqType=training

2 More Replies
hrishiharsh25
by New Contributor
  • 731 Views
  • 1 replies
  • 0 kudos

Liquid Clustering

How can I use a column for liquid clustering that is not in the first 32 columns of my Delta table schema?

Latest Reply
PotnuruSiva
Databricks Employee
  • 0 kudos

We can only specify columns with statistics collected for clustering keys. By default, the first 32 columns in a Delta table have statistics collected. See Specify Delta statistics columns. We can use the below workaround for your use case: 1. Use th...

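
A hedged sketch of the workaround the reply begins to describe, assuming a recent Databricks runtime where `spark` is predefined; the table and column names are placeholders, and the property name follows the Delta statistics documentation:

```python
# Extend statistics collection to cover the desired column, then set it
# as the clustering key. 'col_33' stands for a column beyond the default
# first-32-columns statistics window.
spark.sql("""
  ALTER TABLE main.schema.my_table
  SET TBLPROPERTIES ('delta.dataSkippingStatsColumns' = 'col_a,col_33')
""")
spark.sql("ALTER TABLE main.schema.my_table CLUSTER BY (col_33)")
# Recompute statistics / run OPTIMIZE afterwards so clustering takes effect.
spark.sql("OPTIMIZE main.schema.my_table")
```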
RajPutta
by New Contributor
  • 1115 Views
  • 1 replies
  • 0 kudos

Databricks migration

How easy is it to migrate from Snowflake or Redshift to Databricks?

Latest Reply
thelogicplus
Contributor
  • 0 kudos

Hi @RajPutta: it is fairly easy to migrate to Databricks; the main thing required is your team's knowledge of the Databricks platform. The steps below are important: Discovery, Assessment, Code conversion and Migration, PoC. If you want to migrate fro...

shrikant_kulkar
by New Contributor III
  • 4173 Views
  • 2 replies
  • 2 kudos

c# connector for databricks Delta Sharing

Any plans for adding a C# connector? What are the alternative ways in the current state?

Latest Reply
Shawn_Eary
Contributor
  • 2 kudos

I'm having problems getting the REST API calls for Delta Sharing to work. Python and Power BI work fine, but the C# code that Databricks AI generates does not work. I keep getting an "ENDPOINT NOT FOUND" error even though config.share is fine. A C# con...

1 More Replies
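
For reference, the underlying Delta Sharing protocol call that any C# client would need to reproduce is a plain HTTPS GET. A minimal Python sketch; the endpoint and token are placeholders that would come from the `endpoint` and `bearerToken` fields of a config.share profile:

```python
import requests

endpoint = "https://sharing.example.com/delta-sharing"  # from config.share "endpoint"
token = "<bearer-token>"                                # from config.share "bearerToken"

# The protocol's "list shares" call: GET {endpoint}/shares with a bearer token.
resp = requests.get(
    f"{endpoint}/shares",
    headers={"Authorization": f"Bearer {token}"},
)
print(resp.status_code, resp.json())
```

An "ENDPOINT NOT FOUND" response usually points at the URL path being assembled differently from this shape, which is worth comparing against whatever the generated C# code sends.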
Wijnand
by New Contributor II
  • 2109 Views
  • 1 replies
  • 0 kudos

Updates on a column in delta table with downstream autoloader

I've got the following questions: 1. Can I pause autoloader jobs, delete the cluster that was used to run these jobs, create a new cluster, and run the jobs with a newer-version cluster? 2. I have one autoloader job that ingests JSONs and transforms this to a del...

Latest Reply
cgrant
Databricks Employee
  • 0 kudos

Hello, 1. Yes, you can pause the job, delete the cluster, upgrade versions of the cluster, etc. With Auto Loader and Structured Streaming the important thing is making sure that the checkpointLocation stays intact, so no deletions, modifications, or m...

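
A minimal Auto Loader sketch illustrating the point about the checkpoint; all paths and table names are placeholders, and a Databricks runtime where `spark` is predefined is assumed:

```python
# The stream can be paused and the cluster replaced at will, as long as
# the checkpointLocation directory is left untouched between runs.
(spark.readStream
   .format("cloudFiles")
   .option("cloudFiles.format", "json")
   .option("cloudFiles.schemaLocation", "/Volumes/main/schema/vol/schemas/src")
   .load("/Volumes/main/schema/vol/landing/")
   .writeStream
   .option("checkpointLocation", "/Volumes/main/schema/vol/checkpoints/src")  # never delete/modify
   .trigger(availableNow=True)
   .toTable("main.schema.bronze_src"))
```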
Shree23
by New Contributor III
  • 2773 Views
  • 2 replies
  • 0 kudos

scalar function in databricks

Hi Expert, here is a SQL Server scalar function; how can it be converted to a Databricks function? SQL: CREATE function [dbo].[gettrans](@PickupCompany nvarchar(2), @SupplyCountry int, @TxnSource nvarchar(10), @locId nvarchar(50), @ExternalSiteId nvarchar(50)) RETURNS INT...

Latest Reply
MathieuDB
Databricks Employee
  • 0 kudos

Hello @Shree23, in Databricks you can create scalar or tabular functions using SQL or Python. Here is the documentation. I converted your SQL Server function to Databricks standards: CREATE OR REPLACE FUNCTION gettrans( PickupCompany STRING, Sup...

1 More Replies
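
For readers hitting the same truncation, a hedged sketch of the Databricks SQL shape the reply refers to. The function body below is hypothetical (the original logic is cut off above); only the CREATE FUNCTION ... RETURNS ... RETURN structure is the point. Assumes a Databricks runtime where `spark` is predefined:

```python
# Scalar SQL UDF with the same signature as the SQL Server function in the
# question. The WHERE clause and table name are illustrative placeholders.
spark.sql("""
CREATE OR REPLACE FUNCTION gettrans(
  PickupCompany STRING,
  SupplyCountry INT,
  TxnSource STRING,
  locId STRING,
  ExternalSiteId STRING
)
RETURNS INT
RETURN SELECT COUNT(*)
       FROM some_catalog.some_schema.transactions t
       WHERE t.pickup_company = PickupCompany
         AND t.supply_country = SupplyCountry
""")
```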
OlekNV
by New Contributor
  • 2569 Views
  • 2 replies
  • 0 kudos

Enable system schemas

Hello All, I'm new to Databricks and have an issue with enabling system schemas. When I run an API call to check the system schema status in metastores, I see that all schemas are in the "Unavailable" state (except "information_schema", which is "ENABLE_COMPLETED"). Is ...

Latest Reply
vaishalisai
New Contributor II
  • 0 kudos

I am also facing the same issues.

1 More Replies
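
The per-schema enable call the thread is about looks roughly like this; the host, metastore id, and token are placeholders, and the path follows the Unity Catalog system-schemas REST API (each schema is enabled individually):

```python
import requests

host = "https://<workspace-host>"
metastore_id = "<metastore-id>"
schema = "access"  # e.g. access, billing, compute

# Enable one system schema on the metastore.
resp = requests.put(
    f"{host}/api/2.1/unity-catalog/metastores/{metastore_id}/systemschemas/{schema}",
    headers={"Authorization": "Bearer <token>"},
)
print(resp.status_code)
```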
Ruby8376
by Valued Contributor
  • 6997 Views
  • 8 replies
  • 2 kudos

Expose delta table data to Salesforce - odata?

Hi, looking for suggestions to stream on-demand data from Databricks Delta tables to Salesforce. Is OData a good option?

Latest Reply
fegvilela
New Contributor II
  • 2 kudos

Hey, I think this might help: https://www.salesforce.com/uk/news/press-releases/2024/04/25/zero-copy-partner-network/

7 More Replies
Nandhini_Kumar
by New Contributor III
  • 5913 Views
  • 7 replies
  • 0 kudos

How to get databricks performance metrics programmatically?

How can I retrieve all Databricks performance metrics on an hourly basis? Is there a recommended method or API available for retrieving performance metrics?

Latest Reply
holly
Databricks Employee
  • 0 kudos

The Spark logs are available through cluster logging. This is enabled at the cluster level, where you choose the destination for the logs. Just a heads up: interpreting them at scale is not trivial. I'd recommend having a read through the overwatch...

6 More Replies
Chris_Konsur
by New Contributor III
  • 3694 Views
  • 4 replies
  • 1 kudos

an autoloader in file notification mode to get files from S3 on AWS -Error

I configured an autoloader in file notification mode to get files from S3 on AWS: spark.readStream.format("cloudFiles").option("cloudFiles.format", "json").option("cloudFiles.inferColumnTypes", "true").option("cloudFiles.schemaLocation", "dbfs:/au...

Latest Reply
Selz
New Contributor II
  • 1 kudos

In case anyone else stumbles across this, I was able to fix my issue by setting up an instance profile with the file notification permissions and attaching the instance profile to the job cluster. It wasn't clear from the documentation that the file ...

3 More Replies
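
A sketch combining the question's options with the accepted fix: run the stream on a cluster whose attached instance profile carries the S3/SQS/SNS permissions that file notification mode needs. The bucket and paths are placeholders; a Databricks runtime where `spark` is predefined is assumed:

```python
# File notification mode is selected via cloudFiles.useNotifications; the
# credentials for creating/reading the notification queue come from the
# cluster's instance profile, not from these options.
(spark.readStream
   .format("cloudFiles")
   .option("cloudFiles.format", "json")
   .option("cloudFiles.inferColumnTypes", "true")
   .option("cloudFiles.useNotifications", "true")   # file notification mode
   .option("cloudFiles.schemaLocation", "dbfs:/autoloader/schema")
   .load("s3://my-bucket/landing/"))
```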
Ludo
by New Contributor III
  • 6870 Views
  • 4 replies
  • 3 kudos

[DeltaTable] Usage with Unity Catalog (ParseException)

Hi, I'm migrating my workspaces to Unity Catalog and the application to use three-level notation (catalog.database.table). See: Tutorial: Delta Lake | Databricks on AWS. I'm having the following exception when trying to use DeltaTable.forName(string name...

Latest Reply
Ludo
New Contributor III
  • 3 kudos

Thank you for the quick feedback @saipujari_spark. Indeed, it's working great within a notebook with Databricks Runtime 13.2, which most likely has custom behavior for Unity Catalog. It's not working in my Scala application running locally with dire...

3 More Replies
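
For completeness, the pattern that works in a notebook, as a PySpark sketch with a placeholder table name (the thread's caveat about locally-running applications still applies; `spark` is assumed predefined by the Databricks runtime):

```python
from delta.tables import DeltaTable

# DeltaTable.forName accepts the Unity Catalog three-level name directly.
dt = DeltaTable.forName(spark, "main.default.my_table")
dt.toDF().show()
```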
