Data Engineering
Forum Posts

Yogybricks
by New Contributor II
  • 1011 Views
  • 2 replies
  • 0 kudos
Latest Reply
Anonymous
Not applicable
  • 0 kudos

Hi @Yogybricks, hope you are well. Just wanted to see if you were able to find an answer to your question, and if so, would you like to mark an answer as best? It would be really helpful for the other members too. Cheers!

1 More Replies
zsucic1
by New Contributor III
  • 1533 Views
  • 2 replies
  • 0 kudos

Resolved! Trigger file_arrival of job on Delta Lake table change

Is there a way to avoid having to create an external data location simply to trigger a job when new data arrives in a specific Delta Lake table?

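For anyone landing here: as far as I know, file arrival triggers currently need a Unity Catalog external location (or volume) to point at, so the usual workaround is to aim the trigger at the Delta table's own storage path. A rough sketch against the Jobs API 2.1, with the workspace URL, token, job id, and path all placeholders:

    import requests

    host = "https://<workspace>.azuredatabricks.net"   # placeholder workspace URL
    token = "<personal-access-token>"                  # placeholder token

    payload = {
        "job_id": 123,  # placeholder job id
        "new_settings": {
            "trigger": {
                "pause_status": "UNPAUSED",
                "file_arrival": {
                    # Aim the trigger at the table's underlying storage path,
                    # registered as a UC external location.
                    "url": "abfss://container@account.dfs.core.windows.net/path/to/delta_table/"
                }
            }
        }
    }

    resp = requests.post(f"{host}/api/2.1/jobs/update",
                         headers={"Authorization": f"Bearer {token}"},
                         json=payload)
    resp.raise_for_status()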
Latest Reply
Anonymous
Not applicable
  • 0 kudos

Hi @zsucic1, hope you are well. Just wanted to see if you were able to find an answer to your question, and if so, would you like to mark an answer as best? It would be really helpful for the other members too. Cheers!

1 More Replies
SaraCorralLou
by New Contributor III
  • 5575 Views
  • 7 replies
  • 2 kudos

Resolved! dbutils.fs.mv - 1 folder and 1 file with the same name and only move the folder

Hello! I am contacting you because of the following problem I am having: in an ADLS folder I have two items, a folder and an automatically generated Block blob file with the same name as the folder. I want to use the dbutils.fs.mv command to move the fo...

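A minimal sketch of one way to do this (all paths are hypothetical): list the parent directory and use isDir() to tell the folder apart from the same-named blob, then move only the directory entry:

    src_parent = "abfss://container@account.dfs.core.windows.net/landing/"
    name = "my_item"
    dst = "abfss://container@account.dfs.core.windows.net/archive/my_item"

    for f in dbutils.fs.ls(src_parent):
        # Directory names from dbutils.fs.ls end with "/"; isDir() confirms it
        # (comparing on the trailing slash alone also works on older runtimes).
        if f.name.rstrip("/") == name and f.isDir():
            dbutils.fs.mv(f.path, dst, recurse=True)  # recurse=True moves the folder contents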
Latest Reply
Anonymous
Not applicable
  • 2 kudos

Hi @SaraCorralLou  Thank you for posting your question in our community! We are happy to assist you. To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answer...

6 More Replies
KalingaSena
by New Contributor II
  • 1671 Views
  • 3 replies
  • 0 kudos

Not able to execute the below SQL query in a Databricks notebook because of a parse error

Hi Team, I am unable to run the below command and it is giving me a parse error. Can anyone point out the issue with the code?

[Screenshot: KalingaSena_1-1689140837096.png]
Latest Reply
BkP
Contributor
  • 0 kudos

Hi, from the error it looks like there is no space between the brackets and the "in" keyword after the WHERE clause. Can you please try again and see if you are still facing the same error?

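The original query is only in the screenshot, so purely as an illustration of the fix suggested above, with a made-up table and columns: put a space between the bracket/column list and the IN keyword:

    query = """
    SELECT *
    FROM sales                          -- hypothetical table
    WHERE region IN ('EMEA', 'APAC')    -- note the space before IN
    """
    df = spark.sql(query)
    display(df)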
2 More Replies
TheoDeSo
by New Contributor III
  • 6587 Views
  • 7 replies
  • 5 kudos

Resolved! Error on Azure-Databricks write output to blob storage account

Hello! After implementing a secret scope to store secrets in an Azure Key Vault, I faced a problem. When writing an output to the blob I get the following error: shaded.databricks.org.apache.hadoop.fs.azure.AzureException: Unable to access con...

Latest Reply
TheoDeSo
New Contributor III
  • 5 kudos

Hi all, thank you for the suggestions. Doing this spark.conf.set("fs.azure.account.key.{storage_account}.dfs.core.windows.net", "{myStorageAccountKey}") for the Hadoop configuration does not work. And the suggestion of @Tharun-Kumar would suggest to har...

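For reference, the pattern that usually works here (scope, key, and account names are placeholders): pull the key from the secret scope at runtime instead of hardcoding it, and use the property name that matches the URI scheme you are writing to:

    storage_account = "mystorageaccount"
    key = dbutils.secrets.get(scope="kv-scope", key="storage-account-key")

    # For ABFS (abfss://) paths:
    spark.conf.set(f"fs.azure.account.key.{storage_account}.dfs.core.windows.net", key)

    # For legacy WASB (wasbs://) paths the property name differs:
    spark.conf.set(f"fs.azure.account.key.{storage_account}.blob.core.windows.net", key)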
6 More Replies
apiury
by New Contributor III
  • 1017 Views
  • 2 replies
  • 1 kudos

Consume gold data layer from web application

Hello! We are developing a web application in .NET and we need to consume data from the gold layer (as if we had a relational database). How can we do it? Should we export data from the gold layer to SQL Server?

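One common approach is to expose the gold tables through a Databricks SQL warehouse and query them from the application over ODBC/JDBC, rather than exporting to SQL Server. Purely as an illustration (shown in Python with the databricks-sql-connector package; a .NET app would do the same through the Databricks ODBC driver), with placeholder connection details and table name:

    from databricks import sql  # pip install databricks-sql-connector

    with sql.connect(server_hostname="adb-<id>.azuredatabricks.net",
                     http_path="/sql/1.0/warehouses/<warehouse-id>",
                     access_token="<token>") as conn:
        with conn.cursor() as cur:
            cur.execute("SELECT * FROM gold.sales_summary LIMIT 10")
            for row in cur.fetchall():
                print(row)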
Latest Reply
Anonymous
Not applicable
  • 1 kudos

Hi @apiury  Thank you for posting your question in our community! We are happy to assist you. To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answers your ...

1 More Replies
NithinTiruveedh
by New Contributor II
  • 15037 Views
  • 12 replies
  • 0 kudos

How can I split a Spark DataFrame into n equal DataFrames (by rows)? I tried to add a Row ID column to achieve this but was unsuccessful.

I have a dataframe that has 5M rows. I need to split it up into 5 dataframes of ~1M rows each. This would be easy if I could create a column that contains Row ID. Is that possible?

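Two sketches of how this is usually done (the row counts match the question; everything else is illustrative): randomSplit for approximately equal pieces, or ntile for exactly equal pieces at the cost of a global sort:

    from pyspark.sql import Window
    from pyspark.sql import functions as F

    df = spark.range(5_000_000)  # stand-in for the 5M-row dataframe

    # Approximate: five ~1M-row dataframes, cheap and parallel
    parts = df.randomSplit([1.0] * 5, seed=42)

    # Exact: tag each row with a bucket 1..5, then filter.
    # Note: a window with no partitionBy pulls all rows through one partition.
    w = Window.orderBy(F.monotonically_increasing_id())
    bucketed = df.withColumn("bucket", F.ntile(5).over(w))
    exact_parts = [bucketed.where(F.col("bucket") == i).drop("bucket")
                   for i in range(1, 6)]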
Latest Reply
Anonymous
Not applicable
  • 0 kudos

Hi @NithinTiruveedh  Thank you for posting your question in our community! We are happy to assist you. To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answ...

11 More Replies
babyhari
by New Contributor II
  • 1067 Views
  • 2 replies
  • 3 kudos

Databricks streaming dataframe into Snowflake

Any suggestions on how to stream data from Databricks into Snowflake? Is Snowpipe the only option? Snowpipe is not fast, since it runs COPY INTO at small batch intervals and not within a few seconds. If there is no option other than Snowpipe, how to call it...

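A common workaround, sketched below with placeholder connection options and table names: Structured Streaming with foreachBatch, writing each micro-batch through the Snowflake Spark connector, so latency is governed by your trigger interval rather than Snowpipe:

    sf_options = {
        "sfUrl": "myaccount.snowflakecomputing.com",
        "sfUser": "user",
        "sfPassword": "****",
        "sfDatabase": "DB",
        "sfSchema": "PUBLIC",
        "sfWarehouse": "WH",
    }

    def write_to_snowflake(batch_df, batch_id):
        # Each micro-batch is written as an ordinary batch append
        (batch_df.write
            .format("snowflake")
            .options(**sf_options)
            .option("dbtable", "TARGET_TABLE")
            .mode("append")
            .save())

    (spark.readStream.table("source_delta_table")
        .writeStream
        .foreachBatch(write_to_snowflake)
        .option("checkpointLocation", "/tmp/checkpoints/snowflake_sink")
        .trigger(processingTime="30 seconds")
        .start())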
Latest Reply
Anonymous
Not applicable
  • 3 kudos

Hi @babyhari  Hope everything is going great. Just wanted to check in if you were able to resolve your issue. If yes, would you be happy to mark an answer as best so that other members can find the solution more quickly? If not, please tell us so we ...

1 More Replies
Gil
by New Contributor III
  • 2877 Views
  • 10 replies
  • 7 kudos

DLT optimize and vacuum

We were finally able to get DLT pipelines to run the optimize and vacuum automatically. We verified this via the table history. However, I am still able to query versions older than 7 days. Has anyone been experiencing this and how were you a...

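If you hit the same thing, one quick check (table name is a placeholder): VACUUM only removes data files older than delta.deletedFileRetentionDuration, so older versions can stay queryable if that property, or the pipeline's own retention setting, is larger than you expect:

    # Inspect retention-related properties on the table
    props = spark.sql("SHOW TBLPROPERTIES my_catalog.my_schema.my_dlt_table").collect()
    for p in props:
        if "retention" in p.key.lower():
            print(p.key, "=", p.value)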
Latest Reply
Anonymous
Not applicable
  • 7 kudos

Hi @Gil  Hope all is well! Just wanted to check in if you were able to resolve your issue and would you be happy to share the solution or mark an answer as best? Else please let us know if you need more help.  We'd love to hear from you. Thanks!

9 More Replies
Anonymous
by Not applicable
  • 1639 Views
  • 3 replies
  • 5 kudos

Dear @Werner Stinckens and @Tyler Retzlaff: We would like to express our gratitude for your participation and dedication in the Databricks Commun...

Dear @Werner Stinckens and @Tyler Retzlaff, we would like to express our gratitude for your participation and dedication in the Databricks Community last week. Your interactions with customers have been valuable and we truly appreciate the time...

[Screenshot: 2023-06-13 at 8.42.49 PM]
Latest Reply
dplante
Contributor II
  • 5 kudos

Congratulations guys!

2 More Replies
Constantine
by Contributor III
  • 4451 Views
  • 2 replies
  • 4 kudos

Resolved! How does merge schema work

Let's say I create a table like CREATE TABLE IF NOT EXISTS new_db.data_table ( key STRING, value STRING, last_updated_time TIMESTAMP ) USING DELTA LOCATION 's3://......'; Now when I insert into this table I insert data which has, say, 20 columns a...

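For context, a minimal sketch of Delta schema evolution (the path is hypothetical): appending with mergeSchema adds any new columns in the incoming data to the table schema, and existing rows get NULL for them:

    (df.write
        .format("delta")
        .mode("append")
        .option("mergeSchema", "true")   # add new columns in df to the table schema
        .save("s3://bucket/path/to/data_table"))

    # Or enable it session-wide for appends and MERGEs:
    spark.conf.set("spark.databricks.delta.schema.autoMerge.enabled", "true")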
Latest Reply
timdriscoll22
New Contributor II
  • 4 kudos

I tried running "REFRESH TABLE tablename;" but I still do not see the added columns in the data explorer columns, while I do see the added columns in the sample data 

1 More Replies
pjain
by New Contributor II
  • 1864 Views
  • 4 replies
  • 0 kudos

_sqldf value in case of query failure in %sql cell

I am trying to write code for error handling in a Databricks notebook in case of a SQL magic cell failure. I have a %sql cell followed by some Python code in the next cells. I want to abort the notebook if the query in the %sql cell fails. To do so I am look...

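One hedged alternative to inspecting _sqldf: run the statement from Python instead of %sql, so a failure surfaces as an ordinary exception you can catch (the query below is a placeholder):

    try:
        _sqldf = spark.sql("SELECT * FROM some_table WHERE ds = '2023-07-01'")
    except Exception as e:
        # Abort the whole notebook run; dbutils.notebook.exit() also works
        # if you prefer a clean exit over a failed run.
        raise RuntimeError(f"SQL step failed, aborting notebook: {e}")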
Latest Reply
Anonymous
Not applicable
  • 0 kudos

Hi @pjain, we haven't heard from you since the last response from @daniel_sahal, and I was checking back to see if their suggestions helped you. Otherwise, if you have any solution, please share it with the community, as it can be helpful to others. A...

3 More Replies
GC-James
by Contributor II
  • 6048 Views
  • 17 replies
  • 5 kudos

Resolved! Lost memory when using dbutils

Why does copying a 9 GB file from a container to /dbfs lose me 50 GB of memory (which doesn't come back until I restart the cluster)?

Latest Reply
AdrianP
New Contributor II
  • 5 kudos

Hi James, did you get to the bottom of this? We are experiencing the same issue, and all the suggested solutions don't seem to work. Thanks, Adrian

16 More Replies
Vadim1
by New Contributor III
  • 1996 Views
  • 4 replies
  • 3 kudos

Resolved! Error on Azure-Databricks write RDD to storage account with wasbs://

Hi, I'm trying to write data from an RDD to the storage account. Adding the storage account key: spark.conf.set("fs.azure.account.key.y.blob.core.windows.net", "myStorageAccountKey"). Read and write to the same storage: val path = "wasbs://x@y.blob.core.windows....

Latest Reply
TheoDeSo
New Contributor III
  • 3 kudos

Hello @Vadim1 and @User16764241763. I'm wondering if you found a way to avoid adding the hardcoded key in the advanced options Spark config section of the cluster configuration. Is there a similar command to spark.conf.set("spark.hadoop.fs.azure.accou...

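Answering the question above as best I can, a sketch with placeholder scope/key names: the wasbs:// property can be set the same way from a secret scope at runtime, and at cluster level a secret reference in the Spark config avoids hardcoding the key:

    key = dbutils.secrets.get(scope="kv-scope", key="storage-account-key")
    spark.conf.set("fs.azure.account.key.y.blob.core.windows.net", key)

    # Cluster Spark config equivalent (one line, no quotes), using a secret reference:
    # spark.hadoop.fs.azure.account.key.y.blob.core.windows.net {{secrets/kv-scope/storage-account-key}}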
3 More Replies