Community Discussions

Forum Posts

anonymous_567
by New Contributor II
  • 98 Views
  • 1 reply
  • 0 kudos

Ingesting Non-Incremental Data into Delta

Hello, I have non-incremental data landing in a storage account. This data contains old data from before as well as new data. I would like to avoid doing a complete table deletion and table creation just to upload the data from storage and have an upd...

Latest Reply
AmanSehgal
Honored Contributor III
  • 0 kudos

Well, if you know the conditions to separate new data from old data, then while reading the data into your dataframe, use a filter or where clause to select the new data and ingest it into your Delta table. This is how you can do it in general. But if you ha...
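
A minimal PySpark sketch of that approach, run from a Databricks notebook where spark is predefined; the landing path, date column event_date, cutoff value, and target table my_delta_table are all hypothetical examples:

```python
# Read the full landing dataset (old + new records); the path is a placeholder.
df = spark.read.format("csv").option("header", "true").load("/mnt/landing/data/")

# Keep only records past the current high-water mark; the column name
# and cutoff below are hypothetical examples.
new_rows = df.filter(df.event_date > "2024-01-01")

# Append just the new rows to the existing Delta table.
new_rows.write.format("delta").mode("append").saveAsTable("my_delta_table")
```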

Mustafa_Kamal
by New Contributor II
  • 31 Views
  • 1 reply
  • 0 kudos

Parameterizing DLT Pipelines

Hi Everyone, I have a DLT pipeline which I need to execute for different source systems. Need advice on how to parameterize this. I have gone through many articles on the web, but it seems there is no accurate information available. Can anyone please hel...

Latest Reply
AmanSehgal
Honored Contributor III
  • 0 kudos

You can provide parameters in the configuration section of the DLT pipeline and access them in your code using spark.conf.get(<parameter_name>). Parameterize DLT pipelines
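
For example, a sketch of a DLT notebook cell, assuming a configuration entry source_system was set in the pipeline settings; the key, table name, and landing path below are hypothetical:

```python
import dlt

# Read the value set under "Configuration" in the DLT pipeline settings,
# e.g. {"source_system": "erp_eu"}; `spark` is provided by the DLT runtime.
source_system = spark.conf.get("source_system")

@dlt.table(name=f"bronze_{source_system}_orders")
def bronze_orders():
    # One landing folder per source system is an assumed convention.
    return spark.read.format("json").load(f"/mnt/landing/{source_system}/orders/")
```

Running the same notebook in pipelines with different configuration values then produces per-source tables without code changes.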

databrciks
by Visitor
  • 154 Views
  • 0 replies
  • 0 kudos

Databricks: failure logs

Hello Team, I am new to Databricks. Generally, where are all the logs stored in Databricks? I see that if any job fails, below the command I could see some error messages. Otherwise, in real time, how do I check the log files/error messages in the Databricks UI? T...

k2
by New Contributor
  • 196 Views
  • 1 reply
  • 0 kudos

Log delivery is not creating data in S3 bucket

Hi, does anyone have an idea about the typical duration for Databricks to create logs in an S3 bucket using the databricks_mws_log_delivery Terraform resource? I've implemented the code provided in the Databricks official documentation, but I've be...

Latest Reply
k2
New Contributor
  • 0 kudos

The issue has been resolved. There was no problem with the code or the API. However, it took over 12 hours for logs to start appearing in my bucket, despite Databricks documentation indicating that logs should appear within 1 hour. Thank you!

TheIceBrick
by New Contributor III
  • 2391 Views
  • 3 replies
  • 1 kudos

Is there a (request-)size limit for the Databricks REST API SQL statements?

When inserting rows through the SQL API (/api/2.0/sql/statements/), when more than a certain number of records (about 25 records with 8 small columns) are included in the statement, the call fails with the error: "The request could not be processed by...

Community Discussions
REST API
Sql Statements
Latest Reply
ChrisCkx
Visitor
  • 1 kudos

@TheIceBrick did you find out anything else about this? I am experiencing exactly the same: I can insert up to 35 rows but it breaks at about 50 rows. The payload size is 42 KB, and I am passing parameters for each row. @Debayan This is nowhere near the 16 MiB /...
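
One common workaround is to keep each statement under the limit by batching the inserts across several calls. A sketch against the Statement Execution API, assuming hypothetical host, token, warehouse ID, and table name, and rows shaped as (id, name) tuples:

```python
import requests

HOST = "https://<workspace-host>"   # hypothetical workspace URL
TOKEN = "<personal-access-token>"   # hypothetical token
WAREHOUSE_ID = "<warehouse-id>"     # hypothetical SQL warehouse ID

def insert_rows(rows, batch_size=25):
    """Insert (id, name) tuples in small batches so each request stays small."""
    for start in range(0, len(rows), batch_size):
        batch = rows[start:start + batch_size]
        # Named parameter markers (:id0, :name0, ...) for each row in the batch.
        placeholders = ", ".join(f"(:id{i}, :name{i})" for i in range(len(batch)))
        parameters = []
        for i, (row_id, name) in enumerate(batch):
            parameters.append({"name": f"id{i}", "value": str(row_id), "type": "INT"})
            parameters.append({"name": f"name{i}", "value": name, "type": "STRING"})
        resp = requests.post(
            f"{HOST}/api/2.0/sql/statements/",
            headers={"Authorization": f"Bearer {TOKEN}"},
            json={
                "warehouse_id": WAREHOUSE_ID,
                "statement": f"INSERT INTO my_table (id, name) VALUES {placeholders}",
                "parameters": parameters,
            },
        )
        resp.raise_for_status()
```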

2 More Replies
Ruby8376
by Valued Contributor
  • 515 Views
  • 7 replies
  • 1 kudos

Expose delta table data to Salesforce - odata?

Hi, looking for suggestions to stream on-demand data from Databricks Delta tables to Salesforce. Is OData a good option?

Latest Reply
-werners-
Esteemed Contributor III
  • 1 kudos

I see. Is there a possibility in SF to define an external location/data source? Just guessing here, as these types of packages are really good at isolating data, not integrating it.

6 More Replies
jenshumrich
by New Contributor III
  • 195 Views
  • 2 replies
  • 0 kudos

Long running jobs get lost

Hello, I tried to schedule a long-running job and surprisingly it seems to neither terminate (and thus does not let the cluster shut down), nor continue running, even though the state is still "Running". But the truth is that the job has miserably ...

Latest Reply
Lakshay
Esteemed Contributor
  • 0 kudos

Have you looked at the SQL plan to see what Spark job 72 was doing?

1 More Reply
chari
by Contributor
  • 51 Views
  • 3 replies
  • 0 kudos

Reading CSV file with Spark throws [insufficient privilege] error

Hello Community, I have some CSV files saved in the Databricks workspace and want to read them with Spark. I make use of the command df = spark.read.format('csv').load(r'filepath'). However, it throws the error org.apache.spark.SparkSecurityException: [INSU...

Latest Reply
Lakshay
Esteemed Contributor
  • 0 kudos

If this is a UC-enabled workspace, you need to provide the right access.
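
For example, on a UC-enabled workspace a path-based read typically needs grants along these lines. A sketch run from a notebook (where spark is predefined), with hypothetical catalog, schema, external location, and group names:

```python
# All securable and principal names below are hypothetical examples.
spark.sql("GRANT USE CATALOG ON CATALOG main TO `data_readers`")
spark.sql("GRANT USE SCHEMA ON SCHEMA main.default TO `data_readers`")
spark.sql("GRANT READ FILES ON EXTERNAL LOCATION `landing_zone` TO `data_readers`")
```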

2 More Replies
thilanka02
by New Contributor
  • 109 Views
  • 2 replies
  • 1 kudos

Resolved! Spark read CSV does not throw Exception if the file path is not available in Databricks 14.3

We were using this method and this was working as expected in Databricks 13.3.

    def read_file():
        try:
            df_temp_dlr_kpi = spark.read.load(raw_path, format="csv", schema=kpi_schema)
            return df_temp_dlr_kpi
        except Exce...
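
Since Spark reads are lazily evaluated, one workaround (a sketch, not the thread's confirmed fix) is to probe the path eagerly before reading, so a missing path fails inside the try block rather than at the first action:

```python
def read_file():
    try:
        # dbutils.fs.ls raises immediately if raw_path does not exist,
        # instead of deferring the failure until an action runs the plan.
        dbutils.fs.ls(raw_path)
        return spark.read.load(raw_path, format="csv", schema=kpi_schema)
    except Exception as e:
        print(f"Failed to read {raw_path}: {e}")
        return None
```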

Latest Reply
thilanka02
New Contributor
  • 1 kudos

Thank you @daniel_sahal for the reply

1 More Reply
liormayn
by New Contributor
  • 141 Views
  • 1 reply
  • 0 kudos

OSError: [Errno 78] Remote address changed

Hello :) As part of deploying an app that previously ran directly on EMR to Databricks, we are running experiments using LTS 9.1, and getting the following error: PythonException: An exception was thrown from a UDF: 'pyspark.serializers.SerializationEr...

Latest Reply
shan_chandra
Honored Contributor III
  • 0 kudos

@liormayn - could you please let us know if you had a chance to run it on DBR 10.4 LTS?

Ajay-Pandey
by Esteemed Contributor III
  • 668 Views
  • 3 replies
  • 2 kudos

Resolved! Update regarding Community Reward Store

Hi Team, is there any update on the Community Reward Store? It's been discontinued from the old portal, and we still can't see the new portal for that. Is there any expected date when this will be available for community members?

Latest Reply
Ajay-Pandey
Esteemed Contributor III
  • 2 kudos

Thanks for the update.

2 More Replies
anonymous_567
by New Contributor II
  • 176 Views
  • 3 replies
  • 0 kudos

Autoloader update table when new changes are made

Hello, every day a new file of the same name gets sent to my storage account with old and new data appended at the end. Columns may also be added during one of these file updates. This file does a complete overwrite of the previous file. Is it possibl...

Latest Reply
data-grassroots
New Contributor
  • 0 kudos

This may be helpful - the bit on allow overwrites: https://docs.databricks.com/en/ingestion/auto-loader/faq.html
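
A minimal Auto Loader sketch based on that FAQ entry, with hypothetical paths and table name; cloudFiles.allowOverwrites lets the stream reprocess a file that is overwritten in place, and mergeSchema handles newly added columns:

```python
(spark.readStream
    .format("cloudFiles")
    .option("cloudFiles.format", "csv")
    .option("cloudFiles.allowOverwrites", "true")              # pick up overwritten files
    .option("cloudFiles.schemaLocation", "/mnt/schemas/daily")  # schema tracking/evolution
    .load("/mnt/landing/daily/")
    .writeStream
    .option("checkpointLocation", "/mnt/checkpoints/daily")
    .option("mergeSchema", "true")                             # accept newly added columns
    .trigger(availableNow=True)
    .toTable("bronze_daily"))
```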

2 More Replies
Miguel_Grafana
by New Contributor
  • 70 Views
  • 0 replies
  • 0 kudos

Azure Oauth Passthrough with the Go Driver

Can anyone point me towards some resources for achieving this? I already have the token. Trying with: dbsql.WithAccessToken(settings.Token). But I'm getting the following error: Unable to load OAuth Config: request error after 1 attempt(s): unexpected HT...

Alexandru
by New Contributor II
  • 278 Views
  • 3 replies
  • 0 kudos

Resolved! VSCode Python project for development

Hi, I'm trying to set up a local development environment using Python / VSCode / Poetry. Also, linting is enabled (Microsoft Pylance extension) and python.analysis.typeCheckingMode is set to strict. We are using Python files for our code (.py) whit...

Latest Reply
artsheiko
Valued Contributor III
  • 0 kudos

Hi Alexandru, take a look at the VSCode extension for Databricks: https://marketplace.visualstudio.com/items?itemName=databricks.databricks

2 More Replies