Data Engineering

Forum Posts

190809
by Contributor
  • 565 Views
  • 0 replies
  • 0 kudos

Pulling Data From Stripe to Databricks using the Webhook

I am doing some investigation into how to connect Databricks and Stripe. Stripe has really good documentation, and I have decided to set up a webhook in Django as per their recommendation. This function handles events as they occur in Stripe:-----------...
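
A minimal sketch of the Django handler described in the post, assuming the standard stripe Python library; the signing secret and the write_to_landing_zone helper are hypothetical, standing in for however you land events where Databricks can ingest them:

    import json
    import stripe
    from django.http import HttpResponse
    from django.views.decorators.csrf import csrf_exempt

    ENDPOINT_SECRET = "whsec_..."  # hypothetical: your webhook signing secret

    @csrf_exempt
    def stripe_webhook(request):
        payload = request.body
        sig_header = request.META.get("HTTP_STRIPE_SIGNATURE", "")
        try:
            # Verify the signature before trusting the event
            event = stripe.Webhook.construct_event(payload, sig_header, ENDPOINT_SECRET)
        except (ValueError, stripe.error.SignatureVerificationError):
            return HttpResponse(status=400)
        # Land the raw event as JSON in cloud storage for Databricks to pick up
        write_to_landing_zone(json.dumps(event))  # hypothetical helper
        return HttpResponse(status=200)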

Munni
by New Contributor II
  • 289 Views
  • 0 replies
  • 0 kudos

Hi, I need some help. I am reading a CSV file through PySpark in which one field is encoded with double quotes; I should get that value along with double quo...

Hi, I need some help. I am reading a CSV file through PySpark in which one field is encoded with double quotes, and I should get that value along with the double quotes. Spark version is 3.0.1.
Input:
col1,col2,col3
"A",""B,C"","D"
Expected output:
A , "B,C" , D
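
A hedged sketch of one thing to try in PySpark 3.0.1: setting both quote and escape to the double-quote character, which are standard CSV reader options; the exact combination that preserves the inner quotes should be verified against the real file:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # With escape='"', a doubled quote inside a quoted field is kept as a
    # literal " in the parsed value.
    df = (spark.read
          .option("header", "true")
          .option("quote", '"')
          .option("escape", '"')
          .csv("/path/to/input.csv"))  # hypothetical path
    df.show(truncate=False)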

KrishZ
by Contributor
  • 1380 Views
  • 2 replies
  • 1 kudos

Where to report a bug with Databricks?

I have an issue in pyspark.pandas to report. Is there a GitHub repo or some forum where I can register my issue? Here's the issue

Latest Reply
Debayan
Esteemed Contributor III
  • 1 kudos

Hi @Krishna Zanwar, could you please raise a support case to report the bug? Please refer to https://docs.databricks.com/resources/support.html to engage with Databricks Support.

1 More Replies
Andrei_Radulesc
by Contributor III
  • 4512 Views
  • 1 reply
  • 2 kudos

Resolved! Error: cannot create mws credentials: Cannot complete request; user is unauthenticated

I am configuring databricks_mws_credentials through Terraform on AWS. This used to work up to a couple of days ago; now, I am getting "Error: cannot create mws credentials: Cannot complete request; user is unauthenticated". My user/pw/account credential...

Latest Reply
Andrei_Radulesc
Contributor III
  • 2 kudos

Update: after changing the account password, the error went away. There seems to have been a temporary glitch in Databricks preventing Terraform from working with the old password, because the old password was correctly set up. Anyhow, now I have a w...

RohitKulkarni
by Contributor
  • 1168 Views
  • 0 replies
  • 2 kudos

Get file from SharePoint to copy into Azure blob storage

Hello Team, I am trying to copy the xlsx files from SharePoint and move them to Azure Blob Storage:
USERNAME = app_config_client.get_configuration_setting(key='BIAppConfig:SharepointUsername', label='BIApp').value
PASSWORD = app_config_client.get_configurat...
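
For the overall flow, a minimal sketch assuming the Office365-REST-Python-Client and azure-storage-blob packages (both package choices, plus the URLs, names, and connection string, are assumptions, not taken from the post):

    from office365.runtime.auth.user_credential import UserCredential
    from office365.sharepoint.client_context import ClientContext
    from office365.sharepoint.files.file import File
    from azure.storage.blob import BlobServiceClient

    SITE_URL = "https://yourtenant.sharepoint.com/sites/BI"  # hypothetical
    FILE_URL = "/sites/BI/Shared Documents/report.xlsx"      # hypothetical
    AZURE_CONN_STR = "DefaultEndpointsProtocol=..."          # hypothetical

    # USERNAME and PASSWORD come from App Configuration, as in the post
    ctx = ClientContext(SITE_URL).with_credentials(UserCredential(USERNAME, PASSWORD))
    response = File.open_binary(ctx, FILE_URL)  # download the file bytes

    blob_service = BlobServiceClient.from_connection_string(AZURE_CONN_STR)
    blob_client = blob_service.get_blob_client(container="landing", blob="report.xlsx")
    blob_client.upload_blob(response.content, overwrite=True)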

Anonymous
by Not applicable
  • 356 Views
  • 0 replies
  • 0 kudos

Data + AI World Tour ✈️

Data + AI World Tour brings the data lakehouse to the global data community. With content, customers and speakers tailored to each region, the tour showcases how and why the data lakehouse is quickly becoming the cloud data archite...

ahuarte
by New Contributor III
  • 6427 Views
  • 18 replies
  • 3 kudos

Resolved! Getting Spark & Scala version in Cluster node initialization script

Hi there, I am developing a Cluster node initialization script (https://docs.gcp.databricks.com/clusters/init-scripts.html#environment-variables) in order to install some custom libraries. Reading the Databricks docs, we can get some environment var...

Latest Reply
Lingesh
New Contributor III
  • 3 kudos

We can infer the cluster DBR version using the env variable $DATABRICKS_RUNTIME_VERSION. (For the exact Spark/Scala version mapping, you can refer to the specific DBR release notes.) Sample usage inside an init script:
DBR_10_4_VERSION="10.4"
if [[ "$DATABRICKS_...
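
To confirm the value interactively, a quick check from a notebook reads the same variable the init script sees:

    import os

    # Set on cluster nodes by Databricks, e.g. "10.4"
    print(os.environ.get("DATABRICKS_RUNTIME_VERSION"))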

17 More Replies
nickagel
by New Contributor III
  • 2377 Views
  • 5 replies
  • 4 kudos

AWS Glue Catalog w/ Delta Tables Connected to Databricks SQL Engine - Incompatible format detected.

I've posted the same question on Stack Overflow to try to maximize reach here and potentially raise this issue to Databricks. I am trying to query Delta tables from my AWS Glue Catalog on Databricks SQL Engine. They are stored in Delta Lake format. I ha...
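
"Incompatible format detected" typically means a directory containing a Delta _delta_log is being read through a non-Delta reader. A hedged sketch of the two usual checks, with a hypothetical S3 path (spark is the notebook's SparkSession):

    path = "s3://my-bucket/warehouse/events"  # hypothetical Delta table location

    # Read the location explicitly as Delta rather than Parquet/Hive
    df = spark.read.format("delta").load(path)

    # Or register the location so the engine knows the format up front
    spark.sql(f"CREATE TABLE IF NOT EXISTS events USING DELTA LOCATION '{path}'")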

Latest Reply
Vidula
Honored Contributor
  • 4 kudos

Hi @Nick Agel, hope all is well! Just wanted to check in if you were able to resolve your issue, and would you be happy to share the solution or mark an answer as best? Else please let us know if you need more help. We'd love to hear from you. Thanks!

4 More Replies
tariq
by New Contributor III
  • 3982 Views
  • 4 replies
  • 0 kudos

Importing a Python module

I'm not sure how a simple thing like importing a module in Python can be so broken in such a product. First, I was able to make it work using the following:
import sys
sys.path.append("/Workspace/Repos/Github Repo/sparkling-to-databricks/src")
from ut...
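
For reference, the post's first approach in full, as a hedged sketch; the module name is truncated in the post ("from ut..."), so the names below are hypothetical:

    import sys

    # Make the repo's src directory importable (path taken from the post)
    sys.path.append("/Workspace/Repos/Github Repo/sparkling-to-databricks/src")

    from utils.some_module import some_function  # hypothetical names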

Latest Reply
KrishZ
Contributor
  • 0 kudos

I too wonder the same thing. How can importing a Python module be so difficult and not even documented, lol. No need for libraries. Here's what worked for me. Step 1: Upload the module by first opening a notebook >> File >> Upload Data >> drag and drop ...

3 More Replies
mattmunz
by New Contributor III
  • 14461 Views
  • 5 replies
  • 0 kudos

How can I resolve this SSL error which occurs when calling databricks-sql-connector/databricks.sql.connect() from my Python app?

Error: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:997)
> python --version
Python 3.10.4
This error seems to be coming from the Thrift backend. I suspect but have not confirmed that t...
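
One common workaround for a self-signed certificate in the chain (e.g. a corporate proxy) is pointing Python at a CA bundle that includes that certificate; whether the connector's Thrift backend honors these variables depends on the version, so treat this as a sketch with hypothetical paths and hostnames:

    import os

    # PEM bundle that includes the proxy's CA certificate (hypothetical path)
    os.environ["SSL_CERT_FILE"] = "/etc/ssl/certs/corp-ca-bundle.pem"
    os.environ["REQUESTS_CA_BUNDLE"] = "/etc/ssl/certs/corp-ca-bundle.pem"

    from databricks import sql

    conn = sql.connect(
        server_hostname="adb-1234567890123456.7.azuredatabricks.net",  # hypothetical
        http_path="/sql/1.0/warehouses/abc123",                        # hypothetical
        access_token=os.environ["DATABRICKS_TOKEN"],
    )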

Latest Reply
ziggy
New Contributor II
  • 0 kudos

I have the same issue and tried the solution mentioned above. It still did not work. I am getting the below error:
Error: ('HY000', '[HY000] [Simba][ThriftExtension] (14) Unexpected response from server during a HTTP connection: SSL_connect: certificate ve...

4 More Replies
DineshNO
by New Contributor II
  • 1522 Views
  • 1 reply
  • 1 kudos

How to execute a spark-submit on a Databricks job for a Spring Boot jar built using Maven. Failing with error: Failed to load class com.****.settlement.jobs.EntryPoint.

I have set up a Spring Boot application which works as expected as a standalone app. When I build the jar and try to set it up as a Databricks job, I am facing these issues. I am getting the same error locally as well. I have tried using maven-s...

Latest Reply
Atanu
Esteemed Contributor
  • 1 kudos

Could you please try with a Python terminal and see how that behaves? I am not 100% sure if this relates to your use case. @Dinesh L

F29
by New Contributor
  • 1261 Views
  • 3 replies
  • 0 kudos

Is it possible to duplicate a job into another stage in Databricks, via DevOps or some other way?

I need to duplicate a job created in stage A into another stage, automatically. Is it possible?

Latest Reply
Atanu
Esteemed Contributor
  • 0 kudos

You may try to get the job details from our Jobs API (https://docs.databricks.com/dev-tools/api/latest/jobs.html#operation/JobsGet) and use the response to duplicate it.
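
A minimal sketch of that approach against the Jobs 2.1 API, assuming a personal access token; the workspace URL and job_id are hypothetical:

    import requests

    HOST = "https://<workspace-url>"  # hypothetical
    HEADERS = {"Authorization": "Bearer <personal-access-token>"}

    # Fetch the existing job's settings from stage A
    job = requests.get(f"{HOST}/api/2.1/jobs/get",
                       headers=HEADERS, params={"job_id": 123}).json()

    settings = job["settings"]
    settings["name"] += " (copy)"  # rename for the target stage

    # Create the duplicate (point HOST/HEADERS at the target workspace if needed)
    new_job = requests.post(f"{HOST}/api/2.1/jobs/create",
                            headers=HEADERS, json=settings).json()
    print(new_job["job_id"])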

2 More Replies
pantelis_mare
by Contributor III
  • 1257 Views
  • 3 replies
  • 0 kudos

Spark 3 AQE and cache

Hello everybody, I recently discovered (the hard way) that when a query plan uses cached data, AQE does not kick in. The result is that you lose the super cool feature of dynamic partition coalescing (no more custom shuffle readers in the DAG). Is ther...

Latest Reply
jose_gonzalez
Moderator
  • 0 kudos

Hi @Pantelis Maroudis, did you check the physical query plan? Did you check the SQL sub-tab within the Spark UI? It will help you to understand better what is happening.
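
A quick way to run that check from a notebook: spark.sql.adaptive.enabled is the standard AQE switch in Spark 3, and the formatted plan shows whether AdaptiveSparkPlan / custom shuffle reader nodes appear (spark is the notebook's SparkSession):

    from pyspark.sql import functions as F

    # Default varies by Spark/DBR version; set explicitly for the test
    spark.conf.set("spark.sql.adaptive.enabled", "true")

    df = spark.range(1_000_000).withColumn("bucket", F.col("id") % 100)
    agg = df.groupBy("bucket").count()

    # Compare the plan with and without .cache() on df to see AQE drop out
    agg.explain(mode="formatted")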

2 More Replies