- 565 Views
- 0 replies
- 0 kudos
I am investigating how to connect Databricks and Stripe. Stripe has really good documentation, and I have decided to set up a webhook in Django as per their recommendation. This function handles events as they occur in Stripe: ...
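Since the snippet in the post is truncated, here is a minimal sketch of what such a Django webhook handler typically looks like, using the official stripe package; the STRIPE_WEBHOOK_SECRET setting and the handle_invoice_paid helper are hypothetical names, not the poster's code, and the exact exception import path can differ between stripe versions.

import stripe
from django.conf import settings
from django.http import HttpResponse
from django.views.decorators.csrf import csrf_exempt

@csrf_exempt
def stripe_webhook(request):
    payload = request.body
    sig_header = request.META.get("HTTP_STRIPE_SIGNATURE", "")
    try:
        # Verify the signature so forged requests are rejected.
        event = stripe.Webhook.construct_event(
            payload, sig_header, settings.STRIPE_WEBHOOK_SECRET  # hypothetical setting
        )
    except (ValueError, stripe.error.SignatureVerificationError):
        return HttpResponse(status=400)
    if event["type"] == "invoice.paid":
        # Hand the event off to downstream processing, e.g. land it in cloud
        # storage for Databricks to ingest; handle_invoice_paid is hypothetical.
        handle_invoice_paid(event["data"]["object"])
    return HttpResponse(status=200)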
by Munni • New Contributor II
- 289 Views
- 0 replies
- 0 kudos
Hi, I need some help. I am reading a CSV file through PySpark in which one field is encoded with double quotes, and I should get that value along with the double quotes. The Spark version is 3.0.1.
Input:
col1,col2,col3
"A",""B,C"","D"
Expected output: A , "B,C" , D
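One common approach for keeping embedded double quotes is to set the escape character equal to the quote character, so a doubled quote ("") is read as a literal ". A minimal sketch, assuming the sample above is saved at a hypothetical /tmp/input.csv; parser behaviour for quotes at the very start of a field can vary between Spark versions, so verify against your actual data.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

df = (
    spark.read
    .option("header", "true")
    .option("quote", '"')
    .option("escape", '"')  # treat "" as a literal " inside a quoted field
    .csv("/tmp/input.csv")  # hypothetical path for the sample above
)
df.show(truncate=False)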
- 1380 Views
- 2 replies
- 1 kudos
I have an issue in pyspark.pandas to report. Is there a GitHub repo or some forum where I can register my issue? Here's the issue
Latest Reply
Hi, @Krishna Zanwar, could you please raise a support case to report the bug? Please refer to https://docs.databricks.com/resources/support.html to engage with Databricks Support.
1 More Replies
- 4512 Views
- 1 replies
- 2 kudos
I am configuring databricks_mws_credentials through Terraform on AWS. This used to work up to a couple of days ago; now I am getting "Error: cannot create mws credentials: Cannot complete request; user is unauthenticated". My user/pw/account credential...
Latest Reply
Update: after changing the account password, the error went away. There seems to have been a temporary glitch in Databricks preventing Terraform from working with the old password - because the old password was correctly set up. Anyhow, now I have a w...
- 1168 Views
- 0 replies
- 2 kudos
Hello Team, I am trying to copy the xlsx files from SharePoint and move them to Azure Blob Storage:
USERNAME = app_config_client.get_configuration_setting(key='BIAppConfig:SharepointUsername',label='BIApp').value
PASSWORD = app_config_client.get_configurat...
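For context, a minimal sketch of the overall flow, assuming the Office365-REST-Python-Client and azure-storage-blob packages; the site URL, file path, container name and connection string are hypothetical, and USERNAME/PASSWORD stand in for the App Configuration lookups shown in the post.

import io
from office365.runtime.auth.user_credential import UserCredential
from office365.sharepoint.client_context import ClientContext
from azure.storage.blob import BlobServiceClient

USERNAME = "bi-service@contoso.com"                      # from App Configuration in the post
PASSWORD = "..."                                         # from App Configuration in the post
SITE_URL = "https://contoso.sharepoint.com/sites/bi"     # hypothetical
FILE_URL = "/sites/bi/Shared Documents/report.xlsx"      # hypothetical
BLOB_CONN_STR = "DefaultEndpointsProtocol=https;..."     # hypothetical

# Download the SharePoint file into memory.
ctx = ClientContext(SITE_URL).with_credentials(UserCredential(USERNAME, PASSWORD))
buf = io.BytesIO()
ctx.web.get_file_by_server_relative_url(FILE_URL).download(buf).execute_query()
buf.seek(0)

# Upload the bytes to Azure Blob Storage.
blob_service = BlobServiceClient.from_connection_string(BLOB_CONN_STR)
blob_service.get_blob_client("raw", "sharepoint/report.xlsx").upload_blob(buf, overwrite=True)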
- 356 Views
- 0 replies
- 0 kudos
Data + AI World Tour brings the data lakehouse to the global data community. With content, customers and speakers tailored to each region, the tour showcases how and why the data lakehouse is quickly becoming the cloud data archite...
- 6427 Views
- 18 replies
- 3 kudos
Hi there, I am developing a cluster node initialization script (https://docs.gcp.databricks.com/clusters/init-scripts.html#environment-variables) in order to install some custom libraries. Reading the Databricks docs, we can get some environment var...
Latest Reply
We can infer the cluster DBR version using the environment variable $DATABRICKS_RUNTIME_VERSION. (For the exact Spark/Scala version mapping, you can refer to the specific DBR release notes.) Sample usage inside an init script:
DBR_10_4_VERSION="10.4"
if [[ "$DATABRICKS_...
17 More Replies
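To illustrate the reply, here is a minimal sketch of the same version check in Python (a real init script would normally do this in bash, as in the truncated snippet above); the package names are hypothetical.

import os
import subprocess

dbr_version = os.environ.get("DATABRICKS_RUNTIME_VERSION", "")

if dbr_version.startswith("10.4"):
    # Install the wheel built for the DBR 10.4 library stack (hypothetical name).
    subprocess.run(["pip", "install", "mylib-dbr104"], check=True)
else:
    subprocess.run(["pip", "install", "mylib"], check=True)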
- 433 Views
- 0 replies
- 0 kudos
We had working code as below:
print(f"{file_name}Before insert count", datetime.datetime.now(), scan_df_new.count())
print(scan_df_new.show())
Output:
scan_20220908120005_10Before insert count 2022-09-14 11:37:15.853588 3
+-------------------+----------+--------...
- 2377 Views
- 5 replies
- 4 kudos
I've posted the same question on Stack Overflow to try to maximize reach and potentially raise this issue to Databricks. I am trying to query Delta tables from my AWS Glue Catalog on the Databricks SQL engine. They are stored in Delta Lake format. I ha...
Latest Reply
Hi @Nick Agel, hope all is well! Just wanted to check in on whether you were able to resolve your issue; would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help. We'd love to hear from you. Thanks!
4 More Replies
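One workaround worth noting (it sidesteps the Glue integration rather than fixing it): if the S3 location of the Delta files is known, you can register the table directly over that location. A minimal sketch with hypothetical database, table and path names:

# Register a Delta table over its storage location instead of relying on
# the Glue catalog entry; names and the S3 path are hypothetical.
spark.sql("""
    CREATE TABLE IF NOT EXISTS my_db.events
    USING DELTA
    LOCATION 's3://my-bucket/warehouse/events/'
""")
spark.sql("SELECT COUNT(*) AS n FROM my_db.events").show()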
by tariq • New Contributor III
- 3982 Views
- 4 replies
- 0 kudos
I'm not sure how a simple thing like importing a module in Python can be so broken in such a product. First, I was able to make it work using the following:
import sys
sys.path.append("/Workspace/Repos/Github Repo/sparkling-to-databricks/src")
from ut...
Latest Reply
I too wonder the same thing. How can importing a Python module be so difficult and not even documented, lol. No need for libraries. Here's what worked for me. Step 1: Upload the module by first opening a notebook >> File >> Upload Data >> drag and drop ...
3 More Replies
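A minimal sketch of the sys.path approach from the original post; the repo path is the poster's, while the module and function names are hypothetical stand-ins for the truncated import.

import sys

# Make the repo's src directory importable from a notebook.
sys.path.append("/Workspace/Repos/Github Repo/sparkling-to-databricks/src")

# Hypothetical module/function living under src/.
from utils import transform_orders
transform_orders()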
- 844 Views
- 0 replies
- 4 kudos
Hi all, I'm trying to run some functions from another notebook (data_process_notebook) in my main notebook, using the %run command. When I run the command %run ../path/to/data_process_notebook, it is able to complete successfully, no path, pe...
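For context, a minimal sketch of the %run pattern (paths and names hypothetical): %run executes the target notebook in the caller's scope, so its functions become directly available.

# Cell 1 of the main notebook -- %run must be alone in its own cell:
# %run ../utils/data_process_notebook

# Cell 2 -- functions defined in data_process_notebook are now in scope;
# clean_orders is a hypothetical helper defined there.
df_clean = clean_orders(df_raw)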
- 14461 Views
- 5 replies
- 0 kudos
Error: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:997)
> python --version
Python 3.10.4
This error seems to be coming from the thrift backend. I suspect but have not confirmed that t...
Latest Reply
I have the same issue and tried the solution mentioned above. It still did not work. I am getting the below error:
Error: ('HY000', '[HY000] [Simba][ThriftExtension] (14) Unexpected response from server during a HTTP connection: SSL_connect: certificate ve...
4 More Replies
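For the self-signed-certificate case, one thing worth checking is whether the connector can see a CA bundle that includes your corporate root certificate. A minimal sketch assuming the databricks-sql-connector package; the hostname, HTTP path and token are hypothetical, and the _tls_trusted_ca_file keyword is an assumption here - check your connector version's documentation for the exact TLS options it supports.

from databricks import sql

conn = sql.connect(
    server_hostname="adb-1234567890123456.7.azuredatabricks.net",  # hypothetical
    http_path="/sql/1.0/warehouses/abcdef1234567890",              # hypothetical
    access_token="dapiXXXXXXXX",                                   # hypothetical
    _tls_trusted_ca_file="/etc/ssl/certs/corp-ca-bundle.pem",      # assumption: CA bundle option
)
with conn.cursor() as cursor:
    cursor.execute("SELECT 1")
    print(cursor.fetchall())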
- 1522 Views
- 1 replies
- 1 kudos
I have set up a Spring Boot application which works as expected as a standalone Spring Boot app. When I build the jar and try to set it up as a Databricks job, I am facing these issues. I am getting the same error locally as well. I have tried using maven-s...
Latest Reply
Atanu • Esteemed Contributor
Could you please try with a Python terminal and see how that behaves? I am not 100% sure whether this relates to your use case. @Dinesh L
- 1261 Views
- 3 replies
- 0 kudos
I need to duplicate a job created in stage A into another stage, automatically. Is that possible?
Latest Reply
Atanu • Esteemed Contributor
You may try to get the job details from our Jobs API (https://docs.databricks.com/dev-tools/api/latest/jobs.html#operation/JobsGet) and use the response to duplicate it.
2 More Replies
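A minimal sketch of the duplicate-a-job idea from the reply, using the Jobs 2.1 REST API via requests; the workspace host, token and job id are hypothetical.

import requests

HOST = "https://my-workspace.cloud.databricks.com"   # hypothetical
HEADERS = {"Authorization": "Bearer dapiXXXXXXXX"}   # hypothetical token

# Fetch the source job's settings...
resp = requests.get(f"{HOST}/api/2.1/jobs/get", headers=HEADERS, params={"job_id": 123})
resp.raise_for_status()
settings = resp.json()["settings"]

# ...then create a copy (optionally renamed) for the target stage.
settings["name"] = settings["name"] + " (stage B copy)"
created = requests.post(f"{HOST}/api/2.1/jobs/create", headers=HEADERS, json=settings)
created.raise_for_status()
print("New job id:", created.json()["job_id"])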
- 1257 Views
- 3 replies
- 0 kudos
Hello everybody, I recently discovered (the hard way) that when a query plan uses cached data, AQE does not kick in. The result is that you lose the super cool feature of dynamic partition coalescing (no more custom shuffle readers in the DAG). Is ther...
Latest Reply
Hi @Pantelis Maroudis, did you check the physical query plan? Did you check the SQL sub-tab within the Spark UI? It will help you better understand what is happening.
2 More Replies
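A minimal sketch for reproducing the observation in this thread: compare the explain output of the same aggregation before and after caching, and look for AdaptiveSparkPlan / coalesced shuffle reader nodes. Behaviour varies across Spark versions, so treat this as a diagnostic, not a fix.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
spark.conf.set("spark.sql.adaptive.enabled", "true")
spark.conf.set("spark.sql.adaptive.coalescePartitions.enabled", "true")

df = spark.range(1_000_000).withColumn("key", F.col("id") % 100)

df.groupBy("key").count().explain()   # fresh data: expect an AdaptiveSparkPlan node

df.cache().count()                    # materialize the cache
df.groupBy("key").count().explain()   # cached input: check whether AQE still applies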