- 1884 Views
- 5 replies
- 11 kudos
Hi, I'm running a couple of notebooks in my pipeline and I would like to set a fixed value for 'spark.sql.shuffle.partitions' - the same value for every notebook. Should I do that by adding spark.conf.set(...) code in each notebook (Runtime SQL configurations ar...
Latest Reply
Hi, thank you all for the tips. I had tried setting this option in the Spark config before, but it didn't work for some reason. Today I tried again and it's working :).
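For readers landing here, a minimal sketch of both approaches, assuming a Python notebook (the value 200 is just an illustration):

```python
# Option 1: set the runtime SQL conf at the top of each notebook; it only
# affects the current Spark session.
spark.conf.set("spark.sql.shuffle.partitions", "200")

# Option 2 (what ultimately worked here): set it once in the cluster's
# Spark config (Cluster > Advanced options > Spark) so every notebook
# attached to the cluster inherits the same value:
#   spark.sql.shuffle.partitions 200

# Verify which value is in effect:
print(spark.conf.get("spark.sql.shuffle.partitions"))
```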
4 More Replies
by SRS • New Contributor II
- 1990 Views
- 3 replies
- 5 kudos
Hello, has anyone tried to create an incremental backup of Delta tables? What I mean is to load into the backup storage only the latest Parquet files that are part of the Delta table and to refresh the _delta_log folder, instead of copying the whole files aga...
Latest Reply
Hi @Stefan Stegaru, you can use Delta time travel to query the data that was just added in a specific version. Then, as @Hubert Dudek mentioned, you can copy over this subset of data to a new table or a new location. You will need to do a deep...
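A minimal sketch of that approach in a Python notebook; table names, paths, and version numbers are placeholders:

```python
# Use time travel to isolate the rows added between two table versions,
# then append just that subset to the backup location.
v_new = spark.read.format("delta").option("versionAsOf", 10).load("/mnt/prod/events")
v_old = spark.read.format("delta").option("versionAsOf", 9).load("/mnt/prod/events")
increment = v_new.exceptAll(v_old)
increment.write.format("delta").mode("append").save("/mnt/backup/events")

# Alternatively, a DEEP CLONE copies data files plus the transaction log, and
# re-running it only copies files that changed since the previous clone:
spark.sql("CREATE OR REPLACE TABLE backup.events DEEP CLONE prod.events")
```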
2 More Replies
- 2417 Views
- 3 replies
- 5 kudos
Not able to find or enable the "Files in Repos" feature in the workspace. What could be the reason?
- 1616 Views
- 4 replies
- 2 kudos
We're in the process of migrating a large graph computation workload to NVIDIA RAPIDS + cuGraph for GPU acceleration. The package isn't part of the base runtime and is available through conda package management only, so it can't be installed via init sc...
Latest Reply
Thanks @Prabakar Ammeappin, we're looking at this. Strangely, the last commit removed the RAPIDS libraries from the base CUDA images. We're adding them back in.
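As a hedged aside for anyone hitting the same constraint: on conda-based ML runtimes, the %conda notebook magic can sometimes pull in conda-only packages. The channels and package name below are assumptions, not a verified install recipe.

```python
# In a notebook cell on a conda-based Databricks ML GPU runtime (assumption:
# the %conda magic is available on your runtime version):
#   %conda install -c rapidsai -c nvidia -c conda-forge cugraph

# Sanity check that the library is importable on the driver afterwards:
import cugraph  # noqa: F401
print("cuGraph is available")
```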
3 More Replies
by
JK2021
• New Contributor III
- 2323 Views
- 5 replies
- 3 kudos
We are planning to customise code on Databricks to call the Salesforce Bulk API 2.0 to load data from a Databricks Delta table into Salesforce. My question is: all the exception handling, retries, and everything around the Bulk API can be coded explicitly in Databricks...
Latest Reply
Hi @Jazmine Kochan, I haven't tried the Salesforce Bulk API 2.0 to load data, but in theory it should be fine.
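For what it's worth, a hedged sketch of calling the Bulk API 2.0 ingest endpoint from a notebook with explicit retries; the org URL, secret scope, API version, and backoff policy are illustrative assumptions, not a verified integration:

```python
import time
import requests

INSTANCE_URL = "https://my-org.my.salesforce.com"      # hypothetical Salesforce org
ACCESS_TOKEN = dbutils.secrets.get("sfdc", "token")    # hypothetical secret scope/key

def create_ingest_job(object_name: str, max_retries: int = 3) -> dict:
    """Create a Bulk API 2.0 ingest job, retrying transient HTTP failures."""
    url = f"{INSTANCE_URL}/services/data/v52.0/jobs/ingest"
    headers = {"Authorization": f"Bearer {ACCESS_TOKEN}",
               "Content-Type": "application/json"}
    payload = {"object": object_name, "operation": "insert"}
    for attempt in range(1, max_retries + 1):
        try:
            resp = requests.post(url, json=payload, headers=headers, timeout=30)
            resp.raise_for_status()
            return resp.json()
        except requests.RequestException:
            if attempt == max_retries:
                raise                      # let the Databricks job surface the failure
            time.sleep(2 ** attempt)       # exponential backoff before retrying

job = create_ingest_job("Account")
print(job.get("id"), job.get("state"))
```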
4 More Replies
- 2835 Views
- 5 replies
- 1 kudos
I want to get a mail notification at the end of each day when my Databricks job has finished running, and for that I need to extract the time of its completion and its status. How can I achieve that?
Latest Reply
Hi @Yatharth Kaushik, you can use the Jobs runs list API to get all the information about a job run. You can write code to extract the information that you need for the table. There are multiple APIs in the same doc that you can use to get information a...
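A minimal sketch of that approach in Python, assuming a personal access token stored in a secret scope; the host, scope name, and job_id are placeholders:

```python
import requests

HOST = "https://<databricks-instance>"                 # placeholder workspace URL
TOKEN = dbutils.secrets.get("ops", "api-token")        # hypothetical secret scope/key
JOB_ID = 123                                           # placeholder job id

resp = requests.get(
    f"{HOST}/api/2.1/jobs/runs/list",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"job_id": JOB_ID, "completed_only": "true", "limit": 25},
    timeout=30,
)
resp.raise_for_status()

for run in resp.json().get("runs", []):
    # end_time is epoch milliseconds; result_state is e.g. SUCCESS or FAILED
    print(run["run_id"], run.get("end_time"), run["state"].get("result_state"))
```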
4 More Replies
by RantoB • Valued Contributor
- 4626 Views
- 5 replies
- 0 kudos
Hi, how is it possible to disable SSL certificate verification? With the Databricks API I got this error: SSLCertVerificationError
SSLCertVerificationError: ("hostname 'https' doesn't match either of '*.numericable.fr', 'numericable.fr'",)
MaxRetryError: HTTPS...
Latest Reply
@Bertrand BURCKER​ - Thanks for letting us know your issue is resolved. If @Prabakar Ammeappin​'s answer solved the problem, would you be happy to mark his answer as best so others can more easily find an answer for this?
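For context, a minimal sketch of what skipping certificate verification looks like on a direct REST call from Python; it silences SSLCertVerificationError but is insecure, so fixing the proxy/certificate chain (as was done here) is the better answer. Host and token are placeholders.

```python
import requests

HOST = "https://<databricks-instance>"        # placeholder workspace URL
TOKEN = "<personal-access-token>"             # placeholder token

resp = requests.get(
    f"{HOST}/api/2.0/clusters/list",
    headers={"Authorization": f"Bearer {TOKEN}"},
    verify=False,   # disables TLS certificate verification; use only for debugging
    timeout=30,
)
print(resp.status_code)
```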
4 More Replies
- 9895 Views
- 6 replies
- 6 kudos
I'm trying to export a CSV file from my Databricks workspace to my laptop. I have followed the steps below:
1. Installed Databricks CLI
2. Generated token in Azure Databricks
3. databricks configure --token
5. Token: xxxxxxxxxxxxxxxxxxxxxxxxxx
6. databrick...
Latest Reply
Hi @Sarvagna Mahakali, there is an easier hack: a) You can save the results locally on disk and create a hyperlink for downloading the CSV. You can copy the file to this location: dbfs:/FileStore/table1_good_2020_12_18_07_07_19.csv b) Then download with...
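A minimal sketch of that FileStore hack, assuming a Python notebook; the source table, export path, and workspace URL are placeholders:

```python
df = spark.table("default.my_table")          # hypothetical source table

(df.coalesce(1)                               # single output file for an easy download
   .write.mode("overwrite")
   .option("header", "true")
   .csv("dbfs:/FileStore/exports/my_result"))

# Anything under dbfs:/FileStore is served by the workspace web app, e.g.
#   https://<databricks-instance>/files/exports/my_result/<part-file>.csv
# With the CLI configured, the same file can also be copied down directly:
#   databricks fs cp dbfs:/FileStore/exports/my_result/<part-file>.csv .
```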
5 More Replies
by DB_007 • New Contributor III
- 5017 Views
- 8 replies
- 4 kudos
I have a cluster running on 7.3 LTS and it has about 35+ databases. When I try to set up an endpoint on Databricks SQL, I do not see any database listed.
Latest Reply
Hi @Arif Ali, you may have to check the data access config to add the params for the external metastore:
spark.hadoop.javax.jdo.option.ConnectionDriverName org.mariadb.jdbc.Driver
spark.hadoop.javax.jdo.option.ConnectionUserName <mysql-username>
spark.had...
7 More Replies
- 2098 Views
- 5 replies
- 8 kudos
I work with Spark and Scala and I receive data in different formats (.csv/.xlsx/.txt etc.). When I try to read/write this data from different sources to any database, many records get rejected due to various issues like special characters, data type ...
Latest Reply
Or maybe schema evolution on Delta Lake is enough, in combination with Hubert's answer.
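A hedged sketch of the options usually combined for this (the question mentions Spark with Scala; the same options exist in PySpark). File paths and the schema are illustrative:

```python
# Keep malformed rows instead of failing the whole load, and capture them in a
# dedicated column so they can be routed to a reject table.
df = (spark.read
      .option("header", "true")
      .option("mode", "PERMISSIVE")
      .option("columnNameOfCorruptRecord", "_corrupt_record")
      .schema("id INT, name STRING, _corrupt_record STRING")
      .csv("/mnt/raw/input.csv"))

df = df.cache()  # required before filtering on the corrupt-record column alone
bad_rows = df.filter("_corrupt_record IS NOT NULL")

# Databricks also supports writing rejects to a separate location:
#   .option("badRecordsPath", "/mnt/raw/bad_records")
# And, per the reply above, Delta schema evolution absorbs new/changed columns:
#   df.write.format("delta").option("mergeSchema", "true").mode("append").save(path)
```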
4 More Replies
- 3960 Views
- 8 replies
- 3 kudos
Hi guys. I have looked at the formatting options and I'm still struggling to work out how best to format the email body of a Databricks alert. I want to be able to selectively choose columns from the query and display them in a table. Or even if I ca...
Latest Reply
Hi @Nick Hughes, unfortunately this is not available for now. We have a feature request for the same: DB-I-4105 - SQL Alerts: Formatting message body when creating Custom Template. This feature has been considered by our product team and it will be...
7 More Replies