Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

Confused
by New Contributor III
  • 4404 Views
  • 6 replies
  • 1 kudos

Hi Guys, is there any documentation on where the /databricks-datasets/ mount is actually served from? We are looking at locking down where our workspace...

Hi Guys, is there any documentation on where the /databricks-datasets/ mount is actually served from? We are looking at locking down where our workspace can reach out to via the internet, and as it currently stands we are unable to reach this. I did look ...

Latest Reply
Anonymous
Not applicable
  • 1 kudos

Hello Mat, Thanks for letting us know. Would you be happy to mark your answer as best if that will solve the problem for others? That way, members will be able to find the solution more easily.

5 More Replies
MadelynM
by Databricks Employee
  • 2734 Views
  • 2 replies
  • 1 kudos

Best Practices for Your Data Architecture

Thanks to everyone who joined the Best Practices for Your Data Architecture session on Getting Workloads to Production using CI/CD. You can access the on-demand session recording here, and the code in the Databricks Labs CI/CD Templates Repo. Posted ...

Latest Reply
MadelynM
Databricks Employee
  • 1 kudos

Here's the embedded links list!
Jobs scheduling and orchestration
  • Built-in job scheduling: https://docs.databricks.com/jobs.html#schedule-a-job (periodic scheduling of the jobs; execute notebook / jar / Python script / Spark-submit)
  • Multitask Jobs: execute no...

1 More Replies
raymund
by New Contributor III
  • 4060 Views
  • 7 replies
  • 5 kudos

Resolved! Why does adding the package 'org.apache.spark:spark-sql-kafka-0-10_2.12:3.0.1' fail in runtime 9.1.x-scala2.12 but succeed in runtime 8.2.x-scala2.12?

Using a Databricks spark-submit job, setting a new cluster:
1] "spark_version": "8.2.x-scala2.12" => OK, works fine
2] "spark_version": "9.1.x-scala2.12" => FAIL, with errors: Exception in thread "main" java.lang.ExceptionInInitializerError at com.databricks...

Latest Reply
raymund
New Contributor III
  • 5 kudos

This has been resolved by adding the following spark_conf (not through --conf):
"spark.hadoop.fs.file.impl": "org.apache.hadoop.fs.LocalFileSystem"
Example:
"new_cluster": { "spark_version": "9.1.x-scala2.12", ... "spark_conf": { "spar...
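
For illustration, here is how that spark_conf might be passed when submitting a one-off run through the Jobs API; this is a minimal sketch, and the workspace URL, token, node type, main class, and jar path are placeholders, not values from the thread:

import requests

host = "https://<workspace-url>"    # placeholder
token = "<personal-access-token>"   # placeholder

payload = {
    "run_name": "spark-submit with LocalFileSystem fix",
    "new_cluster": {
        "spark_version": "9.1.x-scala2.12",
        "node_type_id": "<node-type>",  # placeholder
        "num_workers": 1,
        # the fix from the reply above: set it as spark_conf, not via --conf
        "spark_conf": {
            "spark.hadoop.fs.file.impl": "org.apache.hadoop.fs.LocalFileSystem"
        },
    },
    "spark_submit_task": {
        "parameters": ["--class", "org.example.Main", "dbfs:/path/to/app.jar"]  # placeholders
    },
}

resp = requests.post(f"{host}/api/2.1/jobs/runs/submit",
                     headers={"Authorization": f"Bearer {token}"},
                     json=payload)
resp.raise_for_status()
print(resp.json()["run_id"])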

6 More Replies
antoooks
by New Contributor III
  • 2903 Views
  • 2 replies
  • 4 kudos

Resolved! display() function always returns connection refused on tunneling despite successfully retrieving the schema

Hi everyone, I am using SSH tunnelling with SSHTunnelForwarder to reach a target AWS RDS PostgreSQL database. The connection got through; however, when I tried to display the retrieved data frame it always threw a "connection refused" error. Please see ...

Latest Reply
jose_gonzalez
Databricks Employee
  • 4 kudos

Hi @Kurnianto Trilaksono Sutjipto, this seems like a connectivity issue with the URL you are trying to connect to. It fails during the display() command because read is a lazy transformation and will not be executed right away. On the other hand,...
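
As a minimal PySpark sketch of that lazy-read behavior (the connection details and table name are placeholders, not from the thread):

# load() only needs the schema, which Spark resolves with a small metadata
# query from the driver: this is why retrieving the schema can succeed
df = (spark.read.format("jdbc")
      .option("url", "jdbc:postgresql://localhost:5433/mydb")  # placeholder tunnel endpoint
      .option("dbtable", "public.some_table")                  # placeholder table name
      .option("user", "<user>")
      .option("password", "<password>")
      .load())

df.printSchema()  # works: no full read has happened yet

# display() is an action: the actual read executes now, so any
# connectivity problem only surfaces at this point
display(df)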

1 More Replies
Leszek
by Contributor
  • 4323 Views
  • 5 replies
  • 11 kudos

Resolved! Runtime SQL Configuration - how to make it simple

Hi, I'm running a couple of notebooks in my pipeline and I would like to set a fixed value of 'spark.sql.shuffle.partitions', the same value for every notebook. Should I do that by adding spark.conf.set... code in each notebook (Runtime SQL configurations ar...

Latest Reply
Leszek
Contributor
  • 11 kudos

Hi, thank you all for the tips. I tried setting this option in the Spark config before, but it didn't work for some reason. Today I tried again and it's working :).
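
For reference, the two options discussed in this thread, as a minimal sketch (64 is an arbitrary example value):

# Option 1: set it at the top of every notebook (a runtime SQL configuration)
spark.conf.set("spark.sql.shuffle.partitions", "64")

# Option 2: set it once in the cluster's Spark config
# (cluster UI > Advanced options > Spark), so every notebook
# attached to the cluster inherits it:
#   spark.sql.shuffle.partitions 64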

4 More Replies
SRS
by New Contributor II
  • 3747 Views
  • 3 replies
  • 5 kudos

Resolved! Delta Tables incremental backup method

Hello, has anyone tried to create an incremental backup of Delta tables? What I mean is loading into the backup storage only the latest parquet files that are part of the Delta table, and refreshing the _delta_log folder, instead of copying the whole files aga...

Latest Reply
jose_gonzalez
Databricks Employee
  • 5 kudos

Hi @Stefan Stegaru, you can use Delta time travel to query the data that was just added in a specific version. Then, as @Hubert Dudek mentioned, you can copy over this subset of data to a new table or a new location. You will need to do a deep...
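
A rough sketch of that approach, assuming the paths, table names, and version numbers shown here as placeholders:

# Rows added between two versions of the source table, via Delta time travel
prev = spark.read.format("delta").option("versionAsOf", 10).load("/mnt/source/events")
curr = spark.read.format("delta").option("versionAsOf", 11).load("/mnt/source/events")
increment = curr.exceptAll(prev)  # only the newly added rows
increment.write.format("delta").mode("append").save("/mnt/backup/events")

# Alternatively, a deep clone copies only new or changed files when re-run:
spark.sql("CREATE OR REPLACE TABLE backup.events DEEP CLONE delta.`/mnt/source/events`")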

2 More Replies
Anonymous
by Not applicable
  • 3240 Views
  • 4 replies
  • 2 kudos

Resolved! Anyone using RAPIDS and cuGraph on a current runtime?

We're in the process of migrating a large graph computation workload to NVIDIA RAPIDS + cuGraph for GPU acceleration. The package isn't part of the base runtime and is available through conda package management only, so it can't be installed via init sc...

Latest Reply
Anonymous
Not applicable
  • 2 kudos

Thanks @Prabakar Ammeappin, we're looking at this. Strangely, the last commit removed the RAPIDS libraries from the base cuda-images. We're adding them back in.

3 More Replies
yatharth29
by New Contributor II
  • 5530 Views
  • 3 replies
  • 0 kudos

How can I extract/get the time, along with the status (Failed or Succeeded) into a table for every time my Databricks job finishes running?

I want to get a mail notification at the end of each day when my Databricks job has finished running, and for that I need to extract the time of its completion and its status. How can I achieve that?

Latest Reply
Prabakar
Databricks Employee
  • 0 kudos

Hi @Yatharth Kaushik, you can use the Jobs Runs List API to get all the information about the job run. You can write code to extract the information that you need for the table. There are multiple APIs in the same doc that you can use to get information a...
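
For example, a minimal sketch against the Jobs Runs List endpoint; the workspace URL, token, and job_id are placeholders:

import requests

host = "https://<workspace-url>"    # placeholder
token = "<personal-access-token>"   # placeholder

resp = requests.get(f"{host}/api/2.1/jobs/runs/list",
                    headers={"Authorization": f"Bearer {token}"},
                    params={"job_id": 123, "completed_only": "true", "limit": 25})
resp.raise_for_status()

for run in resp.json().get("runs", []):
    # end_time is epoch milliseconds; state.result_state is e.g. SUCCESS or FAILED
    print(run["run_id"], run["end_time"], run["state"].get("result_state"))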

2 More Replies
RantoB
by Valued Contributor
  • 7894 Views
  • 4 replies
  • 0 kudos

Resolved! SSLCertVerificationError: how to disable SSL certificate verification

Hi, how is it possible to disable SSL certificate verification? With the Databricks API I got this error:
SSLCertVerificationError: ("hostname 'https' doesn't match either of '*.numericable.fr', 'numericable.fr'",)
MaxRetryError: HTTPS...

Latest Reply
Anonymous
Not applicable
  • 0 kudos

@Bertrand BURCKER​ - Thanks for letting us know your issue is resolved. If @Prabakar Ammeappin​'s answer solved the problem, would you be happy to mark his answer as best so others can more easily find an answer for this?

3 More Replies
marsjuli
by New Contributor II
  • 18928 Views
  • 1 reply
  • 1 kudos

How to handle <IPython.core.display.HTML object>

Some libraries return intermediate IPython HTML objects to the notebook cell output. Since this happens while training a machine learning model, the statements are typically buried within the library, so I cannot easily interfere (e.g. in or...

Latest Reply
marsjuli
New Contributor II
  • 1 kudos

Hi @Kaniz Fatma, thanks for showing me the link. This helps if you are in control of the generated HTML object. If the HTML content comes from a library, that is where the problems start, because I cannot wrap displayHTML(). (I can of course look for...
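
One possible workaround, sketched under the assumption that the library hands the HTML object back to you (make_report is a hypothetical library call, not from the thread):

from IPython.display import HTML

obj = make_report()        # hypothetical library call returning an IPython HTML object
if isinstance(obj, HTML):
    displayHTML(obj.data)  # render the raw HTML string with Databricks' displayHTML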

Orianh
by Valued Contributor II
  • 4172 Views
  • 3 replies
  • 1 kudos

Train a deep learning model with numpy arrays.

Hey guys, I'm trying to train a deep learning model on Databricks ML with numpy arrays as input. For now I have organized all the data inside a DataFrame. The df contains 4 columns: col1, col2, col3, col4. col1 and col2 hold arrays with shape (1,3,3,3,3), col3 holds an array wit...

Latest Reply
Hubert-Dudek
Esteemed Contributor III
  • 1 kudos

Maybe you could share some of your code. It would be easier to answer, and we could also learn deep learning on Databricks from your code.

2 More Replies
Sarvagna_Mahaka
by New Contributor III
  • 17627 Views
  • 6 replies
  • 8 kudos

Resolved! Exporting csv files from Databricks

I'm trying to export a csv file from my Databricks workspace to my laptop. I have followed the steps below:
1. Installed the databricks CLI
2. Generated a token in Azure Databricks
3. databricks configure --token
5. Token: xxxxxxxxxxxxxxxxxxxxxxxxxx
6. databrick...

Latest Reply
User16871418122
Contributor III
  • 8 kudos

Hi @Sarvagna Mahakali, there is an easier hack:
a) You can save results locally on disk and create a hyperlink for downloading the CSV. You can copy the file to this location: dbfs:/FileStore/table1_good_2020_12_18_07_07_19.csv
b) Then download with...
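
A minimal sketch of option a); the source path and workspace URL are placeholders:

# Copy the result into FileStore, which the workspace serves over HTTP
dbutils.fs.cp("dbfs:/tmp/results/part-00000.csv",  # placeholder source path
              "dbfs:/FileStore/table1_good_2020_12_18_07_07_19.csv")

# Then download it in a browser from:
#   https://<workspace-url>/files/table1_good_2020_12_18_07_07_19.csv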

5 More Replies
DB_007
by New Contributor III
  • 8682 Views
  • 8 replies
  • 4 kudos

Resolved! Databricks SQL not displaying all the databases that I have on my cluster.

I have a cluster running on 7.3 LTS and it has about 35+ databases. When I tried to set up an endpoint on Databricks SQL, I did not see any databases listed.

Latest Reply
User16871418122
Contributor III
  • 4 kudos

Hi @Arif Ali, you may have to check the data access config to add the params for the external metastore:
spark.hadoop.javax.jdo.option.ConnectionDriverName org.mariadb.jdbc.Driver
spark.hadoop.javax.jdo.option.ConnectionUserName <mysql-username>
spark.had...

7 More Replies

Connect with Databricks Users in Your Area

Join a Regional User Group to connect with local Databricks users. Events will be happening in your city, and you won’t want to miss the chance to attend and share knowledge.

If there isn’t a group near you, start one and help create a community that brings people together.

Request a New Group