Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

BradC26
by New Contributor III
  • 1227 Views
  • 2 replies
  • 0 kudos

Resolved! Can I change the experiment location for a model?

We are looking to move some of the artifacts to S3 from dbfs.

Latest Reply
sbo
New Contributor III

Yes, you can do it via the MLflow Python API. However, know that when you do that and move the experiment location to S3, the Databricks UI doesn't allow interaction with the artifacts, etc. The MLflow Python API is rich with the abilities to in...

1 More Replies
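The reply above can be sketched in code. This is a minimal sketch, assuming mlflow is installed and the cluster has S3 credentials configured; the bucket and prefix names are hypothetical. Note that an experiment's artifact location is fixed at creation, so "moving" artifacts in practice means creating a new experiment that points at S3:

```python
def s3_artifact_location(bucket: str, prefix: str) -> str:
    """Build the s3:// URI passed as artifact_location (names are caller-supplied)."""
    return f"s3://{bucket}/{prefix.strip('/')}"

def create_s3_backed_experiment(name: str, bucket: str, prefix: str):
    """Create a new MLflow experiment whose run artifacts land in S3.

    Requires mlflow and S3 credentials on the cluster; an existing
    experiment's artifact location cannot be changed after creation.
    """
    import mlflow
    return mlflow.create_experiment(
        name, artifact_location=s3_artifact_location(bucket, prefix)
    )
```

As the reply notes, once artifacts live in S3 the Databricks UI no longer renders them; you would browse them through the MLflow API or S3 tooling instead.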
ccavazos
by New Contributor III
  • 781 Views
  • 1 replies
  • 6 kudos

Really excited about Delta Sharing Cleanrooms!


Latest Reply
Withay
New Contributor II

As am I! Between the clean rooms and the platform-agnostic data marketplace, collaborating on a variety of datasets will become much easier than before!

abd
by Contributor
  • 6742 Views
  • 7 replies
  • 16 kudos

Resolved! What will happen if a driver or worker node fails?

What will happen if a driver node fails? What will happen if one of the worker nodes fails? Is it the same in Spark and Databricks, or does Databricks provide additional features to overcome these situations?

Latest Reply
Cedric
Databricks Employee

If the driver node fails, your cluster will fail. If a worker node fails, Databricks will spawn a new worker node to replace the failed node and resume the workload. Generally, it is recommended to assign an on-demand instance for your driver and spo...

6 More Replies
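The on-demand-driver, spot-workers recommendation in the reply maps to cluster settings like the following hedged sketch of a Clusters API request body for AWS; the cluster name, runtime version, instance type, and worker count are placeholder values:

```python
# Sketch of a Databricks Clusters API request body. With first_on_demand=1,
# the first node (the driver) is an on-demand instance, so a spot reclamation
# cannot kill the whole cluster; workers fall back to on-demand if spot
# capacity is unavailable.
cluster_spec = {
    "cluster_name": "resilient-etl",          # hypothetical name
    "spark_version": "13.3.x-scala2.12",      # example runtime
    "node_type_id": "i3.xlarge",              # example instance type
    "num_workers": 4,
    "aws_attributes": {
        "first_on_demand": 1,
        "availability": "SPOT_WITH_FALLBACK",
    },
}
```

This trades a little cost on the driver for resilience: losing a spot worker only costs recomputation of its tasks, while losing the driver ends the job.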
Mayank
by New Contributor III
  • 11274 Views
  • 8 replies
  • 4 kudos

Resolved! Unable to load Parquet file using Autoloader. Can someone help?

I am trying to load Parquet files using Auto Loader. Below is the code: def autoload_to_table(data_source, source_format, table_name, checkpoint_path): query = (spark.readStream .format('cloudFiles') .option('cl...

Latest Reply
Anonymous
Not applicable

Hi again @Mayank Srivastava​ Thank you so much for getting back to us and marking the answer as best. We really appreciate your time. Wish you a great Databricks journey ahead!

7 More Replies
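A completed version of the truncated snippet in the question might look like the sketch below. The function and parameter names follow the post; the SparkSession is passed in by the caller, and cloudFiles.schemaLocation is included because Auto Loader tracks the inferred schema there when ingesting formats like Parquet:

```python
# Hedged sketch of the Auto Loader pattern from the question, written as a
# function so it can be reused for different sources and tables.
def autoload_to_table(spark, data_source, source_format, table_name, checkpoint_path):
    """Incrementally ingest files (e.g. Parquet) into a Delta table."""
    return (
        spark.readStream
        .format("cloudFiles")
        .option("cloudFiles.format", source_format)           # e.g. "parquet"
        # Auto Loader persists the inferred/evolving schema here:
        .option("cloudFiles.schemaLocation", checkpoint_path)
        .load(data_source)
        .writeStream
        .option("checkpointLocation", checkpoint_path)
        .trigger(availableNow=True)
        .toTable(table_name)
    )
```

A common failure mode with Parquet sources is omitting cloudFiles.schemaLocation, which Auto Loader needs for schema inference; reusing the checkpoint path for it, as above, is one conventional choice.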
ekhool
by New Contributor
  • 602 Views
  • 0 replies
  • 0 kudos


E-khool is an Online Learning Management System software for Industries with low cost and no extra charges. One stop solution for live classes, trainings, videos & materials, unlimited exams, corporate training & professional development, online int...

Phani1
by Valued Contributor II
  • 1309 Views
  • 1 replies
  • 1 kudos

DeltaSharing

Hi Databricks team, I created the share and provided access to the recipient, and the recipient is consuming or accessing the share. My question is: who is going to bear the cost, and how can we track the computation charges for it? As per the attached ...

Latest Reply
Phani1
Valued Contributor II

Hi @Kaniz Fatma​, thanks for your response on the Delta Sharing information. Could you please provide details about who is going to bear the cost (either provider or consumer), and how can we track the computation charges for it?

EveryDayData
by Contributor
  • 14756 Views
  • 23 replies
  • 7 kudos

Resolved! Data AI Summit Training

Hi Team, I am asking a slightly off-topic question here. I have been attending paid sessions at the Data + AI Summit for the last 2-3 years. Is it possible to get videos for sessions we could not attend due to scheduling conflicts? I mean to s...

Latest Reply
RajeshRK
Contributor II

@Juliet Wu​ Hi Juliet, thank you so much for sharing the recording. Regards, Rajesh.

22 More Replies
Pete_M
by New Contributor II
  • 3030 Views
  • 1 replies
  • 2 kudos

Is it possible to alter the text in a Databricks Job Alert Email?

I have configured a Databricks job to send email alerts to me whenever my job fails. However, I would very much like to alter the text in the alert email to something a little more bespoke. Is there any way to alter the text in the email or even just t...

Latest Reply
Anonymous
Not applicable

Hey there @Peter Mayers​ Hope all is well! Just wanted to check in to see if you were able to resolve your issue, and would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help. We'd love to hear from yo...

Eyespoop
by New Contributor II
  • 21829 Views
  • 3 replies
  • 2 kudos

Resolved! PySpark: Writing Parquet Files to the Azure Blob Storage Container

Currently I am having some issues with writing the Parquet file to the storage container. I do have the code running, but whenever the DataFrame writer puts the Parquet file into blob storage, instead of the Parquet file type it is created as a f...

Latest Reply
User16764241763
Honored Contributor

Hello @Karl Saycon​ Can you try setting this config to prevent additional Parquet summary and metadata files from being written? The result from the DataFrame write to storage should be a single file. https://community.databricks.com/s/question/0D53f00001...

2 More Replies
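The advice in the reply can be sketched as follows. This is a hedged sketch, assuming the storage container is already mounted or configured on the cluster: the two Hadoop settings suppress the _SUCCESS marker and Parquet summary files, while coalesce(1) is what reduces the output to a single part file (Spark still writes a directory, not a bare file):

```python
def write_single_parquet(df, path):
    """Write df as a directory containing one Parquet part file.

    Spark's writer always produces a directory of part files plus
    committer side-files; the settings below suppress the extras and
    coalesce(1) collapses the data into a single part file inside it.
    """
    spark = df.sparkSession
    # Hadoop output-committer settings (not Spark-specific options):
    spark.conf.set("mapreduce.fileoutputcommitter.marksuccessfuljobs", "false")
    spark.conf.set("parquet.enable.summary-metadata", "false")
    df.coalesce(1).write.mode("overwrite").parquet(path)
```

If a truly bare file (no directory) is required, the usual pattern is to write as above and then move/rename the single part file with storage-level tooling.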
Mohit_m
by Valued Contributor II
  • 11257 Views
  • 1 replies
  • 3 kudos

Resolved! How to Install Python packages from the own artifactory

We have created our own Artifactory and we use this to install Python dependencies or libraries. We would like to know how we can make use of our own Artifactory to install dependencies or libraries on Databricks clusters.

Latest Reply
Mohit_m
Valued Contributor II

For private repos, you can find some good examples here: https://kb.databricks.com/clusters/install-private-pypi-repo.html and https://towardsdatascience.com/install-custom-python-libraries-from-private-pypi-on-databricks-6a7669f6e6fd

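The linked articles describe pointing pip at a private index; a minimal sketch of the pip.conf approach follows. The Artifactory URL is hypothetical, and on a real cluster this would typically run inside a cluster-scoped init script targeting /etc/pip.conf:

```python
# Hedged sketch: write a pip.conf that makes pip resolve packages from a
# private Artifactory PyPI index instead of the public one.
from pathlib import Path

PIP_CONF = """[global]
index-url = https://artifactory.example.com/artifactory/api/pypi/pypi-local/simple
"""

def write_pip_conf(path="/etc/pip.conf"):
    """Write the pip configuration; the default path is where pip reads
    system-wide config, which init scripts (running as root) can write."""
    Path(path).write_text(PIP_CONF)
```

For a one-off install in a notebook, an alternative is %pip install with --index-url pointing at the same private index.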
Anonymous
by Not applicable
  • 2231 Views
  • 4 replies
  • 5 kudos


Hello Databricks Community! We are getting so excited about the upcoming event of the year Data & AI Summit! If you still need to sign up, visit our registration page [link]! We have an in-person and virtual option for attending this year. In prepara...

Latest Reply
lakshmidvm
New Contributor II

Me too, I didn't receive the gift card.

3 More Replies
MxSasch
by New Contributor II
  • 7843 Views
  • 11 replies
  • 5 kudos

Cluster terminated.Reason:Unexpected launch failure

Are there any known issues affecting the creation of clusters? I've been unable to get any clusters to start today so far! I have received this error: "Cluster terminated.Reason:Unexpected launch failure". Help!

Latest Reply
AWe
New Contributor II

good

10 More Replies
