Machine Learning
Dive into the world of machine learning on the Databricks platform. Explore discussions on algorithms, model training, deployment, and more. Connect with ML enthusiasts and experts.
Data + AI Summit 2024 - Data Science & Machine Learning

Forum Posts

rendorHaevyn
by New Contributor III
  • 1644 Views
  • 4 replies
  • 0 kudos

Resolved! History of code executed on Data Science & Engineering service clusters

I want to be able to view a listing of any or all of the following: when Notebooks were attached/detached to and from a DS&E cluster; when Notebook code was executed on a DS&E cluster; what Notebook-specific cell code was executed on a DS&E cluster. Is th...

Latest Reply
Atanu
Esteemed Contributor
  • 0 kudos

From the UI, the best way to check is version control: https://docs.databricks.com/notebooks/notebooks-code.html#version-control. BTW, does this help: https://www.databricks.com/blog/2022/11/02/monitoring-notebook-command-logs-static-analysis-tools.ht...
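For the command-log route mentioned above, a minimal PySpark sketch, assuming verbose audit logging is enabled and the delivered logs are queryable as a table (the audit_logs table name and the delivery setup are placeholders for your workspace):

```python
# Minimal sketch: list notebook command executions from delivered audit logs.
# Assumes verbose audit logs are enabled; `audit_logs` is a placeholder table.
from pyspark.sql import functions as F

commands = (
    spark.table("audit_logs")
    .where(F.col("serviceName") == "notebook")
    .where(F.col("actionName") == "runCommand")
    .select(
        "timestamp",
        F.col("userIdentity.email").alias("user"),
        F.col("requestParams.notebookId").alias("notebook_id"),
        F.col("requestParams.commandText").alias("command_text"),
    )
)
display(commands)
```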

3 More Replies
Saeid_H
by Contributor
  • 7776 Views
  • 6 replies
  • 5 kudos

Register mlflow custom model, which has pickle files

Dear community, I basically want to store 2 pickle files during training and model registration along with my Keras model, so that when I access the model from another workspace (using mlflow.set_registry_uri()), these files can be accessed as well. The ...
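One common pattern for this is a pyfunc wrapper that declares the pickle files as artifacts, so they travel with the model to any workspace. A minimal sketch, where the local paths and the registered model name are hypothetical:

```python
# Minimal sketch: bundle a Keras model plus pickle files as one pyfunc model.
import pickle

import mlflow
import mlflow.pyfunc


class WrappedModel(mlflow.pyfunc.PythonModel):
    def load_context(self, context):
        import tensorflow as tf

        # Artifacts are materialized locally wherever the model is loaded.
        self.keras_model = tf.keras.models.load_model(
            context.artifacts["keras_model"]
        )
        with open(context.artifacts["preprocessor"], "rb") as f:
            self.preprocessor = pickle.load(f)

    def predict(self, context, model_input):
        return self.keras_model.predict(self.preprocessor.transform(model_input))


with mlflow.start_run():
    mlflow.pyfunc.log_model(
        artifact_path="model",
        python_model=WrappedModel(),
        artifacts={
            "keras_model": "keras_model_dir",      # hypothetical local paths
            "preprocessor": "preprocessor.pkl",
        },
        registered_model_name="my_wrapped_model",  # hypothetical name
    )
```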

Latest Reply
arzex
New Contributor II
  • 5 kudos

Content production training

5 More Replies
Erik_S
by New Contributor II
  • 1869 Views
  • 4 replies
  • 2 kudos

Can I run a custom function that contains a trained ML model or access an API endpoint from within a SQL query in the SQL workspace?

I have a dashboard and I'd like the ability to take the data from a query and then predict a result from a trained ML model within the dashboard. I was thinking I could possibly embed the trained model within a library that I then import to the SQL w...
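One possible route, from a notebook rather than the SQL warehouse itself: wrap the registered model as a Spark UDF and register it for SQL. A minimal sketch, with a placeholder model URI and table name (a Databricks SQL warehouse would instead need something like a model serving endpoint):

```python
# Minimal sketch: expose a registered MLflow model to SQL as a UDF.
# Model name/version and table/column names below are placeholders.
import mlflow.pyfunc

predict_udf = mlflow.pyfunc.spark_udf(
    spark, model_uri="models:/my_model/1", result_type="double"
)
spark.udf.register("predict", predict_udf)

# Callable from SQL in the same session:
display(spark.sql("SELECT *, predict(feature1, feature2) AS prediction FROM my_table"))
```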

Latest Reply
Kaniz_Fatma
Community Manager
  • 2 kudos

Hi @Erik Shilts (Customer), we haven't heard from you since the last response from @Suteja Kanuri, and I was checking back to see if her suggestions helped you. Otherwise, if you have a solution, please share it with the community, as it can be hel...

3 More Replies
Orianh
by Valued Contributor II
  • 980 Views
  • 2 replies
  • 0 kudos

TF SummaryWriter flush() doesn't send any buffered data to storage.

Hey guys, I'm training a TF model in Databricks and logging to TensorBoard using SummaryWriter. At the end of each epoch, SummaryWriter.flush() is called, which should send any buffered data into storage. But I can't see the TensorBoard files while th...
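One thing worth checking is where the event files land. A minimal TF2 sketch with explicit flushing; the log directory is a placeholder, and whether a /dbfs path or local driver disk behaves better can depend on the runtime's FUSE support, so treat the path as an assumption:

```python
# Minimal sketch: write TensorBoard events and flush them per epoch.
import tensorflow as tf

log_dir = "/dbfs/tmp/tensorboard/run1"  # hypothetical DBFS path
writer = tf.summary.create_file_writer(log_dir)

for epoch in range(3):
    loss = 1.0 / (epoch + 1)  # dummy metric for illustration
    with writer.as_default():
        tf.summary.scalar("loss", loss, step=epoch)
    writer.flush()  # push buffered events into the event file
```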

Latest Reply
Anonymous
Not applicable
  • 0 kudos

Hi @orian hindi, hope everything is going great. Just wanted to check in if you were able to resolve your issue. If yes, would you be happy to mark an answer as best so that other members can find the solution more quickly? If not, please tell us so w...

1 More Replies
Eero_H
by New Contributor
  • 1885 Views
  • 2 replies
  • 1 kudos

Is there a way to change the default artifact store path on Databricks Mlflow?

I have a cloud storage mounted to Databricks and I would like to store all of the model artifacts there without specifying it when creating a new experiment. Is there a way to configure the Databricks workspace to save all of the model artifacts to a ...
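A per-experiment workaround, if a workspace-wide default isn't available: set artifact_location explicitly when creating the experiment. A minimal sketch, with a placeholder experiment name and mount path:

```python
# Minimal sketch: pin an experiment's artifact store to mounted storage.
import mlflow

name = "/Users/someone@example.com/my-experiment"  # hypothetical path
mlflow.create_experiment(
    name,
    artifact_location="dbfs:/mnt/my-bucket/mlflow-artifacts",  # placeholder
)
mlflow.set_experiment(name)

with mlflow.start_run():
    mlflow.log_param("alpha", 0.1)  # artifacts for this run go to the mount
```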

Latest Reply
Anonymous
Not applicable
  • 1 kudos

Hi @Eero Hiltunen, thank you for your question! To assist you better, please take a moment to review the answer and let me know if it best fits your needs. Please help us select the best solution by clicking on "Select As Best" if it does. Your feedbac...

1 More Replies
Kaan
by New Contributor
  • 1445 Views
  • 1 reply
  • 1 kudos

Resolved! Using Databricks in multi-cloud, and querying data from the same instance.

I'm looking for a good product to use across two clouds at once for data engineering, data modeling, and governance. I currently have a GCP platform, but most of my data and future data goes through Azure, and currently is then transferred to GCS/BQ. Cu...

Latest Reply
Anonymous
Not applicable
  • 1 kudos

@Karl Andrén: Databricks is a great option for data engineering, data modeling, and governance across multiple clouds. It supports integrations with multiple cloud providers, including Azure, AWS, and GCP, and provides a unified interface to access ...

Hubert-Dudek
by Esteemed Contributor III
  • 725 Views
  • 1 reply
  • 7 kudos

Have you heard about Databricks' latest open-source language model called Dolly? It's a ChatGPT-like model that uses the tatsu-lab/alpaca dataset with ...

Have you heard about Databricks' latest open-source language model called Dolly? It's a ChatGPT-like model that uses the tatsu-lab/alpaca dataset with examples of questions and answers. To train Dolly, you can combine this dataset (simple solution on ...
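For a quick look at the model family, a minimal sketch, assuming the later Hugging Face release databricks/dolly-v2-3b (the post predates v2, so the exact model id is an assumption):

```python
# Minimal sketch: load Dolly via transformers; model id is an assumption.
import torch
from transformers import pipeline

generate_text = pipeline(
    model="databricks/dolly-v2-3b",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,  # Dolly ships a custom instruct pipeline
    device_map="auto",
)
res = generate_text("Explain what Databricks is in one sentence.")
print(res[0]["generated_text"])
```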

Latest Reply
Anonymous
Not applicable
  • 7 kudos

Thanks for posting this! I am so excited about the possibilities this opens up for us. It's an exciting development in the natural language processing field, and it has the potential to be a valuable tool for businesses looking to implement chatb...

alisher_pwc
by New Contributor II
  • 1991 Views
  • 2 replies
  • 1 kudos

Model serving with GPU cluster

Hello Databricks community! We have a strong need to serve some public models and our private models on GPU clusters, and we have several requirements: 1) We'd like to be able to start/stop the endpoints (ideally with scheduling) to avoid excess consum...
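On requirement 1, a minimal sketch of toggling scale-to-zero through the serving-endpoints REST API; the host, token, endpoint, and model names are placeholders, and scale-to-zero (where supported) only approximates a hard start/stop:

```python
# Minimal sketch: enable scale-to-zero on a serving endpoint via REST.
import requests

HOST = "https://<workspace-host>"   # placeholder
TOKEN = "<personal-access-token>"   # placeholder

resp = requests.put(
    f"{HOST}/api/2.0/serving-endpoints/my-endpoint/config",  # placeholder name
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "served_models": [
            {
                "model_name": "my_model",        # placeholder
                "model_version": "1",
                "workload_size": "Small",
                "scale_to_zero_enabled": True,   # idle endpoint scales down
            }
        ]
    },
)
resp.raise_for_status()
```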

Latest Reply
Vartika
Moderator
  • 1 kudos

Hi @Alisher Akh, does @Debayan Mukherjee's answer help? If yes, would you be happy to mark the answer as best so that other members can find the solution more quickly? If not, please tell us so we can help you further. Cheers!

1 More Replies
fuselessmatt
by Contributor
  • 9344 Views
  • 5 replies
  • 6 kudos

Resolved! What does "Command exited with code 50" mean and how do you solve it?

Hi! We have this dbt model that generates a table with user activity in the previous days, but we get this vague error message in the Databricks SQL Warehouse: Job aborted due to stage failure: Task 3 in stage 4267.0 failed 4 times, most recent failure...

Latest Reply
shan_chandra
Esteemed Contributor
  • 6 kudos

@Mattias P - For the executor lost failure, is it trying to bring in a large data volume? Can you please reduce the date range and try, or run the workload on a bigger DBSQL warehouse than the current one?

4 More Replies
Ajay-Pandey
by Esteemed Contributor III
  • 1202 Views
  • 2 replies
  • 5 kudos

Share information between tasks in a Databricks job: You can use task values to pass arbitrary parameters between tasks in a Databricks job. You pass ...

Share information between tasks in a Databricks job. You can use task values to pass arbitrary parameters between tasks in a Databricks job. You pass task values using the taskValues subutility in Databricks Utilities. The taskValues subutility provide...
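A minimal sketch of the taskValues subutility described above; the task and key names are placeholders, and dbutils is the ambient Databricks Utilities handle available in notebooks:

```python
# Minimal sketch: pass a value from one job task to another.
# In an upstream task of the job:
dbutils.jobs.taskValues.set(key="row_count", value=1234)

# In a downstream task of the same job run:
row_count = dbutils.jobs.taskValues.get(
    taskKey="upstream_task",  # name of the task that set the value
    key="row_count",
    default=0,
    debugValue=0,             # used when run interactively outside a job
)
print(row_count)
```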

Latest Reply
newforesee
New Contributor II
  • 5 kudos

We urgently hope for this feature, but to date, we have found that it is only available in Python. Do you have any plans to support Scala?

1 More Replies
apatel
by New Contributor III
  • 6162 Views
  • 2 replies
  • 0 kudos

Resolved! How to resolve this error "Error: cannot create global init script: default auth: cannot configure default credentials"

I'm trying to set the global init script via my Terraform deployment. I did a thorough Google search and can't seem to find guidance here. I'm using a very generic call to set these scripts in my TF deployment: terraform { required_providers { data...

Latest Reply
apatel
New Contributor III
  • 0 kudos

OK, in case this helps anyone else, I've managed to resolve it. I confirmed in this documentation that the Databricks CLI is required locally, wherever this is being executed: https://learn.microsoft.com/en-us/azure/databricks/dev-tools/terraform/cluster-note...

1 More Replies
Koliya
by New Contributor II
  • 13192 Views
  • 5 replies
  • 7 kudos

The Python process exited with exit code 137 (SIGKILL: Killed). This may have been caused by an OOM error. Check your command's memory usage.

I am running a Hugging Face model on a GPU cluster (g4dn.xlarge, 16 GB memory, 4 cores). I run the same model in four different notebooks with different data sources. I created a workflow to run one model after the other. These notebooks run fine indi...

Latest Reply
fkemeth
New Contributor II
  • 7 kudos

You might accumulate gradients when running your Hugging Face model, which typically leads to out-of-memory errors after some iterations. If you use it for inference only, wrap the code where you apply the model in "with torch.no_grad():".
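A minimal sketch of that advice, with a placeholder Hugging Face model; clearing the CUDA cache between notebooks may also help in a chained workflow:

```python
# Minimal sketch: inference without gradient buffers, then free GPU memory.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "distilbert-base-uncased"  # placeholder model
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name).to("cuda").eval()

inputs = tokenizer("some text", return_tensors="pt").to("cuda")
with torch.no_grad():          # no gradient buffers are kept around
    logits = model(**inputs).logits

del model
torch.cuda.empty_cache()       # release cached GPU memory between notebooks
```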

4 More Replies
Tilo
by New Contributor
  • 2378 Views
  • 3 replies
  • 3 kudos

Resolved! MLflow: How to load results from model and continue training

I'd like to continue / fine-tune training of an existing Keras/TensorFlow model. We use MLflow to store the model. How can I load the weights from an existing model and continue "fit", preferably with a different learning rate? Just loading ...
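A minimal sketch of one way to do this, assuming the model was logged with MLflow's TensorFlow/Keras flavor (on older MLflow versions, mlflow.keras.load_model is the equivalent call); the model URI, learning rate, loss, and stand-in data are placeholders:

```python
# Minimal sketch: reload a logged Keras model and resume training.
import mlflow
import numpy as np
import tensorflow as tf

model = mlflow.tensorflow.load_model("models:/my_keras_model/1")  # placeholder URI

# Recompiling replaces the optimizer, so the new learning rate takes effect.
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# Stand-in data; match shapes to your model's real inputs.
x_train = np.random.rand(32, 10).astype("float32")
y_train = np.random.randint(0, 2, size=(32,))
model.fit(x_train, y_train, epochs=5)
```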

Latest Reply
Anonymous
Not applicable
  • 3 kudos

Hi @Tilo Wünsche, hope all is well! Just wanted to check in if you were able to resolve your issue, and would you be happy to share the solution or mark an answer as best? Else please let us know if you need more help. We'd love to hear from you. Thank...

2 More Replies
NSRBX
by Contributor
  • 3274 Views
  • 6 replies
  • 6 kudos

Resolved! Error loading model from mlflow: java.io.StreamCorruptedException: invalid type code: 00

Hello, I'm using, in my IDE, Databricks Connect version 9.1 LTS ML to connect to a Databricks cluster with Spark version 3.1 and download a Spark model that's been trained and saved using MLflow. So it seems like it's able to find and copy the model, but ...

Latest Reply
NSRBX
Contributor
  • 6 kudos

Hi @Kaniz Fatma and @Shanmugavel Chandrakasu, it works after putting hadoop.dll into the C:\Windows\System32 folder. I have Hadoop version 3.3.1. I already had winutils.exe in the Hadoop bin folder. Regards, Nath

5 More Replies