Machine Learning
Dive into the world of machine learning on the Databricks platform. Explore discussions on algorithms, model training, deployment, and more. Connect with ML enthusiasts and experts.
Data + AI Summit 2024 - Data Science & Machine Learning

Forum Posts

Kaizen
by Valued Contributor
  • 1251 Views
  • 2 replies
  • 0 kudos

Unity Catalog table management with multiple team members

Hi! How are you guys managing large teams working on the same project? Each member has their own data to save in Unity Catalog. Based on my understanding there are only two ways to manage this: 1) Create an individual member schema so they can store thei...

Kaizen_1-1712681311310.png
Latest Reply
Kaizen
Valued Contributor
  • 0 kudos

Any suggestions regarding this? @s_park, @Sujitha, @Debayan

1 More Replies
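A minimal sketch of option 1 from the question above: one schema per team member inside a shared project catalog, with grants scoped to that member. Catalog, schema, and user names are illustrative placeholders, not taken from the thread.

# Sketch: one schema per member in a shared Unity Catalog catalog (illustrative names).
members = ["alice@example.com", "bob@example.com"]  # placeholder principals

spark.sql("CREATE CATALOG IF NOT EXISTS project_catalog")

for member in members:
    schema = member.split("@")[0]  # e.g. 'alice'
    spark.sql(f"CREATE SCHEMA IF NOT EXISTS project_catalog.{schema}")
    # Each member can work only inside their own schema
    spark.sql(f"GRANT USE CATALOG ON CATALOG project_catalog TO `{member}`")
    spark.sql(
        f"GRANT USE SCHEMA, CREATE TABLE, SELECT, MODIFY ON SCHEMA project_catalog.{schema} TO `{member}`"
    )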
MinThuraZaw
by New Contributor III
  • 597 Views
  • 0 replies
  • 0 kudos

404 Page Not Found Error on Features page

We are facing this issue when accessing the Features page. Our workspace is on AWS, ap-southeast-1. I think this is related to the new feature for online tables and serverless. Is it because online tables are not available yet in our region? If it is not avai...

error2.png
Kaizen
by Valued Contributor
  • 3268 Views
  • 5 replies
  • 1 kudos

Resolved! Endpoint performance questions

Hi! I had really interesting results from some endpoint performance tests I did. I set up the non-optimized endpoint with zero-cluster scaling, and the optimized one had this feature disabled. 1) Why does the non-optimized endpoint have variable response time fo...

Kaizen_1-1710196442817.png Kaizen_0-1710196408535.png Kaizen_2-1710196880601.png
Latest Reply
Kaizen
Valued Contributor
  • 1 kudos

Answering Q1: 1) The variable response time is because the first endpoint response requires ~180 seconds to scale from 0 to 1 cluster. 2) Can I change the scale-to-zero time from the preset 30 min?

4 More Replies
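For context on the scale-to-zero behaviour discussed in this thread, below is a hedged sketch of disabling scale-to-zero on a serving endpoint through the serving-endpoints REST API, which keeps one replica warm and avoids the ~180 s cold start. The endpoint name, model name, workspace URL, and token are placeholders; the ~30 min idle window before scale-down is platform-managed and, as far as we know, not configurable in this payload.

# Sketch: keep one replica warm instead of scaling to zero (placeholder names throughout).
import requests

workspace_url = "https://<workspace-host>"   # placeholder
token = "<personal-access-token>"            # placeholder

payload = {
    "served_entities": [
        {
            "entity_name": "my_catalog.my_schema.my_model",  # placeholder model
            "entity_version": "1",
            "workload_size": "Small",
            "scale_to_zero_enabled": False,  # trade idle cost for no cold start
        }
    ]
}

resp = requests.put(
    f"{workspace_url}/api/2.0/serving-endpoints/my-endpoint/config",
    headers={"Authorization": f"Bearer {token}"},
    json=payload,
)
resp.raise_for_status()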
Nishat
by New Contributor
  • 984 Views
  • 0 replies
  • 0 kudos

Serving a custom transformer class via a pyfunc wrapper for a pyspark recommendation model

I am trying to serve an ALS PySpark model with a custom transformer (for generating user-specific recommendations) via a pyfunc wrapper. Although I can successfully score the logged model, the serving endpoint is throwing the following error. URI '/mod...

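One way to avoid the Spark dependency inside the serving container, sketched below with purely illustrative names: export the ALS recommendations (or user/item factors) to a plain artifact and wrap it in an mlflow.pyfunc model, since serving endpoints do not provide a SparkSession.

# Sketch: serve precomputed ALS recommendations via a pyfunc wrapper (illustrative names).
import mlflow.pyfunc
import pandas as pd

class ALSRecommender(mlflow.pyfunc.PythonModel):
    def load_context(self, context):
        # Precomputed top-N recommendations per user, logged as a Parquet artifact
        self.recs = pd.read_parquet(context.artifacts["user_recs"])

    def predict(self, context, model_input: pd.DataFrame) -> pd.DataFrame:
        # Expects a 'user_id' column in the request dataframe
        return model_input.merge(self.recs, on="user_id", how="left")

# Logging the wrapper together with the exported recommendations:
# mlflow.pyfunc.log_model(
#     "als_recommender",
#     python_model=ALSRecommender(),
#     artifacts={"user_recs": "/dbfs/tmp/user_recs.parquet"},  # placeholder path
# )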
marcelo2108
by Contributor
  • 19687 Views
  • 25 replies
  • 0 kudos

Problem when serving a langchain model on Databricks

I'm trying to serve an LLM LangChain model and every time it fails with this message: [6b6448zjll] [2024-02-06 14:09:55 +0000] [1146] [INFO] Booting worker with pid: 1146 [6b6448zjll] An error occurred while loading the model. You haven't confi...

Latest Reply
marcelo2108
Contributor
  • 0 kudos

Hi @DataWrangler and team. I managed to solve the initial problem with some of the tips you gave. I used your code as a base and made some modifications adapted to what I have, i.e. no UC enabled and not able to use DatabricksEmbeddings, DatabricksVectorSearch ...

24 More Replies
mbejarano89
by New Contributor III
  • 8265 Views
  • 2 replies
  • 2 kudos

Resolved! Running multiple linear regressions in parallel (speeding up for loop)

Hi, I am running several linear regressions on my dataframe, in which I run a regression for every unique value in the column "item", apply the model to a new dataset (vector_new), and at the end union the results as the loop runs. The problem is th...

Latest Reply
Anonymous
Not applicable
  • 2 kudos

@Marcela Bejarano: One approach to speed up the process is to avoid using a loop and instead use Spark's groupBy and map functions. Here is an example:
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.reg...

1 More Replies
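A hedged sketch of the groupBy-based approach the reply describes, here using applyInPandas with scikit-learn so that one regression is fitted per "item" group in parallel across the cluster. Only the grouping column "item" comes from the post; the feature and target column names (x, y) are placeholders.

# Sketch: fit one linear regression per unique "item" in parallel, instead of looping.
import pandas as pd
from sklearn.linear_model import LinearRegression
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

result_schema = StructType([
    StructField("item", StringType()),
    StructField("intercept", DoubleType()),
    StructField("coef_x", DoubleType()),
])

def fit_group(pdf: pd.DataFrame) -> pd.DataFrame:
    model = LinearRegression().fit(pdf[["x"]], pdf["y"])
    return pd.DataFrame({
        "item": [pdf["item"].iloc[0]],
        "intercept": [float(model.intercept_)],
        "coef_x": [float(model.coef_[0])],
    })

# df is the source DataFrame with columns item, x, y (placeholder feature/target names)
per_item_models = df.groupBy("item").applyInPandas(fit_group, schema=result_schema)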
DataInsight
by New Contributor II
  • 1223 Views
  • 1 reply
  • 0 kudos

COPY INTO command to copy into a Delta table with a predefined schema when the CSV file has no headers

How do I use the COPY INTO command to load 200+ tables with 50+ columns into Delta Lake tables with a predefined schema? I am looking for a more generic approach to be handled in PySpark code. I am aware that we can pass the column expression into the sele...

Latest Reply
Lakshay
Databricks Employee
  • 0 kudos

Does your source data have the same number of columns as your target Delta tables? In that case, you can do it this way:
COPY INTO my_pipe_data
FROM 's3://my-bucket/pipeData'
FILEFORMAT = CSV
FORMAT_OPTIONS ('mergeSchema' = 'true', 'delimiter' = '|', 'header' ...

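A hedged sketch of a more generic PySpark-driven approach for many tables: build the COPY INTO statement from each target table's predefined schema so the headerless CSV columns (_c0, _c1, ...) are cast onto the right columns by position. Table names, paths, and the table_config mapping are placeholders.

# Sketch: generate COPY INTO per table from the target table's schema (placeholder names).
def copy_headerless_csv(target_table: str, source_path: str) -> None:
    fields = spark.table(target_table).schema.fields
    select_list = ", ".join(
        f"CAST(_c{i} AS {f.dataType.simpleString()}) AS {f.name}"
        for i, f in enumerate(fields)
    )
    spark.sql(f"""
        COPY INTO {target_table}
        FROM (SELECT {select_list} FROM '{source_path}')
        FILEFORMAT = CSV
        FORMAT_OPTIONS ('header' = 'false', 'delimiter' = ',')
    """)

# for table_name, path in table_config.items():   # table_config is an illustrative dict
#     copy_headerless_csv(table_name, path)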
aladda
by Databricks Employee
  • 3823 Views
  • 2 replies
  • 0 kudos

Resolved! How do I use the Copy Into command to copy data into a Delta Table? Looking for examples where you want to have a pre-defined schema

I've reviewed the COPY INTO docs here - https://docs.databricks.com/spark/latest/spark-sql/language-manual/delta-copy-into.html#examples but there's only one simple example. Looking for some additional examples that show loading data from CSV - with ...

Latest Reply
aladda
Databricks Employee
  • 0 kudos

Here's an example for a predefined schema. Using COPY INTO with a predefined table schema – the trick here is to CAST the CSV dataset into your desired schema in the SELECT statement of COPY INTO. Example below:
%sql
CREATE OR REPLACE TABLE copy_into_bronze_te...

1 More Replies
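To complement the truncated reply above, a hedged, self-contained sketch of the CAST trick with illustrative table, column, and path names (run from a notebook cell via spark.sql; the original %sql example used different names).

# Sketch: predefined target table, then CAST each positional CSV column in the SELECT of COPY INTO.
spark.sql("""
    CREATE OR REPLACE TABLE copy_into_bronze_example (
        id BIGINT,
        event_ts TIMESTAMP,
        amount DOUBLE
    )
""")

spark.sql("""
    COPY INTO copy_into_bronze_example
    FROM (
        SELECT CAST(_c0 AS BIGINT)    AS id,
               CAST(_c1 AS TIMESTAMP) AS event_ts,
               CAST(_c2 AS DOUBLE)    AS amount
        FROM '/Volumes/main/default/landing/events/'   -- placeholder source path
    )
    FILEFORMAT = CSV
    FORMAT_OPTIONS ('header' = 'false')
""")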
Abdurrahman
by New Contributor II
  • 4342 Views
  • 1 reply
  • 0 kudos

How to download a PyTorch model created via notebook and saved in a folder?

I have created a PyTorch model using Databricks notebooks and saved it in a folder in the workspace. MLflow is not used. When I try to download the files from the folder it exceeds the download limit. Is there a way to download the model locally into my s...

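One commonly used workaround for the download limit, sketched below with placeholder paths: copy the folder out of the workspace filesystem to DBFS from a notebook, and then pull it down with the Databricks CLI.

# Sketch: copy a model folder from the workspace filesystem to DBFS (placeholder paths).
# dbutils is available in Databricks notebooks; workspace files are visible under /Workspace.
dbutils.fs.cp(
    "file:/Workspace/Users/<me@example.com>/my_pytorch_model",   # placeholder source
    "dbfs:/FileStore/my_pytorch_model",
    recurse=True,
)

# Then, from a local machine with the Databricks CLI (flag name may vary by CLI version):
#   databricks fs cp --recursive dbfs:/FileStore/my_pytorch_model ./my_pytorch_model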
BogdanV
by New Contributor III
  • 2851 Views
  • 1 reply
  • 0 kudos

Resolved! Query ML Endpoint with R and Curl

I am trying to get a prediction by querying the ML endpoint on Azure Databricks with R. I'm not sure what the format of the expected data is. Is there any other problem with this code? Thanks!!!

R Code.png
Latest Reply
BogdanV
New Contributor III
  • 0 kudos

Hi Kaniz, I was able to find the solution. You should post this in the examples when you click "Query Endpoint". You only have code for Browser, Curl, Python, SQL. You should add a tab for R. Here is the solution:
library(httr)
url <- "https://adb-********...

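Whatever the client language, the serving endpoint expects one of the documented JSON request shapes; the minimal Python sketch below, with placeholder names, sends the same "dataframe_split" body that the R/httr call needs to post.

# Sketch: the request body shape for a model serving endpoint (placeholder names and URL).
import requests

url = "https://<workspace-host>/serving-endpoints/<endpoint-name>/invocations"  # placeholders
headers = {"Authorization": "Bearer <token>", "Content-Type": "application/json"}

body = {
    "dataframe_split": {
        "columns": ["feature_1", "feature_2"],   # placeholder feature names
        "data": [[1.0, 2.0], [3.0, 4.0]],
    }
}

resp = requests.post(url, headers=headers, json=body)
print(resp.json())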
VJ3
by New Contributor III
  • 1862 Views
  • 1 reply
  • 0 kudos

Security Controls to implement on Machine Learning Persona

Hello, hope everyone is doing well. You may be aware that we are using a Table ACL enabled cluster to ensure adequate security controls on Databricks. You may also be aware that we cannot use a Table ACL enabled cluster with the Machine Learning persona. ...

larsr
by New Contributor II
  • 1486 Views
  • 0 replies
  • 0 kudos

DBR CLI v0.216.0 failed to pass bundle variable for notebook task

After installing the new version of the CLI (v0.216.0), the bundle variable for the notebook task is not parsed correctly; see the code below:
tasks:
  - task_key: notebook_task
    job_cluster_key: job_cluster
    notebook_task:
      ...

Machine Learning
asset bundles
G-M
by Contributor
  • 1969 Views
  • 0 replies
  • 1 kudos

MLflow Experiments in Unity Catalog

Will MLflow Experiments be incorporated into Unity Catalog similar to models and feature tables? I feel like this is the final piece missing in a comprehensive Unity Catalog backed MLOps workflow. Currently it seems they can only be stored in a dbfs ...

johnp
by New Contributor III
  • 4112 Views
  • 1 reply
  • 0 kudos

pdb debugger on databricks

I am new to Databricks and trying to debug my Python application with the variable explorer by following the instructions from https://www.databricks.com/blog/new-debugging-features-databricks-notebooks-variable-explorer. I added the "import pdb" in the fi...

Latest Reply
johnp
New Contributor III
  • 0 kudos

I tested with some simple applications and it works as you described. However, the application I am debugging uses PySpark Structured Streaming, which runs continuously. After inserting pdb.set_trace(), the application paused at the breakpoint, but t...

Mesh
by New Contributor II
  • 6867 Views
  • 1 reply
  • 0 kudos

Optimizing for Recall in Azure AutoML UI

Hi all, I've been using Azure AutoML and noticed that I can choose 'recall' as my optimization metric in the notebook but not in the Azure AutoML UI. The Databricks documentation also doesn't list 'recall' as an optimization metric. Is there a reason ...

Latest Reply
Mesh
New Contributor II
  • 0 kudos

In the Databricks notebook itself, I can see that databricks.automl supports using recall as a primary metric. Help on function classify in module databricks.automl:
:param primary_metric: primary metric to select the best model. Each trial will...

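Following the help output quoted in the reply, a minimal sketch of requesting recall from the notebook API; the dataset, table, and column names are placeholders.

# Sketch: pass recall as the primary metric via the notebook API (placeholder dataset/columns).
from databricks import automl

summary = automl.classify(
    dataset=spark.table("my_catalog.my_schema.training_data"),  # placeholder table
    target_col="label",                                         # placeholder column
    primary_metric="recall",
    timeout_minutes=30,
)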

Connect with Databricks Users in Your Area

Join a Regional User Group to connect with local Databricks users. Events will be happening in your city, and you won’t want to miss the chance to attend and share knowledge.

If there isn’t a group near you, start one and help create a community that brings people together.

Request a New Group