Machine Learning
Dive into the world of machine learning on the Databricks platform. Explore discussions on algorithms, model training, deployment, and more. Connect with ML enthusiasts and experts.

Forum Posts

marcelo2108
by Contributor
  • 32496 Views
  • 25 replies
  • 0 kudos

Problem when serving a langchain model on Databricks

I'm trying to serve an LLM LangChain model and every time it fails with this message: [6b6448zjll] [2024-02-06 14:09:55 +0000] [1146] [INFO] Booting worker with pid: 1146 [6b6448zjll] An error occurred while loading the model. You haven't confi...

Latest Reply
marcelo2108
Contributor
  • 0 kudos

Hi @DataWrangler and team. I managed to solve the initial problem from some tips you gave. I used your code as a base and made some modifications adapted to what I have, i.e. no UC enabled and not able to use DatabricksEmbeddings, DatabricksVectorSearch ...

24 More Replies
mbejarano89
by New Contributor III
  • 9420 Views
  • 2 replies
  • 2 kudos

Resolved! Running multiple linear regressions in parallel (speeding up for loop)

Hi, I am running several linear regressions on my dataframe, where I run a regression for every unique value in the column "item", apply the model to a new dataset (vector_new), and union the results as the loop runs. The problem is th...

Latest Reply
Anonymous
Not applicable
  • 2 kudos

@Marcela Bejarano: One approach to speed up the process is to avoid using a loop and instead use Spark's groupBy and map functions. Here is an example: from pyspark.ml import Pipeline from pyspark.ml.feature import VectorAssembler from pyspark.ml.reg...
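The example above is cut off, so here is a minimal, self-contained sketch of the same groupBy-based idea using applyInPandas, fitting one scikit-learn regression per "item" group. The source DataFrame df and the column names x1, x2, y are hypothetical placeholders.

```python
# Sketch: fit one linear regression per "item" group in parallel, without a
# driver-side loop. df, x1, x2, y are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LinearRegression
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

result_schema = StructType([
    StructField("item", StringType()),
    StructField("intercept", DoubleType()),
    StructField("coef_x1", DoubleType()),
    StructField("coef_x2", DoubleType()),
])

def fit_group(pdf: pd.DataFrame) -> pd.DataFrame:
    # Runs once per item group on an executor.
    model = LinearRegression().fit(pdf[["x1", "x2"]], pdf["y"])
    return pd.DataFrame({
        "item": [pdf["item"].iloc[0]],
        "intercept": [model.intercept_],
        "coef_x1": [model.coef_[0]],
        "coef_x2": [model.coef_[1]],
    })

models_df = df.groupBy("item").applyInPandas(fit_group, schema=result_schema)
```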

1 More Replies
DataInsight
by New Contributor II
  • 1554 Views
  • 1 reply
  • 0 kudos

COPY INTO command to copy into a Delta table with a predefined schema when the CSV file has no headers

How do I use the COPY INTO command to load 200+ tables with 50+ columns into a Delta Lake table with a predefined schema? I am looking for a more generic approach to be handled in PySpark code. I am aware that we can pass the column expression into the sele...

Latest Reply
Lakshay
Databricks Employee
  • 0 kudos

Does your source data have the same number of columns as your target Delta tables? In that case, you can do it this way:
COPY INTO my_pipe_data
FROM 's3://my-bucket/pipeData'
FILEFORMAT = CSV
FORMAT_OPTIONS ('mergeSchema' = 'true', 'delimiter' = '|', 'header' ...
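For readability, here is the same statement sketched as a complete command run from a notebook via spark.sql. The table name and S3 path are the placeholders from the reply, and the 'header' = 'false' option reflects the headerless CSVs in the question.

```python
# Sketch of the COPY INTO statement above, run from PySpark.
# my_pipe_data and the S3 path are hypothetical placeholders.
spark.sql("""
    COPY INTO my_pipe_data
    FROM 's3://my-bucket/pipeData'
    FILEFORMAT = CSV
    FORMAT_OPTIONS (
        'mergeSchema' = 'true',
        'delimiter' = '|',
        'header' = 'false'
    )
""")
```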

aladda
by Databricks Employee
  • 4665 Views
  • 2 replies
  • 1 kudos

Resolved! How do I use the Copy Into command to copy data into a Delta Table? Looking for examples where you want to have a pre-defined schema

I've reviewed the COPY INTO docs here - https://docs.databricks.com/spark/latest/spark-sql/language-manual/delta-copy-into.html#examples but there's only one simple example. Looking for some additional examples that show loading data from CSV - with ...

Latest Reply
aladda
Databricks Employee
  • 1 kudos

Here's an example for a predefined schema. Using COPY INTO with a predefined table schema – the trick here is to CAST the CSV dataset into your desired schema in the SELECT statement of COPY INTO. Example below:
%sql CREATE OR REPLACE TABLE copy_into_bronze_te...
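The example is truncated, so below is a hedged sketch of the pattern it describes, run via spark.sql. The table name, columns, and path are hypothetical; with headerless CSVs the raw columns arrive as _c0, _c1, ... and are CAST to the target schema inside COPY INTO's SELECT.

```python
# Sketch: create the target table with the desired schema, then CAST the raw
# CSV columns inside COPY INTO's SELECT. All names/paths are hypothetical.
spark.sql("""
    CREATE OR REPLACE TABLE copy_into_bronze_example (
        id INT,
        amount DOUBLE,
        item STRING
    )
""")

spark.sql("""
    COPY INTO copy_into_bronze_example
    FROM (
        SELECT CAST(_c0 AS INT)    AS id,
               CAST(_c1 AS DOUBLE) AS amount,
               _c2                 AS item
        FROM 's3://my-bucket/csvData'
    )
    FILEFORMAT = CSV
    FORMAT_OPTIONS ('header' = 'false')
""")
```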

1 More Replies
Abdurrahman
by New Contributor II
  • 5815 Views
  • 1 reply
  • 0 kudos

How to download a PyTorch model created via notebook and saved in a folder?

I have created a PyTorch model using Databricks notebooks and saved it in a folder in the workspace. MLflow is not used. When I try to download the files from the folder it exceeds the download limit. Is there a way to download the model locally into my s...

BogdanV
by New Contributor III
  • 3389 Views
  • 1 reply
  • 0 kudos

Resolved! Query ML Endpoint with R and Curl

I am trying to get a prediction by querying the ML endpoint on Azure Databricks with R. I'm not sure what format the data is expected in. Is there any other problem with this code? Thanks!!!

R Code.png
Latest Reply
BogdanV
New Contributor III
  • 0 kudos

Hi Kaniz, I was able to find the solution. You should post this in the examples shown when you click "Query Endpoint" – you only have code for Browser, Curl, Python, and SQL. You should add a tab for R. Here is the solution:
library(httr)
url <- "https://adb-********...
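The R solution is truncated above. As a point of comparison (not the poster's code), here is a sketch of the same request in Python, showing the JSON payload shape that Databricks model serving endpoints accept; the workspace host, endpoint name, token, and feature names are placeholders.

```python
# Sketch: query a Databricks model serving endpoint over REST.
# Host, endpoint name, token, and feature names are hypothetical.
import requests

url = "https://<workspace-host>/serving-endpoints/<endpoint-name>/invocations"
headers = {
    "Authorization": "Bearer <personal-access-token>",
    "Content-Type": "application/json",
}
# Serving endpoints accept a JSON payload such as dataframe_records:
payload = {"dataframe_records": [{"feature_1": 1.0, "feature_2": 2.0}]}

response = requests.post(url, headers=headers, json=payload)
print(response.json())
```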

larsr
by New Contributor II
  • 1710 Views
  • 0 replies
  • 0 kudos

DBR CLI v0.216.0 failed to pass bundle variable for notebook task

After installing the new version of the CLI (v0.216.0), the bundle variable for the notebook task is not parsed correctly; see the code below:
tasks:
  - task_key: notebook_task
    job_cluster_key: job_cluster
    notebook_task:
      ...

Machine Learning
asset bundles
G-M
by Contributor
  • 2432 Views
  • 0 replies
  • 1 kudos

MLflow Experiments in Unity Catalog

Will MLflow Experiments be incorporated into Unity Catalog, similar to models and feature tables? I feel like this is the final piece missing from a comprehensive Unity Catalog-backed MLOps workflow. Currently it seems they can only be stored in a DBFS ...

johnp
by New Contributor III
  • 5021 Views
  • 1 reply
  • 0 kudos

pdb debugger on Databricks

I am new to Databricks and am trying to debug my Python application with the variable explorer by following the instructions from https://www.databricks.com/blog/new-debugging-features-databricks-notebooks-variable-explorer. I added "import pdb" in the fi...

Latest Reply
johnp
New Contributor III
  • 0 kudos

I tested with some simple applications and it works as you described. However, the application I am debugging uses PySpark Structured Streaming, which runs continuously. After inserting pdb.set_trace(), the application paused at the breakpoint, but t...
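For reference, a minimal sketch of the simple, non-streaming case the reply confirms works: an inline pdb.set_trace() breakpoint in a notebook cell. The function and values are made up for illustration.

```python
# Sketch of the non-streaming case: execution pauses at the breakpoint so
# variables can be inspected; type 'c' to continue. Names are hypothetical.
import pdb

def add_discount(price, discount):
    pdb.set_trace()  # interactive prompt appears in the cell output
    return price * (1 - discount)

add_discount(100.0, 0.15)
```

A Structured Streaming query, by contrast, runs on a background thread rather than in the cell's foreground execution, which likely explains why a breakpoint placed in the streaming code path does not behave like the cell-level case above.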

Mesh
by New Contributor II
  • 7401 Views
  • 1 reply
  • 0 kudos

Optimizing for Recall in Azure AutoML UI

Hi all, I've been using Azure AutoML and noticed that I can choose 'recall' as my optimization metric in the notebook but not in the Azure AutoML UI. The Databricks documentation also doesn't list 'recall' as an optimization metric. Is there a reason ...

Latest Reply
Mesh
New Contributor II
  • 0 kudos

In the Databricks notebook itself, I can see that databricks.automl supports using recall as a primary metric. Help on function classify in module databricks.automl: :param primary_metric: primary metric to select the best model. Each trial will...
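Based on that help text, a hedged sketch of selecting recall from a notebook; the dataset, target column, and timeout are hypothetical.

```python
# Sketch: run AutoML classification with recall as the primary metric, as the
# help text above indicates. train_df and "label" are hypothetical placeholders.
from databricks import automl

summary = automl.classify(
    dataset=train_df,            # Spark or pandas DataFrame
    target_col="label",
    primary_metric="recall",     # metric used to pick the best trial
    timeout_minutes=30,
)
print(summary.best_trial.metrics)
```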

kng88
by New Contributor II
  • 6397 Views
  • 6 replies
  • 7 kudos

How to save a model produced by distributed training?

I am trying to save a model after distributed training via the following code:
import sys
from spark_tensorflow_distributor import MirroredStrategyRunner
import mlflow.keras
mlflow.keras.autolog()
mlflow.log_param("learning_rate", 0.001)
import...

Latest Reply
Xiaowei
New Contributor III
  • 7 kudos

I think I finally worked this out. Here is the extra code to save out the model only once, from the first node:
context = pyspark.BarrierTaskContext.get()
if context.partitionId() == 0:
    mlflow.keras.log_model(model, "mymodel")
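Putting the pieces together, a minimal sketch of where that check sits inside a MirroredStrategyRunner training function; the placeholder model and the num_slots value are hypothetical stand-ins.

```python
# Sketch: MirroredStrategyRunner runs train() on each worker; only partition 0
# logs the model so it is written exactly once. Model and num_slots are hypothetical.
import pyspark
import mlflow.keras
from spark_tensorflow_distributor import MirroredStrategyRunner

def train():
    import tensorflow as tf

    # ... build and fit the Keras model here (placeholder model below) ...
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

    context = pyspark.BarrierTaskContext.get()
    if context.partitionId() == 0:
        mlflow.keras.log_model(model, "mymodel")

MirroredStrategyRunner(num_slots=2).run(train)
```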

5 More Replies
yorabhir
by New Contributor III
  • 2719 Views
  • 0 replies
  • 0 kudos

'error_code': 'INVALID_PARAMETER_VALUE', 'message': 'Too many sources. It cannot be more than 100'

I am getting the following error while saving a delta table in the feature store:
WARNING databricks.feature_store._catalog_client_helper: Failed to record data sources in the catalog. Exception: {'error_code': 'INVALID_PARAMETER_VALUE', 'message': 'To...

Mirko
by Contributor
  • 3462 Views
  • 2 replies
  • 1 kudos

AutoML dataset too large

Hello community, I have the following problem: I am using AutoML to solve a regression problem, but during preprocessing my dataset is sampled down to ~30% of the original amount. I am using runtime 14.2 ML. Driver: Standard_DS4_v2, 28GB memory, 8 cores. Worker: S...

Latest Reply
Mirko
Contributor
  • 1 kudos

I am pretty sure that I know what the problem was. I had a timestamp column (with second precision) as a feature. If it gets one-hot encoded, the dataset can get pretty large.
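A small sketch of one way to avoid that blow-up before handing the data to AutoML: coarsen the second-precision timestamp (or drop it entirely). The column names, target, and timeout are hypothetical.

```python
# Sketch: truncate the timestamp to day precision so AutoML does not one-hot
# encode millions of distinct values. df and column names are hypothetical.
from pyspark.sql import functions as F
from databricks import automl

prepared_df = (
    df.withColumn("event_date", F.date_trunc("day", F.col("event_ts")))
      .drop("event_ts")
)

summary = automl.regress(dataset=prepared_df, target_col="target", timeout_minutes=30)
```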

1 More Replies
Miki
by New Contributor II
  • 2552 Views
  • 2 replies
  • 0 kudos

Error: batch scoring with mlflow.keras flavor model

I am logging a trained Keras model using the following:
fe.log_model(
    model=model,
    artifact_path="wine_quality_prediction",
    flavor=mlflow.keras,
    training_set=training_set,
    registered_model_name=model_name
)
And when I call the following: predictions_...
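For context, the batch-scoring call that usually pairs with fe.log_model looks roughly like the sketch below; the model version and the scoring DataFrame are hypothetical, and the post's truncated predictions_... line is presumably a call of this shape.

```python
# Sketch of the typical FeatureEngineeringClient batch-scoring call that pairs
# with fe.log_model. Model version and batch_df are hypothetical placeholders.
from databricks.feature_engineering import FeatureEngineeringClient

fe = FeatureEngineeringClient()

predictions_df = fe.score_batch(
    model_uri=f"models:/{model_name}/1",  # registered model URI (name + version)
    df=batch_df,                          # must include the lookup keys from the training set
)
```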

Machine Learning
FeatureEngineeringClient
keras
mlflow
