- 4351 Views
- 4 replies
- 2 kudos
Uploaded a Docker image to a cluster and used the cluster for an MLflow experiment, but no experiment is logged / there are no experiment runs. Why is this?
Hi! So I used this MLflow experiment I found on the Databricks website: https://docs.databricks.com/_static/notebooks/machine-learning-with-unity-catalog.html And I created this cluster using a custom Docker image I created myself: Usually when I c...
- 2 kudos
Have you tried the steps mentioned in the URL below? https://docs.databricks.com/clusters/custom-containers.html#step-3-launch-your-cluster
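If the container starts but no runs appear, it is worth ruling out the basics before debugging the image itself. A minimal sanity-check sketch, assuming the custom image includes the mlflow package (custom containers do not ship ML libraries preinstalled) and that the experiment path below is a hypothetical placeholder:

```python
import mlflow

# Custom containers start from minimal bases, so make sure `mlflow`
# is actually installed in the image before expecting runs to log.
mlflow.set_tracking_uri("databricks")

# Hypothetical experiment path -- replace with a workspace path you own.
mlflow.set_experiment("/Users/you@example.com/docker-image-test")

with mlflow.start_run():
    mlflow.log_param("cluster_image", "custom-docker")
    mlflow.log_metric("sanity_check", 1.0)
```

If this logs a run, the container is fine and the original notebook is simply never creating or selecting an experiment.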
- 3056 Views
- 7 replies
- 6 kudos
Why does this Databricks ML code get stuck?
I could not paste the code here because some words are not allowed, so I had to paste it elsewhere. This version is OK: https://justpaste.it/8xcr9 But this version gets stuck: https://justpaste.it/8nydt and it keeps looping and running...
- 6 kudos
Hey @THIAM HUAT TAN Hope all is well! Just wanted to check in on whether you were able to resolve your issue; if so, would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help. We'd love to hear from you...
- 1495 Views
- 0 replies
- 0 kudos
MLflow model serving: KeyError: 'python_function'
Hello, I am training a logistic regression on text with the help of a tf-idf vectorizer. This is done with MLflow and sklearn in Databricks. The model itself trains successfully in Databricks, and it is possible to accomplish predictions within the...
- 2668 Views
- 4 replies
- 0 kudos
Why is there a limit in /2.1/jobs/list?
I noticed that there is a limit of 25 in /2.1/jobs/list, while as far as I know /2.0/jobs/list had no limit. Why is this the case? Is it planned to increase the limit at some point? I know that the offset concept exists, but from my standpoint that i...
- 0 kudos
Jobs API 2.1 jobs/list responses are capped at a limit of 25. With the introduction of pagination in Jobs API 2.1, and to provide increased stability, a limit was introduced on the number of jobs returned per jobs/list response.
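To retrieve the full job list under the cap, you page through the endpoint. A sketch of offset-based paging, assuming the requests library and hypothetical host/token placeholders (newer revisions of the API replaced offset with page_token, so check your workspace's API version):

```python
import requests

HOST = "https://<workspace-url>"   # hypothetical workspace URL
TOKEN = "<personal-access-token>"  # hypothetical token

def list_all_jobs():
    """Page through /api/2.1/jobs/list 25 jobs at a time using `offset`."""
    jobs, offset = [], 0
    while True:
        resp = requests.get(
            f"{HOST}/api/2.1/jobs/list",
            headers={"Authorization": f"Bearer {TOKEN}"},
            params={"limit": 25, "offset": offset},
        )
        resp.raise_for_status()
        body = resp.json()
        jobs.extend(body.get("jobs", []))
        # The 2.1 response includes a has_more flag for pagination.
        if not body.get("has_more", False):
            return jobs
        offset += 25
```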
- 2215 Views
- 2 replies
- 0 kudos
Unable to create a model version using the REST API on managed MLflow on GCP. Getting a failed registration.
I am trying to use managed MLflow as a tracking server on GCP. I use REST APIs to connect to MLflow using a Databricks token. I can create an experiment and even the model, but when I try to create a model version I run into the following error. ...
- 0 kudos
Hi @Shounak Roychowdhury, just a friendly follow-up: do you still need help, or were you able to find the solution to this question? Please let us know.
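For reference, the registry call in question is POST /api/2.0/mlflow/model-versions/create, and a frequent cause of a failed registration is a source path the registry cannot read. A minimal sketch with hypothetical host, token, model name, and IDs:

```python
import requests

HOST = "https://<workspace-url>"  # hypothetical
TOKEN = "<databricks-token>"      # hypothetical

# The registered model must already exist (via registered-models/create),
# and `source` must point at the run's logged model artifacts.
resp = requests.post(
    f"{HOST}/api/2.0/mlflow/model-versions/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "name": "my_model",  # hypothetical registered model name
        "source": "dbfs:/databricks/mlflow-tracking/<exp-id>/<run-id>/artifacts/model",
        "run_id": "<run-id>",
    },
)
resp.raise_for_status()
print(resp.json())
```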
- 2558 Views
- 4 replies
- 2 kudos
Save VM cost when using the REST API to deploy models for online inference
ADB allows us to deploy models for online inference through a REST API. To that end, ADB creates a VM dedicated to serving a specific model. Data scientists can create and deploy several models for testing online inference, so the cost can rapidly ...
- 2 kudos
Hey @John Wilmar Herrera Gil Thank you so much for getting back to us. We really appreciate your time. Wish you a great Databricks journey ahead!
- 5436 Views
- 4 replies
- 5 kudos
Submitting multiple parallel jobs to the same job cluster causes the Azure vCPU quota manager to count the cluster's vCPUs on each invocation
I have an ADF pipeline which invokes a Databricks job six times in parallel. My assumption is that all jobs get routed to the same job cluster, which then handles all the invocations in parallel. This was working fine when I had five sources; when I add...
- 16486 Views
- 1 reply
- 5 kudos
Resolved! Ingest a .csv file with spaces in column names into a streaming table using Delta Live Tables
How do I ingest a .csv file with spaces in column names into a streaming table using Delta Live Tables? All of the fields should be read using the DLT Autoloader's default behavior for .csv files, i.e., as strings. Running the pipeline gives me an error about in...
- 5 kudos
After additional googling on "withColumnRenamed", I was able to replace all spaces in column names with "_" all at once by using select and alias instead (see the sketch below): @dlt.view(comment="") def vw_raw(): return (spark.readStream.format("cloudF...
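The reply's code is truncated above; a runnable reconstruction of the same select/alias approach, assuming the cloudFiles (Auto Loader) source and a hypothetical landing path:

```python
import dlt
from pyspark.sql import functions as F

@dlt.view(comment="Raw CSV with spaces in column names replaced by underscores")
def vw_raw():
    df = (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "csv")
        .option("header", "true")
        .load("/mnt/landing/raw")  # hypothetical source path
    )
    # Rename every column in one pass; backticks allow selecting
    # column names that contain spaces.
    return df.select(
        [F.col(f"`{c}`").alias(c.replace(" ", "_")) for c in df.columns]
    )
```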
- 1704 Views
- 1 reply
- 3 kudos
Feature Store - Feature Lookup with Filter
I am working with Feature Store to save the engineered features. However, in our specific case we have lots of feature tables and many separate target variables on which we want to train separate models. Now, for each of these models, we can leverage...
- 3 kudos
Thanks for taking the time to let us know how to make Databricks even better! @Mayank Srivastava I love that you included a real-life example as well. I think I know the right PM at Databricks who will be interested in this input. Thanks again for...
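Until a filter argument exists on the lookup itself, the usual workaround is to restrict columns via feature_names and filter the label DataFrame before building the training set. A sketch with hypothetical table and column names, using the databricks.feature_store client:

```python
from databricks.feature_store import FeatureLookup, FeatureStoreClient

fs = FeatureStoreClient()

# FeatureLookup has no row filter, so limit columns with feature_names
# and filter the *label* DataFrame before assembling the training set.
lookups = [
    FeatureLookup(
        table_name="shared.customer_features",   # hypothetical feature table
        feature_names=["age", "tenure_months"],  # only what this model needs
        lookup_key="customer_id",
    )
]

# Hypothetical labels table, filtered to one target before the join.
labels_df = spark.table("shared.labels").filter("target_name = 'churn'")

training_set = fs.create_training_set(
    df=labels_df,
    feature_lookups=lookups,
    label="churn",
)
train_df = training_set.load_df()
```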
- 1158 Views
- 1 reply
- 0 kudos
Hi team, I am facing an issue when deploying a Databricks model to AWS SageMaker. Kindly check the error below and advise me on this. Traceback (...
Hi team, I am facing an issue when deploying a Databricks model to AWS SageMaker. Kindly check the error below and advise me on this. Traceback (most recent call last): File "<string>", line 1, in <module> File "/miniconda/lib/python3.9/site-pack...
- 851 Views
- 0 replies
- 2 kudos
Unity Catalog Webinar: Join us to learn what's new, and what's coming in Unity Catalog. Governance for Data and AI is complex. Databricks Unity Cat...
Unity Catalog Webinar: Join us to learn what's new, and what's coming in Unity Catalog. Governance for Data and AI is complex. Databricks Unity Catalog provides a unified governance solution for all data and AI assets on any cloud, empowering data team...
- 2340 Views
- 0 replies
- 0 kudos
How to identify S3 object type (directory or file) created by Databricks?
The issue context is the Delta Lake connector in Trino: https://github.com/trinodb/trino/issues/13017 Trino identifies an S3 object as a directory or a file using the Content-Type header. Other query engines set application/x-directory in the case of directories, bu...
- 2611 Views
- 1 reply
- 2 kudos
Resolved! Store a secret only accessible to the current user
During an interactive notebook session, I want a user to be able to retrieve a secret specific to that user. I haven't decided on a storage mechanism, but I'm open to any mechanism that can scalably authorize access to a single user and that I ca...
- 2 kudos
I ended up using Databricks Secrets as the storage mechanism after learning from my account rep that the limit is soft and we can request a higher scope limit. In this case, each user gets a dedicated scope and no other users have access.
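Not from the thread itself, but a sketch of that per-user-scope pattern using the databricks-sdk package; the user name and secret value are hypothetical, and the scope creator keeps MANAGE on the scope while the target user gets READ only:

```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.service.workspace import AclPermission

w = WorkspaceClient()

user = "someone@example.com"          # hypothetical user
scope = f"user-{user.split('@')[0]}"  # one dedicated scope per user

w.secrets.create_scope(scope=scope)
w.secrets.put_secret(scope=scope, key="api_token", string_value="s3cr3t")
# Grant READ to just this user; the creator retains MANAGE on the scope.
w.secrets.put_acl(scope=scope, principal=user, permission=AclPermission.READ)
```

In a notebook session the user would then read it back with dbutils.secrets.get(scope, "api_token").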
- 3355 Views
- 4 replies
- 1 kudos
Resolved! ML Practitioner | ml 09 - automl notebook | error on importing databricks.automl
Executing the following code... from databricks import automl; summary = automl.regress(train_df, target_col="price", primary_metric="rmse", timeout_minutes=5, max_trials=10) ...generates the error: ImportError: cannot import name 'automl' from 'databricks...
- 1 kudos
I'm happy to see this particular subject.
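The usual cause of this ImportError is the cluster runtime: databricks.automl is preinstalled only on Databricks Runtime for Machine Learning, not on standard runtimes. A sketch of the same call on an ML runtime (train_df is assumed to be the course's training DataFrame; max_trials was deprecated in newer ML runtimes, so it is omitted here):

```python
# `databricks.automl` resolves only on a Databricks Runtime for
# Machine Learning (e.g., 11.3 LTS ML); on a standard runtime this
# import raises the ImportError shown above.
from databricks import automl

summary = automl.regress(
    train_df,               # the course notebook's training DataFrame
    target_col="price",
    primary_metric="rmse",
    timeout_minutes=5,
)
```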
- 2316 Views
- 2 replies
- 1 kudos
How to input initial centroids to K-Means or GMM clustering in SparkML?
Hi, I want to use the KMeans model or Gaussian Mixture Model algorithm for clustering using the SparkML library, in which I want to specify the initial centroids. The option of giving initial centroids exists in Spark MLlib (the RDD-based APIs), however...
- 1 kudos
@Kaniz Fatma I still haven't got an answer to my question!!!
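For KMeans specifically, the RDD-based MLlib API accepts user-supplied centroids via the initialModel parameter, while the DataFrame-based spark.ml API exposes no equivalent. A minimal sketch, assuming a Databricks notebook where `sc` is available (the toy points and centers are hypothetical):

```python
from pyspark.mllib.clustering import KMeans, KMeansModel
from pyspark.mllib.linalg import Vectors

# Hypothetical toy data with two obvious clusters.
points = sc.parallelize([
    Vectors.dense([0.0, 0.0]), Vectors.dense([0.2, 0.1]),
    Vectors.dense([9.8, 9.9]), Vectors.dense([10.0, 10.1]),
])

# Build an initial model from chosen centroids; the DataFrame-based
# spark.ml KMeans has no parameter for this.
initial = KMeansModel([Vectors.dense([0.0, 0.0]), Vectors.dense([10.0, 10.0])])
model = KMeans.train(points, k=2, maxIterations=20, initialModel=initial)
print(model.clusterCenters)
```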