- 40 Views
- 1 reply
- 0 kudos
Best practices for structuring databricks workspaces for CI/CD and ML workflows
Hi everyone, I’m designing the CI/CD process for our environment, focused on machine learning and data science projects, and I’d like to understand the best practices for workspace organization—especially when using Unity Cat...
When designing a CI/CD process for Databricks environments — especially for machine learning and data science projects using Unity Catalog — enterprise-scale workspace organization should balance isolation, governance, and collaboration. The recommen...
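A minimal sketch of one convention this answer points toward: one Unity Catalog catalog per environment, with CI/CD resolving fully qualified names at deploy time. All catalog, schema, and table names here are hypothetical.

```python
# Hypothetical environment-to-catalog mapping used by a CI/CD pipeline to
# resolve Unity Catalog names at deploy time (names are illustrative only).
ENV_CATALOGS = {
    "dev": "ml_dev",
    "staging": "ml_staging",
    "prod": "ml_prod",
}

def fq_name(env: str, schema: str, table: str) -> str:
    """Return a fully qualified Unity Catalog name for the given environment."""
    return f"{ENV_CATALOGS[env]}.{schema}.{table}"

# A job parameter (e.g., from the CI/CD system) selects the target environment:
print(fq_name("staging", "features", "customer_features"))
# -> ml_staging.features.customer_features
```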
- 57 Views
- 2 replies
- 0 kudos
Safe Update Strategy for Online Feature Store Without Endpoint Disruption
Hi Team, we are implementing the Databricks Online Feature Store using the Lakebase architecture and have run into some constraints during development. Requirements: deploy an offline table as a synced online table and create a feature spec that queries from th...
The recommended way to safely update an online Databricks Feature Store without breaking the serving endpoint or causing downtime involves a version-controlled, atomic update pattern that preserves schema consistency and endpoint stability. Key Issue...
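A minimal sketch of that versioned-swap idea, assuming the databricks-feature-engineering client and the Databricks SDK; the table, spec, and endpoint names are hypothetical, and this is an illustration of the pattern rather than an official recipe.

```python
from databricks.feature_engineering import FeatureEngineeringClient, FeatureLookup
from databricks.sdk import WorkspaceClient
from databricks.sdk.service.serving import ServedEntityInput

fe = FeatureEngineeringClient()

# 1. Publish the updated spec under a NEW versioned name instead of
#    mutating the spec the endpoint currently serves.
fe.create_feature_spec(
    name="main.ml.user_features_spec_v2",
    features=[FeatureLookup(table_name="main.ml.user_features", lookup_key="user_id")],
)

# 2. Swap the serving endpoint over to the new spec; the endpoint keeps
#    serving the old config until the update completes, avoiding downtime.
w = WorkspaceClient()
w.serving_endpoints.update_config(
    name="user-features-endpoint",
    served_entities=[ServedEntityInput(entity_name="main.ml.user_features_spec_v2")],
)
```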
- 37 Views
- 2 replies
- 0 kudos
how to speed up inference?
Hi guys, I'm new to this concept, but we have several ML models that share the same code structure. What I don’t fully understand is how to handle different types of models efficiently — right now, I need to loop through my items to get the ...
Hi @jeremy98, I have not tried this myself, but could you use Python's multiprocessing library to assign inference for different models to different CPU cores? Also, here's a useful blog: https://docs.datab...
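That suggestion might look like this minimal sketch, assuming MLflow pyfunc models; the model URIs and batch path are hypothetical placeholders.

```python
from multiprocessing import Pool

import mlflow
import pandas as pd

MODEL_URIS = [  # hypothetical registered model URIs
    "models:/model_a/1",
    "models:/model_b/1",
]

def score(model_uri: str) -> pd.DataFrame:
    # Load inside the worker: loaded models are often not picklable, so each
    # process loads its own copy and scores independently on its own core.
    model = mlflow.pyfunc.load_model(model_uri)
    batch = pd.read_parquet("/dbfs/tmp/inference_batch.parquet")  # hypothetical path
    return pd.DataFrame({"model": model_uri, "prediction": model.predict(batch)})

if __name__ == "__main__":
    with Pool(processes=len(MODEL_URIS)) as pool:
        results = pool.map(score, MODEL_URIS)
    print(pd.concat(results).head())
```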
- 44 Views
- 1 reply
- 1 kudos
How does Databricks AutoML handle null imputation for categorical features by default?
Hi everyone, I’m using Databricks AutoML (classification workflow) on Databricks Runtime 10.4 LTS ML+, and I’d like to clarify how missing (null) values are handled for categorical (string) columns by default. From the AutoML documentation, I see that:...
Hello @spearitchmeta, I looked internally and found some information that should shed light on your question. Here’s how missing (null) values in categorical (string) columns are handled in Databricks AutoML on Dat...
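For intuition, here is a small sklearn sketch of the general treatment described (imputing categorical nulls as their own category before encoding); it illustrates the idea, not AutoML's exact internal pipeline.

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

df = pd.DataFrame({"color": ["red", np.nan, "blue", np.nan]})

categorical_pipeline = Pipeline([
    # Nulls become their own category instead of dropping the rows.
    ("impute", SimpleImputer(strategy="constant", fill_value="__missing__")),
    ("encode", OneHotEncoder(handle_unknown="ignore")),
])

encoded = categorical_pipeline.fit_transform(df[["color"]])
print(encoded.toarray())  # one column per category, including "__missing__"
```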
- 2611 Views
- 1 reply
- 1 kudos
Can I Replicate Azure Document Intelligence's Custom Table Extraction in Databricks?
I am using Azure Document Intelligence to get data from a table in a PDF file. The table's headers do not visually align with the values, so the standard and pre-built models cannot correctly read the data. I have built a custom-trained Azure ...
Hi @AlbertWang, you can easily achieve this using Agent Bricks – Information Extraction. Your PDFs will be converted to text using the ai_parse_document function and saved in a Databricks table. You can then create the agent using that text table to ge...
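A minimal sketch of that flow, assuming the ai_parse_document function is enabled in your workspace; the volume path and table name are hypothetical, and spark is the notebook session.

```python
# Parse PDFs from a Unity Catalog volume into a table of extracted content.
parsed = spark.sql("""
    SELECT path, ai_parse_document(content) AS parsed
    FROM read_files('/Volumes/main/default/pdfs/', format => 'binaryFile')
""")
parsed.write.mode("overwrite").saveAsTable("main.default.parsed_pdfs")
```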
- 3122 Views
- 3 replies
- 7 kudos
Spark context not implemented Error when using Databricks connect
I am developing an application using Databricks Connect, and when I try to use VectorAssembler I get an "sc is not None" AssertionError. Is there a workaround for this?
Ran into exactly the same issue as @Łukasz1. After some googling, I found this SO post explaining the issue: later versions of Databricks Connect no longer support the SparkContext API. Our code is failing because the underlying library is trying to f...
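A minimal sketch of how this surfaces under Databricks Connect v2, which creates a Spark Connect session; the handling shown is illustrative only.

```python
from databricks.connect import DatabricksSession

spark = DatabricksSession.builder.getOrCreate()

try:
    sc = spark.sparkContext  # not implemented on Spark Connect sessions
except Exception as exc:
    # pyspark.ml helpers that assert on sc fail the same way; run that code
    # in a notebook/job on the cluster instead of through Databricks Connect.
    print(f"SparkContext unavailable: {exc!r}")
```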
- 200 Views
- 1 reply
- 1 kudos
Best Practices for Collaborative Notebook Development in Databricks
Hi everyone! I’m looking to learn more about effective strategies for collaborative development in Databricks notebooks. Since notebooks are often used by multiple data scientists, analysts, and engineers, managing collaboration efficiently is critic...
For version control, use Git integration with Databricks Repos. Core features: Databricks Git Folders (Repos) provides native Git integration with a visual UI and REST API access, and supports all major providers: GitHub, GitLab, Azure DevOps, Bi...
- 2184 Views
- 4 replies
- 2 kudos
Resolved! Unable to Access Delta View from Azure Machine Learning via Delta Sharing – Is View Access Supported
I am able to access the tables, but when accessing the view I get the error below. Response from server: { 'details': [ { '@type': 'type.googleapis...
View sharing is supported (now GA) in Databricks. See https://docs.databricks.com/aws/en/delta-sharing/create-share#add-views-to-a-share. You likely need a workspace ID override. Creating the recipient from a workspace with proper access and res...
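A minimal sketch of adding a view to a share, following the linked docs; the share and view names are hypothetical, and spark is the notebook session.

```python
# Run from a workspace that owns the share; the view must live in Unity Catalog.
spark.sql("ALTER SHARE analytics_share ADD VIEW main.reporting.daily_summary")

# Verify what the share now exposes to recipients.
spark.sql("SHOW ALL IN SHARE analytics_share").show(truncate=False)
```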
- 219 Views
- 1 reply
- 0 kudos
GenAI experiment tracing does not render markdown images
When traces include base64-encoded images in Markdown, they do not render properly. This makes analyzing traces that include images difficult. Just for context, the same trace renders as expected in other tracing tools like LangSmith. An example of...
Thank you for the flag, @juandados! I will ping my product team to get a timeline for you.
- 753 Views
- 1 reply
- 1 kudos
AutoML Forecast fails when using feature_store_lookups with timestamp key
We are running AutoML Forecast on Databricks Runtime 15.4 ML LTS and 16.4 ML LTS, using a time series dataset with temporal covariates from the Feature Store (e.g. a corona_dummy feature). We use feature_store_lookups with lookup_key and timestamp_lo...
Hi @ostae911, are you still facing this issue? It looks like your usage of the timestamp column is correct. It can be used as a primary key on the time series feature table. Is it possible that there are other duplicate columns between the training ...
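For reference, a minimal sketch of the AutoML forecast call with a time series feature table lookup; the dataset, table, and column names are hypothetical.

```python
from databricks import automl

train_df = spark.table("main.forecasting.sales_train")  # hypothetical training data

summary = automl.forecast(
    dataset=train_df,          # avoid columns that duplicate feature table columns
    target_col="demand",
    time_col="date",
    frequency="d",
    horizon=14,
    feature_store_lookups=[{
        "table_name": "main.features.corona_dummy_features",
        "lookup_key": ["store_id"],
        "timestamp_lookup_key": "date",
    }],
)
```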
- 1410 Views
- 3 replies
- 1 kudos
Resolved! Serving Endpoint Disappears After One Day
I'm encountering an issue where a serving endpoint I create disappears from the list of serving endpoints after a day. This has happened both when I created the endpoint from the Databricks UI and using the Databricks SDK.
Hey @prashant_089, what you are experiencing should not happen on its own except in some extremely unusual circumstances. If you are using Databricks Free Edition, you should ignore everything below. Here are some troubleshooting suggestions/tips: ...
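A minimal sketch, assuming the Databricks SDK, for confirming whether the endpoint still exists and inspecting its state; "my-endpoint" is a hypothetical name.

```python
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

names = [e.name for e in w.serving_endpoints.list()]
if "my-endpoint" in names:
    endpoint = w.serving_endpoints.get("my-endpoint")
    print(endpoint.state)  # readiness / config-update status
else:
    print("Endpoint no longer exists; check audit logs for a delete event.")
```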
- 2199 Views
- 3 replies
- 0 kudos
Resolved! Problem loading a pyfunc model in job run
Hi, I'm currently working on an automated job to predict forecasts using a notebook that works just fine when I run it manually but keeps failing when scheduled. Here is my code: import mlflow # Load model as a PyFuncModel. loaded_model = mlflow.pyf...
Hey AmineM! If your MLflow model loads fine in a Databricks notebook but fails in a scheduled job on serverless compute with an error like `TypeError: code() argument 13 must be str, not int`, the root cause is almost always a mismatch between the ...
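A minimal sketch for diagnosing that mismatch: compare the job's Python environment against the dependencies recorded with the model before loading it. The model URI is hypothetical.

```python
import sys

import mlflow

model_uri = "models:/forecast_model/1"  # hypothetical

# Fetch the requirements.txt logged with the model and diff it (by eye or by
# script) against the job cluster's environment; Python versions must match.
reqs_path = mlflow.pyfunc.get_model_dependencies(model_uri)
print(open(reqs_path).read())
print("Job Python:", sys.version)

loaded_model = mlflow.pyfunc.load_model(model_uri)
```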
- 1036 Views
- 4 replies
- 2 kudos
Resolved! What is the most efficient way of running sentence-transformers on a Spark DataFrame column?
We're trying to run the bundled sentence-transformers library from SBERT in a notebook running Databricks ML 16.4 on an AWS g4dn.2xlarge [T4] instance. However, we're experiencing out-of-memory crashes and are wondering what the optimal way to run sentenc...
If you didn't get this to work with Pandas API on Spark, you might also try importing and instantiating the SentenceTransformer model inside the pandas UDF for proper distributed execution. Each executor runs code independently, and when Spark execut...
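A minimal sketch of that suggestion, using an iterator pandas UDF so the model loads once per worker task rather than once per batch; the model name and table are hypothetical.

```python
from typing import Iterator

import pandas as pd
from pyspark.sql.functions import pandas_udf
from pyspark.sql.types import ArrayType, FloatType

@pandas_udf(ArrayType(FloatType()))
def embed(batches: Iterator[pd.Series]) -> Iterator[pd.Series]:
    # Import and instantiate inside the UDF: this runs on the executor, once
    # per task, so the model is never pickled and shipped from the driver.
    from sentence_transformers import SentenceTransformer
    model = SentenceTransformer("all-MiniLM-L6-v2")
    for texts in batches:
        vectors = model.encode(texts.tolist(), batch_size=32)
        yield pd.Series([v.tolist() for v in vectors])

df = spark.table("main.default.documents")      # hypothetical source table
df = df.withColumn("embedding", embed("text"))  # "text" is the string column
```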
- 255 Views
- 1 reply
- 0 kudos
Inference Tables Empty
Hello, I have been using the Databricks Free platform for a while. Everything seems to work well. However, I've been trying to generate the payload from the deployed endpoint, and I always get an empty inference table. When I check the configuration, I got ...
Hi @salesbrj, most probably this is related to a limitation of the Free Edition. The limitations section lists the following entry: "No custom models on GPU or batch inference" — https://docs.databricks.com/aws/en/getting-started/free-edition-limitations
- 1391 Views
- 3 replies
- 1 kudos
Distributed SparkXGBRanker training: failed barrier ResultStage
I'm following a variation of the tutorial [here](https://assets.docs.databricks.com/_extras/notebooks/source/xgboost-pyspark-new.html) to train a `SparkXGBRanker` in distributed mode. However, the line pipeline_model = pipeline.fit(data) is throwing...
You already mentioned that you turned off autoscaling; please try setting num_workers as well. Step 1: Disable dynamic resource allocation with spark.dynamicAllocation.enabled = false. Step 2: Configure num_workers to match the fixed resources. After disabling dy...
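A minimal sketch of those two steps; the column names and worker count are hypothetical. Note that spark.dynamicAllocation.enabled is a cluster-level setting, configured in the cluster's Spark config rather than at runtime.

```python
from xgboost.spark import SparkXGBRanker

# Step 1 (cluster Spark config, not runtime):
#   spark.dynamicAllocation.enabled false

# Step 2: keep num_workers at or below the number of fixed task slots so the
# barrier-mode training stage can acquire all of its workers at once.
ranker = SparkXGBRanker(
    qid_col="query_id",        # grouping column for ranking (hypothetical)
    label_col="label",
    features_col="features",
    num_workers=4,             # match your fixed executor/core count
)

train_df = spark.table("main.ml.ranking_train")  # hypothetical prepared data
model = ranker.fit(train_df)
```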