- 556 Views
- 9 replies
- 1 kudos
Genie connection to Copilot agent in Copilot Studio
Hello! I’m trying to add a tool — Azure Databricks Genie — in Microsoft Copilot Studio for my agent, but I’m running into some difficulties. Is it possible to establish this connection using a Pro cluster, or does it only work with a serverless cluste...
I'm afraid I don't have any further suggestions. I'd suggest you raise a ticket with Microsoft on this.
- 60 Views
- 1 reply
- 1 kudos
What are the recommended practices for handling skewed datasets in Databricks?
What should you do when your dataset is skewed (some values appear many times while others appear only rarely) while working in Databricks?
Hi @Suheb, refer to the really good guide prepared by the Databricks team. When you have a skewed dataset, the primary things you can do are the following:
1. Filter skewed values
2. Apply skew hints
3. AQE skew optimization
4. Salting
A much more detailed description of abo...
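The two-stage idea behind salting (option 4) can be sketched in plain Python; `salted_counts` and its arguments are illustrative, not a Spark or Databricks API. In Spark you would add the salt column (e.g. with `F.rand()`) before the shuffle and aggregate twice.

```python
import random
from collections import defaultdict

def salted_counts(records, hot_keys, num_salts=4):
    """Two-stage count that spreads each hot key across num_salts buckets."""
    # Stage 1: append a random salt to hot keys so their rows would land
    # in different partitions, then aggregate per salted key.
    stage1 = defaultdict(int)
    for key in records:
        if key in hot_keys:
            key = f"{key}#{random.randrange(num_salts)}"
        stage1[key] += 1
    # Stage 2: strip the salt and merge the partial aggregates.
    final = defaultdict(int)
    for key, count in stage1.items():
        final[key.split("#")[0]] += count
    return dict(final)

data = ["a"] * 1000 + ["b"] * 3  # "a" is the skewed value
print(salted_counts(data, hot_keys={"a"}))  # → {'a': 1000, 'b': 3}
```

The point of the salt is only to break one oversized shuffle partition into `num_salts` smaller ones; the second pass restores the original grouping.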
- 141 Views
- 3 replies
- 0 kudos
Migrated model to Unity Catalog not seeing referenced serving endpoint
There was a model which was migrated from the workspace model registry to Unity Catalog. At the time of that model's initial creation, dependencies on other Databricks serving endpoints were configured using the "DatabricksServiceEndpoint" config in mlflow....
Workspace model registry worked with workspace-scoped serving endpoints. UC models and UC serving endpoints use metastore-wide semantics and different lookup rules. The saved path inside the model metadata still points to workspace-level endpoints th...
- 84 Views
- 1 reply
- 0 kudos
UC model deployment across Databricks workspaces
Hello, we have multiple Databricks workspaces, each representing an environment (dev, qa, rel, prod, etc.). We developed a model in the dev workspace and registered it in the UC model registry using MLflow. Now we are trying to find the best way to deploy this r...
You can use UC's centralized model registry and MLflow’s copy APIs. If all target workspaces attach to the same Unity Catalog metastore, reference and promote models via their 3‑level UC names; use MLflow’s copy_model_version to “copy” the exact arti...
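As a concrete sketch of that promotion flow (the catalog/schema/model names and the version number below are hypothetical; `copy_model_version` is the `MlflowClient` API mentioned above):

```python
def uc_model_name(catalog: str, schema: str, model: str) -> str:
    """Build the 3-level Unity Catalog model name."""
    return f"{catalog}.{schema}.{model}"

def promote(version: int) -> None:
    # Local import so the sketch can be read without mlflow installed.
    import mlflow
    from mlflow import MlflowClient

    mlflow.set_registry_uri("databricks-uc")
    client = MlflowClient()
    src_uri = f"models:/{uc_model_name('dev', 'ml', 'churn_model')}/{version}"
    # Copies the exact artifacts of that version under the prod-facing name.
    client.copy_model_version(
        src_model_uri=src_uri,
        dst_name=uc_model_name("prod", "ml", "churn_model"),
    )
```

This keeps one immutable set of artifacts while the 3-level names (`dev.ml.churn_model`, `prod.ml.churn_model`) carry the environment semantics.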
- 10997 Views
- 5 replies
- 4 kudos
How to use parallel processing with concurrent jobs in Databricks?
Question: It would be great if you could recommend how I go about solving the problem below. I haven't been able to find much help online. A. Background: A1. I have to do text manipulation using Python (like concatenation, convert to spaCy doc, get verbs...
I have to process data for n devices, each sending data every 5 seconds. I have a similar scenario where I have to take the last 3 hours of data and process it for all the devices for some key parameters. Now if I am doing it sequentially ...
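If the per-device work is driver-side Python (rather than one Spark job per device), a simple way to avoid processing sequentially is a thread pool; `process_device` below is a hypothetical stand-in for the real per-device logic:

```python
from concurrent.futures import ThreadPoolExecutor

def process_device(device_id: int):
    # Stand-in for the real work: pull the last 3 hours of data for this
    # device and compute its key parameters.
    return device_id, device_id * 2

device_ids = range(8)
# Run up to 4 devices at a time instead of one after another.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(process_device, device_ids))

print(results[3])  # → 6
```

The same fan-out can also be expressed as concurrent job runs (e.g. a for-each style task per device) when each unit of work is itself a Spark job.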
- 170 Views
- 1 reply
- 0 kudos
ai_parse_document Not Extracting Text from Images in PDF
Hello Team, I hope you are doing well. I am a student currently exploring Databricks and learning how to work with the ai_parse_document function. While experimenting, I encountered a couple of issues related to text extraction from images inside PDF...
Hello @rajcoder! This post appears to duplicate the one you recently posted. A response has already been provided to your recent post. I recommend continuing the discussion in that thread to keep the conversation focused and organised.
- 171 Views
- 0 replies
- 0 kudos
Building a Claude Code-Compatible Proxy for Databricks (with MCP, Git Tools, and Prompt Caching)
Many Databricks engineers have asked whether it's possible to use Claude Code CLI directly against Databricks-hosted Claude models instead of Anthropic's cloud API. This enables repo-aware AI workflows—navigation, diffs, testing, MCP tools—right insi...
- 6251 Views
- 4 replies
- 2 kudos
Accessing Databricks Volumes from a Serving Endpoint Using a Custom Model Class in Unity Catalog
Hi everyone, I’m looking to access Unity Catalog (UC) Volumes from a Databricks Serving Endpoint. Here’s my current setup: I have a custom AI model class for inference, which I logged to Unity Catalog using mlflow.pyfunc.log_model. I’ve created a ...
Serverless Model Serving does not mount the UC Volumes FUSE path (/Volumes), so references to “/Volumes/…” inside a custom pyfunc’s model code will fail at container build or runtime. The correct pattern is to package any required files (like your ...
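A minimal sketch of that artifact-packaging pattern, with hypothetical file and artifact names. In real code the class would subclass `mlflow.pyfunc.PythonModel`; it is kept a plain class here so the sketch runs without mlflow installed:

```python
class FileBackedModel:
    """Mirrors the load_context/predict protocol of mlflow.pyfunc.PythonModel."""

    def load_context(self, context):
        # context.artifacts maps the logical name given at log time to a
        # local path that Model Serving downloads into the container, so
        # no /Volumes FUSE access is needed at serving time.
        with open(context.artifacts["lookup_file"]) as f:
            self.lookup = f.read()

    def predict(self, context, model_input):
        return [self.lookup] * len(model_input)

# At logging time (requires mlflow; not executed in this sketch), the
# Volume file is read once and copied into the model's artifacts:
# import mlflow
# mlflow.pyfunc.log_model(
#     artifact_path="model",
#     python_model=FileBackedModel(),
#     artifacts={"lookup_file": "/Volumes/main/default/my_vol/lookup.txt"},
# )
```

After logging, the file travels with the model version, so the serving container never touches `/Volumes` directly.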
- 201 Views
- 2 replies
- 1 kudos
Resolved! Can serverless environments not use SynapseML's LightGBM?
When I use LightGBM, I get the following error on the line below: 'str' object has no attribute 'getParam'. Is this because serverless cannot run the JAR files that SynapseML depends on? File /local_disk0/.ephemeral_nfs/envs/pythonEnv-b0d5f8ce-8426-443...
Sorry, I just found out while checking the official documentation that, starting from November, dependency JAR files are supported. I’ll give it a try and see how well it works. If anyone has any insights, I would greatly appreciate your guidance.
- 348 Views
- 2 replies
- 3 kudos
Resolved! Databricks Model Serving Endpoint Fails: “_USER not found for feature table”
Hi Databricks Community, I’m trying to deploy a model serving endpoint that uses Databricks Feature Store (Unity Catalog, online tables). My offline and online feature tables are created and visible in Databricks. The model is logged with FeatureEnginee...
Thanks for the reply, it is very useful and comprehensive. I managed to find another solution to the problem, so I wanted to share some additional details on this topic: I was using the 15.4 LTS ML Runtime, which could have caused the problem. I did not swit...
- 3967 Views
- 5 replies
- 1 kudos
Resolved! Lacking support for column-level select grants or attribute-based access control
In the Unity Catalog launch and its accompanying blog post, one of the primary selling points was a set of granular access control features that would at least partially eliminate the need to create a multitude of separate table views and the attenda...
@at-khatri Databricks ABAC is in Public Preview now: https://docs.databricks.com/aws/en/data-governance/unity-catalog/abac/
- 349 Views
- 1 reply
- 4 kudos
Hackathon Project: Recipe Recommendation Engine with Traditional ML + Genie on Databricks Free Edit
Hi everyone, For the Databricks Free Edition Hackathon, I wanted to show that traditional ML still has a big role today, and how it can work hand-in-hand with Databricks’ newer AI tooling. As a concrete use case, I created a recipe recommendation eng...
This is amazing @hasnat_unifeye. Well done and good luck for the hackathon.
- 239 Views
- 1 reply
- 0 kudos
AutoML Deprecation?
Hi all, it looks like AutoML is set to be deprecated with the next major version (although the note isn't specific about whether that's 18). I haven't seen any announcement or alert about this impending change. Did I just miss it? I know we have teams using t...
Hi @kevin11, I guess it's their standard library deprecation policy. In their docs they mention that when a library is planned for removal, Databricks takes the following steps to notify customers. So they've added that note to the AutoML docs. And y...
- 399 Views
- 1 reply
- 0 kudos
Resolved! How to store & update a FAISS Index in Databricks
I’m currently using FAISS in a Databricks notebook to perform semantic search on text data. My current workflow looks like this: encode ~10k text entries using an embedding model, build a FAISS index in memory, run similarity searches using index.search...
Hello @ashfire, here’s a practical path to scale your FAISS workflow on Databricks, along with patterns to persist indexes, incrementally add embeddings, and keep metadata aligned. Best practice to persist/load FAISS indexes on Databricks: use faiss...
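One persistence pattern that plays well with the /Volumes FUSE mount is to write the index to fast local disk first and then copy it into the Volume (random writes straight to FUSE paths tend to fail). The paths below are hypothetical; `faiss.write_index`/`faiss.read_index` are FAISS's standard serialization calls:

```python
import shutil
from pathlib import Path

# Hypothetical locations; adjust to your catalog/schema/volume.
VOLUME_DIR = "/Volumes/main/default/indexes"
LOCAL_TMP = "/local_disk0/tmp"  # fast local disk on the driver

def index_path(volume_dir: str, name: str) -> str:
    return f"{volume_dir}/{name}.faiss"

def save_index(index, name: str, volume_dir: str = VOLUME_DIR) -> str:
    """Serialize locally, then copy the finished file into the Volume."""
    import faiss  # local import so the sketch reads without faiss installed

    local = f"{LOCAL_TMP}/{name}.faiss"
    Path(LOCAL_TMP).mkdir(parents=True, exist_ok=True)
    faiss.write_index(index, local)
    dest = index_path(volume_dir, name)
    shutil.copy(local, dest)
    return dest

def load_index(name: str, volume_dir: str = VOLUME_DIR):
    """Copy the index back to local disk and deserialize it."""
    import faiss

    local = f"{LOCAL_TMP}/{name}.faiss"
    shutil.copy(index_path(volume_dir, name), local)
    return faiss.read_index(local)
```

The Volume then acts as the durable store between notebook runs, while searches always run against the locally loaded copy.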
- 3679 Views
- 2 replies
- 0 kudos
DAB - Add/remove task depending on workspace.
I use DAB for deploying jobs. I want to add a specific task in dev only, but not in staging or prod. Is there any way to achieve this using DAB?
You can define specific resources by target in DAB, as shown here. This is valid for jobs and/or tasks. For instance, in my case: I think the best option (but not available as far as I know) would be to be able to define "include" sections by target, inste...
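Sketched in databricks.yml terms (the job, task, and notebook names are hypothetical): tasks declared under a target override are merged into the base job by `task_key`, so a task listed only under the dev target is deployed only there.

```yaml
# databricks.yml (sketch)
resources:
  jobs:
    my_job:
      name: my_job
      tasks:
        - task_key: main_task
          notebook_task:
            notebook_path: ./main.py

targets:
  dev:
    resources:
      jobs:
        my_job:
          tasks:
            - task_key: dev_only_task   # merged on top of main_task in dev
              notebook_task:
                notebook_path: ./debug_checks.py
  prod:
    mode: production   # deploys only main_task
```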