- 1055 Views
- 2 replies
- 0 kudos
GitHub Actions workflow cannot find the Databricks Unity Catalog and its tables
Context: Running the train_model_py.py file stored in Databricks through GitHub Actions. The notebook reads the Unity Catalog tables for pre-processing and works fine when run through the Databricks UI. However, it gives an error when run through Git...
Hi @sagarb, it sounds like a permissions or setup issue... what is the error you are hitting?
- 2018 Views
- 0 replies
- 0 kudos
databricks-vectorsearch 0.53 unable to use similarity_search()
I have an issue with the databricks-vectorsearch package. Version 0.51 suddenly stopped working this week because it now expected me to provide azure_tenant_id in addition to the service principal's client ID and secret. After supplying the tenant ID, it showed s...
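A minimal sketch of the setup the post describes, assuming databricks-vectorsearch with service principal authentication; the azure_tenant_id parameter follows the post's own description (it may differ by version), and the workspace URL, endpoint, and index names are placeholders:

```python
# Hedged sketch, not a verified reproduction: parameter names follow the
# post's description of databricks-vectorsearch and may differ by version.
from databricks.vector_search.client import VectorSearchClient

client = VectorSearchClient(
    workspace_url="https://<workspace>.azuredatabricks.net",  # placeholder
    service_principal_client_id="<client-id>",
    service_principal_client_secret="<client-secret>",
    azure_tenant_id="<tenant-id>",  # newly required, per the post
)

index = client.get_index(
    endpoint_name="<endpoint-name>",
    index_name="<catalog>.<schema>.<index>",
)
# The call that the post reports as failing in 0.53:
results = index.similarity_search(
    query_text="example query",
    columns=["id", "text"],
    num_results=5,
)
```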
- 2266 Views
- 1 replies
- 0 kudos
Resolved! Custom model serving using Databricks Asset Bundles
I am using MLflow to register a custom model (Python model) in Unity Catalog, and Databricks Asset Bundles to create a serving endpoint for that custom model. I was able to create the serving endpoint using DABs, but I want to deploy the model by using ...
Hi @MLOperator, Since model_serving_endpoints only accepts a version number for a served entity, I think that is not possible. However, the get-by-alias version API can be used to retrieve a version number from a model alias name. Then the model name...
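A minimal sketch of that approach, assuming MLflow 2.x with the Unity Catalog registry; the model name and alias below are placeholders:

```python
import mlflow
from mlflow import MlflowClient

mlflow.set_registry_uri("databricks-uc")  # Unity Catalog model registry
client = MlflowClient()

# Resolve an alias (e.g. "champion") to a concrete version number, which can
# then be passed to the model_serving_endpoints resource in the bundle.
mv = client.get_model_version_by_alias("main.ml.my_custom_model", "champion")
print(mv.version)
```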
- 2991 Views
- 1 replies
- 0 kudos
Convert a TensorFlow dataset to NumPy tuples
Hello everyone, here is the sequence of steps I have followed: 1. I used Petastorm to convert the Spark DataFrame to a tf.dataset: import numpy as np # Read the Petastorm dataset and convert it to a TensorFlow Dataset with converter.make_tf_dataset() as...
The error occurs because make_tf_dataset() returns an inferred_schema_view object, which is a Petastorm wrapper representing the dataset schema. This object does not have a .numpy() attribute, so calling batch.numpy() will throw the AttributeError. ...
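A minimal sketch of the fix described above, assuming a Petastorm converter over a Spark DataFrame with hypothetical columns "features" and "label"; the cache path is a placeholder:

```python
from petastorm.spark import SparkDatasetConverter, make_spark_converter

# Petastorm requires a parent cache directory (path is a placeholder).
spark.conf.set(SparkDatasetConverter.PARENT_CACHE_DIR_URL_CONF, "file:///dbfs/tmp/petastorm")
converter = make_spark_converter(df)  # df: Spark DataFrame with "features" and "label"

with converter.make_tf_dataset() as dataset:
    for batch in dataset:
        # Each batch is a namedtuple of tensors, not a tensor itself, so
        # call .numpy() on the individual fields rather than on the batch.
        features = batch.features.numpy()
        labels = batch.label.numpy()
```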
- 938 Views
- 1 replies
- 0 kudos
Interactive EDA task in a Job Workflow
I am trying to configure an interactive EDA task as part of a job workflow. I'd like to be able to trigger a workflow, perform some basic analysis then proceed to a subsequent task. I haven't had any success freezing execution. Also, the job workflow...
Hello @cmd0160, Freezing job execution to perform interactive tasks directly within a job workflow is not natively supported in Databricks. The job workflow UI and the notebook UI serve different purposes, and the interactive capabilities you find in...
- 3779 Views
- 5 replies
- 1 kudos
DatabricksApiException Error in Microsoft Azure Databricks
I am doing the Machine Learning Associate course, and right at the start I am getting an error while running in Azure Databricks. Can somebody help me solve this error?
The error message indicates that the Workspace Feature Store has been deprecated in your Azure Databricks workspace. The error occurs because the Feature Store API is no longer supported in your environment. How to Fix It: Check If Your Databricks Workspac...
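As a hedged sketch of the migration direction, assuming a Unity Catalog-enabled workspace: the deprecated Workspace Feature Store client is replaced by the Feature Engineering in Unity Catalog client (databricks-feature-engineering package); the table, column, and DataFrame names below are placeholders:

```python
from databricks.feature_engineering import FeatureEngineeringClient

fe = FeatureEngineeringClient()
fe.create_table(
    name="main.ml.customer_features",  # placeholder three-level UC name
    primary_keys=["customer_id"],
    df=features_df,                    # placeholder Spark DataFrame
    description="Feature table registered in Unity Catalog",
)
```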
- 1256 Views
- 2 replies
- 1 kudos
Deploy, train, and monitor an AI/ML model in Databricks in an automated way
Hi Team, I have a Databricks environment where I want to deploy, train, and monitor an ML model in an automated way (GitHub Actions). How can I do that?
Hi there @ncparab13,
- https://docs.databricks.com/aws/en/dev-tools/bundles/mlops-stacks
- https://docs.databricks.com/aws/en/machine-learning/mlops/ci-cd-for-ml
- https://docs.databricks.com/aws/en/repos/ci-cd-techniques-with-repos
Here are some li...
- 3638 Views
- 3 replies
- 1 kudos
Gemini through Mosaic Gateway
I am trying to configure the Gemini Vertex API in Databricks. In simple Python code, everything works fine, which indicates that I have correctly set up the API and credentials. Error message: {"error_code":"INVALID_PARAMETER_VALUE","message":"INVALI...
With support from a helpful Databricks employee, we found out that the problem was that the `private_key` / `private_key_plaintext` field needs to be the entire JSON object that GCP creates for the service account, not just the private key string from...
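A hedged sketch of what that endpoint config might look like, assuming the external-models serving API; the field names follow the Databricks external models documentation as best I recall and should be verified, and all names, regions, and secret paths are placeholders. The key point from the reply: `private_key` references the full service-account JSON, not just the key string.

```python
# Hedged sketch: payload shape is an assumption to illustrate the fix,
# not a verified request. Verify field names against the current API docs.
import requests

payload = {
    "name": "gemini-endpoint",  # placeholder endpoint name
    "config": {
        "served_entities": [{
            "external_model": {
                "name": "gemini-1.5-pro",           # placeholder model name
                "provider": "google-cloud-vertex-ai",
                "task": "llm/v1/chat",
                "google_cloud_vertex_ai_config": {
                    "project_id": "my-gcp-project",  # placeholder
                    "region": "us-central1",         # placeholder
                    # Entire service-account JSON object, stored in a secret:
                    "private_key": "{{secrets/my_scope/gcp_sa_json}}",
                },
            },
        }],
    },
}
resp = requests.post(
    "https://<workspace>/api/2.0/serving-endpoints",  # placeholder host
    headers={"Authorization": "Bearer <token>"},       # placeholder token
    json=payload,
)
```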
- 4130 Views
- 6 replies
- 2 kudos
Unable to Use VectorAssembler in PySpark 3.5.0 Due to Whitelisting
Hi, I am currently using PySpark version 3.5.0 on my Databricks cluster. Despite setting the required configuration using the command spark.conf.set("spark.databricks.ml.whitelist", "true"), I am still encountering an issue while trying to use the Ve...
Glad to hear it works for you now! The ML runtime has a variety of preinstalled integrations, such as MLflow, which provides ML lifecycle management, MLOps, etc. Please explore them if you haven't done so already, to establish the benefits of the extra
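For reference, a minimal sketch of the VectorAssembler call in question, assuming an ML runtime cluster (where MLlib classes are not blocked by whitelisting); the column names are placeholders:

```python
from pyspark.ml.feature import VectorAssembler

# Combine numeric input columns into a single feature vector column.
assembler = VectorAssembler(inputCols=["age", "income"], outputCol="features")
assembled_df = assembler.transform(df)  # df: a Spark DataFrame with those columns
```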
- 547 Views
- 1 replies
- 0 kudos
Unable to publish notebook
Hi, I am unable to publish a notebook from my workspace in Community Edition. It just gives me a blank error message.
Hi @Saty1 Publishing a notebook in Databricks Community Edition can sometimes encounter issues due to various reasons, such as browser compatibility, network issues, or limitations within the Community Edition itself. Here are some steps you can take...
- 1980 Views
- 0 replies
- 0 kudos
How to transpose a Spark DataFrame using the R API?
Hello, I recently discovered the sparklyr package and found it quite useful. After setting up the Spark connection, I can apply dplyr functions to manipulate large tables. However, it seems that any functions outside of dplyr cannot be used on Spark v...
- 2492 Views
- 0 replies
- 0 kudos
AutoGluon MLflow integration
I am working on a personalized price package recommendation and implemented AutoGluon code integrated with MLflow. The code has been created in a modular fashion to be used by other team members. They just need to pass the data, target column a...
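A hedged sketch of one common pattern for this kind of integration, since MLflow has no built-in AutoGluon flavor: wrap the predictor in a pyfunc model and log the saved predictor directory as an artifact. All names below are placeholders, and the wrapper is an assumption about the setup described, not the poster's actual code:

```python
import mlflow
from autogluon.tabular import TabularPredictor

class AutoGluonModel(mlflow.pyfunc.PythonModel):
    """Pyfunc wrapper that reloads a saved AutoGluon predictor at serve time."""

    def load_context(self, context):
        self.predictor = TabularPredictor.load(context.artifacts["predictor_path"])

    def predict(self, context, model_input):
        return self.predictor.predict(model_input)

predictor = TabularPredictor(label="target").fit(train_df)  # train_df: pandas DataFrame
with mlflow.start_run():
    mlflow.pyfunc.log_model(
        artifact_path="model",
        python_model=AutoGluonModel(),
        artifacts={"predictor_path": predictor.path},  # saved predictor directory
    )
```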
- 2091 Views
- 0 replies
- 0 kudos
Pickle/joblib.dump a pre-processing function defined in a notebook
I've built a custom MLflow model class which I know works. As part of a given run, the model class uses `joblib.dump` to store necessary parameters on the Databricks DBFS before logging them as artifacts in the MLflow run. This works fine when usi...
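A minimal sketch of the dump-then-log flow the question describes, assuming joblib and MLflow; the path and the object being serialized are placeholders:

```python
import joblib
import mlflow

params = {"scaler": my_scaler}  # hypothetical pre-processing objects
joblib.dump(params, "/dbfs/tmp/params.joblib")  # serialize to DBFS

with mlflow.start_run():
    # Attach the serialized file to the run as an artifact.
    mlflow.log_artifact("/dbfs/tmp/params.joblib")
```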
- 2125 Views
- 0 replies
- 0 kudos
AutoML master notebook failing
I have recently been able to run AutoML successfully on a certain dataset. But it has just failed on a second dataset of similar construction, before being able to produce any machine learning training runs or output. The Experiments page says ```Mo...
- 927 Views
- 2 replies
- 0 kudos
Unable to convert R dataframe to spark dataframe
Hi All, does anyone know how to convert an R dataframe to a Spark dataframe to a Pandas dataframe? I wanted to get a Pandas dataframe ultimately, but I guess I need to convert to Spark first. I've been using the sparklyr library but my code did not work. T...
Hello @Paddy_chu, Here's an updated version of the R code:

```r
%r
library(sparklyr)
library(SparkR)

sc <- spark_connect(method = "databricks")

matched_rdf <- psm_tbl %>%
  select(c(code_treat, code_control)) %>%
  data.frame()

# Write the R dataframe t...
```