- 2181 Views
- 1 reply
- 0 kudos
Resolved! Visualizations not displaying
With a copy of notebook https://github.com/JohnSnowLabs/spark-nlp-workshop/blob/master/open-source-nlp/03.0.SparkNLP_Pretrained_Models.ipynb imported into Databricks, there's a lovely visualization created by the cell that you can locate by searching...
Here's a solution: use a parameter (here, `return_html=True`) to get an HTML object back, and then call `displayHTML` to actually display it: `from sparknlp_display import NerVisualizer; visualiser = NerVisualizer(); for i in text_list: ...`
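A minimal sketch of that pattern, assuming a Databricks notebook (where `displayHTML` is predefined) and a hypothetical `results` list holding the pipeline's annotated output:

```python
from sparknlp_display import NerVisualizer

visualiser = NerVisualizer()

# `results` is a hypothetical stand-in for the pipeline output iterated over in
# the original answer (e.g. one fullAnnotate() result per input text).
for result in results:
    # return_html=True makes display() return an HTML string instead of rendering it.
    html = visualiser.display(
        result,
        label_col='ner_chunk',
        document_col='document',
        return_html=True,
    )
    displayHTML(html)  # Databricks built-in that renders raw HTML in the cell output
```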
- 3303 Views
- 2 replies
- 0 kudos
How does COPY_INTO work with table restore?
I made some tests, and the restore method does NOT restore the key-store values of the target at the specific version, which means that the data that came after the chosen version cannot be inserted (unless ...
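For reference, a minimal sketch of the pattern under discussion, run from a Databricks notebook (where `spark` is predefined); the table name and source path are hypothetical, and `'force' = 'true'` makes COPY INTO re-ingest files it has already recorded as loaded:

```python
# Roll the target Delta table back to an earlier version.
spark.sql("RESTORE TABLE main.default.target_table TO VERSION AS OF 5")

# RESTORE does not reset COPY INTO's log of already-ingested files, so a plain
# COPY INTO would skip them; 'force' = 'true' re-loads files regardless of that log.
spark.sql("""
    COPY INTO main.default.target_table
    FROM '/Volumes/main/default/landing/'
    FILEFORMAT = PARQUET
    COPY_OPTIONS ('force' = 'true')
""")
```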
- 3898 Views
- 0 replies
- 0 kudos
Served model endpoint creation failed and not able to see build log
Dear Databricks Support Team, I am currently encountering an issue while attempting to serve a model using your platform. Whenever I initiate the model serving process, it fails, and I am unable to successfully deploy my model. Additionally, I am facin...
- 4718 Views
- 0 replies
- 0 kudos
workflow job
Hi, When I create a job for a machine learning model and run the job, I see that the cell outputs do not get updated. The model variables do get updated, however. I also need to keep the notebook updated with cell outputs whenever I run the job...
- 3498 Views
- 4 replies
- 4 kudos
Data science is a multidisciplinary field that uses scientific methods, processes, algorithms, and systems to extract knowledge and insights from data. It encompasses the entire data lifecycle, from data acquisition to data exploration, modeling, and...
- 8442 Views
- 0 replies
- 0 kudos
Runtime issue
Hello, I am working on a machine learning project. The dataset I am using has more than 5,000,000 rows. I am using PySpark, and the attached screenshot shows the block where I used RandomForestRegressor to train the model. It worked, even though it took a pretty l...
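As context, a minimal runnable sketch of the `pyspark.ml` pattern the post describes; the column names and toy data are hypothetical stand-ins for the poster's dataset:

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import RandomForestRegressor

spark = SparkSession.builder.getOrCreate()

# Hypothetical toy data standing in for the real 5,000,000-row dataset.
df = spark.createDataFrame(
    [(1.0, 2.0, 10.0), (2.0, 1.0, 12.0), (3.0, 5.0, 20.0), (4.0, 3.0, 22.0)],
    ["f1", "f2", "label"],
)

# Spark ML estimators expect the features packed into a single vector column.
assembled = VectorAssembler(inputCols=["f1", "f2"], outputCol="features").transform(df)

rf = RandomForestRegressor(featuresCol="features", labelCol="label", numTrees=50)
model = rf.fit(assembled)
model.transform(assembled).select("label", "prediction").show()
```

Training time at this scale is sensitive to `numTrees`, `maxDepth`, and whether the input DataFrame is cached before `fit()`.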
- 2122 Views
- 1 reply
- 0 kudos
Bootstrap timeout on instance creation
I am getting the following error...{ "reason": { "code": "BOOTSTRAP_TIMEOUT", "parameters": { "databricks_error_message": "[id: InstanceId(i-0e552e85c37c9da2d), status: INSTANCE_INITIALIZING, workerEnvId:WorkerEnvId(workerenv-12661843...
Hello, thanks for contacting Databricks Support. The error message `[Bootstrap Event] Can reach databricks-prod-artifacts-us-east-1.s3.amazonaws.com: [FAILED]` suggests an issue with reaching a Databricks-related AWS S3 bucket from your env...
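One quick way to confirm that diagnosis is a plain TCP reachability check against the bucket endpoint from inside the same VPC/subnet as the cluster nodes; a minimal sketch (the host below is the one named in the error):

```python
import socket

HOST = "databricks-prod-artifacts-us-east-1.s3.amazonaws.com"

try:
    # A successful TCP handshake on 443 rules out DNS failures and basic routing problems.
    with socket.create_connection((HOST, 443), timeout=5):
        print(f"Reached {HOST}:443")
except OSError as exc:
    print(f"Cannot reach {HOST}:443 -> {exc}")
```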
- 1626 Views
- 0 replies
- 0 kudos
ML Experiment - AttributeError: 'NoneType' object has no attribute 'url'
When running a simple ML experiment, this error appears. No idea what this could mean; has anyone else encountered this error or know what it is?
- 11302 Views
- 3 replies
- 5 kudos
Why are my MLflow results not showing up in the Experiment UI view?
The issue: None of my MLflow experiment results show up in the Experiment UI. Context: I encountered this issue recently, despite having successfully used the MLflow UI for the past few weeks. Note: I can still access the experiment runs in a notebook, ...
Hi @Robbie, Thank you for posting your question in the Databricks community. Ensure that you are logging runs correctly using mlflow.start_run(), and that all relevant metrics and artifacts are being logged with the correct parameters. Please check.
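For reference, a minimal logging pattern built around `mlflow.start_run()`; the experiment path is a hypothetical example (in a Databricks notebook, omitting `set_experiment` logs to the notebook's default experiment):

```python
import mlflow

# Hypothetical workspace path for the experiment.
mlflow.set_experiment("/Users/someone@example.com/demo-experiment")

with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("n_estimators", 100)   # parameters show up in the run's table row
    mlflow.log_metric("rmse", 0.42)         # metrics populate the charts in the UI
```

If runs land in a different experiment than the UI page being viewed, they appear to be missing, so the active experiment path is worth checking first.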
- 1193 Views
- 0 replies
- 0 kudos
Copying Models Between Metastores
Hello, We need to promote models to different environments in different regions; the models exist in Unity Catalog. We are setting up Databricks metastores across different regions. We wanted to follow this approach to copy models via UC: https://docs.gcp.data...
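For context, within a single Unity Catalog metastore the promote step itself can be done with MLflow's `copy_model_version`; a minimal sketch with hypothetical model names (cross-metastore copying, which the post asks about, needs a different mechanism such as re-registering the model in the target region):

```python
import mlflow
from mlflow import MlflowClient

mlflow.set_registry_uri("databricks-uc")  # target the Unity Catalog model registry
client = MlflowClient()

# Copy version 1 of the dev model into the prod catalog (same metastore).
client.copy_model_version(
    src_model_uri="models:/dev_catalog.ml.churn_model/1",  # hypothetical name
    dst_name="prod_catalog.ml.churn_model",                # hypothetical name
)
```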
- 5842 Views
- 1 reply
- 0 kudos
Resolved! FeatureEngineeringClient loses timestamp_keys after write_table
I am trying to use the FeatureEngineeringClient to set up a feature store table with a time series component. However, after initiating the table with a time series column, the key exists, but the key is removed after adding data to the table. Therefo...
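A minimal sketch of the create-then-write flow being described, assuming the `databricks-feature-engineering` package and a Databricks notebook (where `spark` is predefined); note the time-series argument has been named differently across client versions (`timeseries_columns` on FeatureEngineeringClient vs. `timestamp_keys` on the older FeatureStoreClient), so treat the exact parameter name as an assumption:

```python
from pyspark.sql.functions import to_timestamp
from databricks.feature_engineering import FeatureEngineeringClient

fe = FeatureEngineeringClient()

# Hypothetical toy feature data with a proper timestamp column.
features_df = (
    spark.createDataFrame(
        [("s1", "2024-01-01 00:00:00", 0.5)],
        ["sensor_id", "ts", "reading"],
    )
    .withColumn("ts", to_timestamp("ts"))
)

fe.create_table(
    name="main.ml.sensor_features",     # hypothetical Unity Catalog table name
    primary_keys=["sensor_id", "ts"],
    timeseries_columns=["ts"],          # assumption: the time-series arg on this client
    df=features_df,
)

# The post reports the time-series key disappearing after a subsequent write:
fe.write_table(name="main.ml.sensor_features", df=features_df, mode="merge")
```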
- 1847 Views
- 0 replies
- 0 kudos
Registered models - edit creator
Hi! I'm using the MLflow API to log and load models in Databricks. When we created a dedicated workspace for the model registry, one person created multiple models, and for some reason all models are now logged with this person as the creator. This person no ...
- 6847 Views
- 2 replies
- 4 kudos
Dynamic variable and multi-instance tasks.
1. How to pass dynamic variable values like "sysdate" to job parameters, so that they automatically pick up the updated values on the fly. 2. How to run multiple instances of a set of tasks in a job (for different parameters)? E.g., the same pipeline ...
Hey Maverick1, Did you find a solution for your second question? I have the same approach in mind. Databricks has workflows, job clusters, tasks, etc. I'm trying to create one job cluster with one configuration or specification which has a workflow ...
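On the first question, Databricks jobs expose task parameter variables such as `{{start_date}}` that are resolved at run time; a minimal notebook-side sketch, assuming the job's task parameters map a widget named `run_date` to that variable and that this runs as a Databricks notebook task (where `dbutils` is predefined):

```python
# In the job configuration (assumption): notebook parameter run_date = {{start_date}},
# so each scheduled run receives its own date without manual updates.
dbutils.widgets.text("run_date", "")           # empty default for interactive runs
run_date = dbutils.widgets.get("run_date")

print(f"Processing data for: {run_date or 'no date parameter passed'}")
```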