The purpose of this FAQ document is to provide users and partners with answers to common queries or concerns related to the Databricks Help Center Single Sign-On process.
I am a Databricks customer. Does anything change how I access Databricks plat...
The purpose of this FAQ document is to provide users and partners with answers to common queries or concerns related to the Databricks Help Center Single Sign-On process.
I am a Databricks partner. Does anything change how I access Databricks Platf...
The purpose of this FAQ document is to provide users and partners with answers to common queries or concerns related to the Databricks Community Single Sign-On process.
Why am I being asked to update my primary and recovery email?
Primary email: This...
If you are getting No module named 'textdistance' errors, you need to install the textdistance library.
This can be done at the cluster level or the session level.
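For example, a session-scoped (notebook-scoped) install can be run directly in a notebook cell, while a cluster-level install would go through the cluster's Libraries tab or an init script:
%pip install textdistance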
You should use distributed training.
By distributing the training workload among GPUs or worker nodes, you can optimize resource utilization and reduce the likelihood of ConnectionException errors and out of memory (OOM) issues.
A good option for di...
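As a hedged sketch only (the truncated excerpt above does not name a specific library), one distributed-training option on Databricks ML runtimes is TorchDistributor; the training function and its parameters here are hypothetical:
from pyspark.ml.torch.distributor import TorchDistributor

def train_fn(learning_rate):
    # Hypothetical per-worker training loop; build the model and run torch.distributed training here.
    ...

# Spread the work across two GPU-backed processes so no single worker bears the full memory load.
TorchDistributor(num_processes=2, local_mode=False, use_gpu=True).run(train_fn, 1e-3)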
AutoML supports binary and multiclass classification, regression, and forecasting models.
For more details, please review the How Databricks AutoML works (AWS | Azure | GCP) documentation.
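As a quick sketch (the DataFrame and column names are hypothetical), a classification experiment can be started from the AutoML Python API:
from databricks import automl

# Train classifiers against the hypothetical train_df DataFrame, predicting the "label" column.
summary = automl.classify(dataset=train_df, target_col="label", timeout_minutes=30)
print(summary.best_trial.model_path)  # URI of the best model found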
There is a rate limit of 100 nodes per minute. To ensure you do not exceed this limit, you should make adjustments to the deployment and execution of your ML jobs.
Distribute recurring workflows evenly over the planned time period
To ensure complian...
To install the tkinter package, you can run the following shell command in a notebook:
%sh sudo apt-get install -y python3-tk
To install the package automatically on every cluster start, you can add the command to a cluster-scoped init script.
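For example, a minimal sketch of writing such an init script from a notebook (the storage path is an assumption; reference it under the cluster's Advanced options > Init Scripts):
script = "#!/bin/bash\nsudo apt-get update\nsudo apt-get install -y python3-tk\n"
dbutils.fs.put("/Volumes/main/default/init_scripts/install-python3-tk.sh", script, True)  # assumed path; True = overwrite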
The trace indicates a statusCode=401 error caused by com.databricks.mlflowdbfs.MlflowHttpException.
You need to disable mlflowdbfs via an environment variable before executing log_model().
Example code:
import os
os.environ["DISABLE_MLFLOWDBFS"] = ...
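A fuller sketch, assuming the variable only needs to be set to a truthy string and that model is a hypothetical fitted estimator:
import os
os.environ["DISABLE_MLFLOWDBFS"] = "true"  # assumption: a truthy string disables mlflowdbfs

import mlflow
with mlflow.start_run():
    mlflow.sklearn.log_model(model, "model")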
The errors occur for resources/recipe_dag_template.html with the inspect() method and for base.html with the remaining methods.
To resolve TemplateNotFound errors and ensure successful display of results while executing the regression RECIPE with Ji...
When using MLflow to log a model, be aware of warnings like the one below:
WARNING mlflow.utils.requirements_utils: The following packages were not found in the public PyPI package index as of 2022-12-21; if these packages are not present in the pub...
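One way to address the warning (a sketch; the package names below are hypothetical) is to declare the model's requirements explicitly when logging it:
import mlflow

mlflow.sklearn.log_model(
    model,  # hypothetical fitted estimator
    "model",
    pip_requirements=["scikit-learn==1.1.1", "my-internal-package==0.3.0"],  # internal packages must still be resolvable wherever the model is served
)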
You should review the following items:
- Check the request origin: Ensure the request is being made from the intended source and is originating from the expected location.
- Payload mismatch: Confirm that the payload sent in the POST ...
Identifying the root cause of worker termination involves analyzing signals that can provide insights into the issue. Typically, these problems are associated with memory pressure, but understanding the specific events, workload type, and workload s...
If you view a stack trace and it looks similar to the following:
RestException                              Traceback (most recent call last)
File <command-XXXXXXXXXXXX>:72 mlflow.sklearn.autolog()
...
File /databricks/python/lib/python3.9/site-packages/mlflow/tracking/fluent.py:...
A Container creation failed error message usually means there are missing dependencies.
Check the pip requirements to see if there are any missing dependencies. Adding the required dependencies should resolve the error.
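For example, you can list the dependencies that were logged with the model and compare them with what the container build expects (the model URI below is hypothetical):
import mlflow.pyfunc

# Downloads the model's requirements file and returns its local path.
requirements_path = mlflow.pyfunc.get_model_dependencies("models:/my_model/1")
print(open(requirements_path).read())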
Feature store tables are materialized tables backed by Delta tables.
To delete data from a feature store table, run a DELETE command on the rows in the underlying Delta table, filtering on the partition column.
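For example, a minimal sketch (the table and partition column names are hypothetical):
spark.sql("""
  DELETE FROM feature_store.customer_features
  WHERE region = 'EMEA'  -- partition column used to target the rows to remove
""")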
The Databricks feature store provides a catalog that enables data scientists to search for existing features in the offline feature store. The feature store UI offers a searchable interface, allowing you to discover features and view the code used fo...
The potential root cause could be high GPU utilization while running a live experiment. This can be validated both with the Spark UI and with the nvidia-smi command.
If a single GPU is explicitly used, this might cause an overload and hence...
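For example, GPU utilization can be checked directly from a notebook cell:
%sh nvidia-smi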
Batching 10 data points is treated as a single request, because the batching process makes one endpoint request that scores multiple data points simultaneously.
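For example, a sketch of one scoring call carrying 10 records (the endpoint name, workspace URL, and feature names are hypothetical):
import requests

payload = {"dataframe_records": [{"feature_1": i, "feature_2": i * 2.0} for i in range(10)]}
response = requests.post(
    "https://<workspace-url>/serving-endpoints/my-endpoint/invocations",  # placeholder URL
    headers={"Authorization": "Bearer <personal-access-token>"},
    json=payload,
)
print(response.json())  # all 10 predictions come back from this single request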