Support FAQs
Find answers to common questions and troubleshoot issues with Databricks support FAQs. Access helpful resources, tips, and solutions to resolve technical challenges and enhance your Databricks experience.

Knowledge Base Articles

by Sujitha (Databricks Employee)
  • 3552 Views
  • 0 comments
  • 0 kudos

Help Center login changes (August 3rd, 2024)

The purpose of this FAQ document is to provide users and partners with answers to common queries or concerns related to the Databricks Help Center Single Sign-On process.  I am a Databricks customer. Does anything change how I access Databricks plat...

by Sujitha (Databricks Employee)
  • 1503 Views
  • 0 comments
  • 1 kudos

Databricks Partner Portal SSO (August 3rd, 2024)

The purpose of this FAQ document is to provide users and partners with answers to common queries or concerns related to the Databricks Help Center Single Sign-On process.  I am a Databricks partner. Does anything change how I access Databricks Platf...

by Sujitha (Databricks Employee)
  • 3510 Views
  • 0 comments
  • 1 kudos

Databricks Community SSO (August 3rd, 2024)

The purpose of this FAQ document is to provide users and partners with answers to common queries or concerns related to the Databricks Community Single Sign-On process. Why am I being asked to update my primary and recovery email? Primary email: This...

by Adam_Pavlacka (Databricks Employee)
  • 1337 Views
  • 0 comments
  • 0 kudos

What is the best way to train a deep learning model to ensure we do not encounter out of memory (OOM) errors?

You should use distributed training. By distributing the training workload among GPUs or worker nodes, you can optimize resource utilization and reduce the likelihood of ConnectionException errors and out of memory (OOM) issues. A good option for di...
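The full answer above is truncated; as a rough illustration of the approach it describes, here is a minimal sketch of distributing a PyTorch training loop with TorchDistributor (assumes Spark 3.4+ on a GPU cluster; the train() function and its hyperparameters are hypothetical placeholders):

from pyspark.ml.torch.distributor import TorchDistributor

def train(learning_rate, num_epochs):
    # Each spawned process runs this function: build the model, wrap it in
    # DistributedDataParallel, and train on its own shard of the data.
    ...

# Spread the workload across four GPU-backed processes so no single device
# has to hold the entire batch in memory.
distributor = TorchDistributor(num_processes=4, local_mode=False, use_gpu=True)
distributor.run(train, 1e-3, 10)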

by Adam_Pavlacka (Databricks Employee)
  • 1464 Views
  • 0 comments
  • 0 kudos

What algorithms does AutoML support?

AutoML supports binary/multiple classification, regression, and forecasting models. For more details, please review the How Databricks AutoML works (AWS | Azure | GCP) documentation.
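For illustration, a minimal sketch of starting an AutoML classification run from a notebook (the table and target column below are hypothetical; automl.regress() and automl.forecast() follow the same pattern):

from databricks import automl

df = spark.table("samples.default.loans")   # hypothetical training data
summary = automl.classify(
    dataset=df,
    target_col="defaulted",
    timeout_minutes=30,
)
print(summary.best_trial.model_path)  # MLflow URI of the best model found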

by Adam_Pavlacka (Databricks Employee)
  • 1455 Views
  • 0 comments
  • 0 kudos

When deploying and running several jobs at the same time we get the error: REQUEST_LIMIT_EXCEEDED: Your request was rejected since cluster creation, start and upsize requests within your organization have exceeded the rate limit of 100 nodes per minute. Please retry your request later, or choose a larger node type instead.

There is a rate limit of 100 nodes per minute. To ensure you do not exceed this limit, you should make adjustments to the deployment and execution of your ML jobs. Distribute recurring workflows evenly over the planned time period. To ensure complian...

by Adam_Pavlacka (Databricks Employee)
  • 2242 Views
  • 0 comments
  • 0 kudos

My job is failing with a "ModuleNotFoundError: No module named 'tkinter'" error during model training.

To install the tkinter package, you can run the following shell command in a notebook: %sh sudo apt-get install -y python3-tk. To install the package automatically on every cluster start, you can add the command to a cluster-scoped init script.
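A minimal sketch of the init-script approach, assuming cluster-scoped init scripts are stored on a Unity Catalog volume (the path below is hypothetical); attach the script to the cluster under Advanced Options > Init Scripts:

dbutils.fs.put(
    "/Volumes/main/default/init_scripts/install-python3-tk.sh",
    """#!/bin/bash
set -e
sudo apt-get update
sudo apt-get install -y python3-tk
""",
    overwrite=True,
)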

by Adam_Pavlacka (Databricks Employee)
  • 1783 Views
  • 0 comments
  • 0 kudos

When using mlflow.spark.log_model() I get a "failed to save spark model via mlflowdbfs" error.

The trace indicates a statusCode=401 error caused by com.databricks.mlflowdbfs.MlflowHttpException. You need to disable mlflowdbfs in an environment variable before executing log_model(). Example code: import os; os.environ["DISABLE_MLFLOWDBFS"] = ...
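A minimal end-to-end sketch of the workaround, assuming the variable takes the string value "true" (the value is cut off in the summary above) and that `model` is a fitted Spark ML pipeline:

import os
import mlflow

os.environ["DISABLE_MLFLOWDBFS"] = "true"   # must be set before log_model() runs

with mlflow.start_run():
    mlflow.spark.log_model(model, artifact_path="spark-model")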

by Adam_Pavlacka (Databricks Employee)
  • 2890 Views
  • 0 comments
  • 0 kudos

While executing the regression RECIPE with Jinja2 we get TemplateNotFound errors with recipe_dag_template.html or base.html

The errors occur for resources/recipe_dag_template.html with the inspect() method and for base.html with the remaining methods. To resolve TemplateNotFound errors and ensure successful display of results while executing the regression RECIPE with Ji...

by Adam_Pavlacka (Databricks Employee)
  • 1759 Views
  • 0 comments
  • 0 kudos

Unable to load the xgboost model after logging via mlflow. You get a "ModuleNotFoundError: No module named 'ml'" error message.

When using MLflow to log a model, be aware of warnings like the one below: WARNING mlflow.utils.requirements_utils: The following packages were not found in the public PyPI package index as of 2022-12-21; if these packages are not present in the pub...
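The summary above is truncated; one common way to handle this class of warning (not necessarily the article's exact fix) is to bundle the custom module with the logged model and declare requirements explicitly, assuming a recent MLflow version where the xgboost flavor accepts code_paths. Paths and versions below are hypothetical:

import mlflow

with mlflow.start_run():
    mlflow.xgboost.log_model(
        xgb_model=model,
        artifact_path="model",
        code_paths=["./ml"],                        # ship the local `ml` package alongside the model
        extra_pip_requirements=["xgboost==1.7.6"],  # declare anything pip cannot infer automatically
    )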

by Adam_Pavlacka (Databricks Employee)
  • 1659 Views
  • 0 comments
  • 0 kudos

While using the model serving API we get a "BAD_REQUEST: Encountered an unexpected error while evaluating the model. Verify that the serialized input Dataframe is compatible with the model for inference" error message.

You should review the following items: Check the request origin: Ensure you're making the request from the intended source. Verify if the request is originating from the expected location. Payload mismatch: Confirm that the payload sent in the POST ...
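For reference, a minimal sketch of a scoring request whose payload matches the model's signature; the workspace URL, endpoint name, token, and column names are hypothetical placeholders:

import requests

payload = {
    "dataframe_split": {
        "columns": ["age", "income"],            # must match the model signature
        "data": [[42, 55000.0], [31, 72000.0]],
    }
}

resp = requests.post(
    "https://<workspace-url>/serving-endpoints/my-endpoint/invocations",
    headers={"Authorization": "Bearer <token>", "Content-Type": "application/json"},
    json=payload,
)
resp.raise_for_status()
print(resp.json())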

by Adam_Pavlacka (Databricks Employee)
  • 1900 Views
  • 0 comments
  • 0 kudos

When deploying a serverless model, we get "Signal 9" when using small/medium clusters or "Workspace quota exhausted" when using large clusters.

Identifying the root cause of worker termination involves analyzing signals that can provide insights into the issue. Typically, these problems are associated with memory pressure, but understanding the specific events, workload type, and workload s...

by Adam_Pavlacka (Databricks Employee)
  • 2916 Views
  • 0 comments
  • 1 kudos

I am running MLflow with a service principal and getting a "RestException: RESOURCE_DOES_NOT_EXIST: Node ID xxxxxxxxxxxx does not exist" error message.

If you view a stack trace and it looks similar to the following: RestException Traceback (most recent call last) File <command-XXXXXXXXXXXX>:72 mlflow.sklearn.autolog() ... File /databricks/python/lib/python3.9/site-packages/mlflow/tracking/fluent.py:...

by Adam_Pavlacka (Databricks Employee)
  • 1423 Views
  • 0 comments
  • 0 kudos

When attempting to create a serving endpoint for a custom ONNX model I get a "Container creation failed" error message.

A Container creation failed error message usually means there are missing dependencies. Check the pip requirements to see if there are any missing dependencies. Adding the required dependencies should resolve the error.
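A minimal sketch of declaring dependencies when logging the model, assuming the missing package is onnxruntime (the actual missing dependency will vary) and that `onnx_model` is the model object being logged:

import mlflow

with mlflow.start_run():
    mlflow.onnx.log_model(
        onnx_model=onnx_model,
        artifact_path="model",
        pip_requirements=["onnx", "onnxruntime", "numpy"],  # everything the model needs at serving time
    )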

by Adam_Pavlacka (Databricks Employee)
  • 1890 Views
  • 0 comments
  • 0 kudos

How do I delete rows in the feature store?

Feature store tables are materialized tables backed by Delta tables. To delete data from a feature store table, run a DELETE command on the rows in the underlying Delta table, filtering on the partition column.
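A minimal sketch, with a hypothetical feature table and partition column; the DELETE runs against the underlying Delta table rather than through the feature store API:

spark.sql("""
    DELETE FROM feature_db.customer_features
    WHERE region = 'EMEA'   -- partition column used to target the rows
""")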

by Adam_Pavlacka (Databricks Employee)
  • 1458 Views
  • 0 comments
  • 0 kudos

How can I access information offline about existing features in the feature store, like feature engineering logic?

The Databricks feature store provides a catalog that enables data scientists to search for existing features in the offline feature store. The feature store UI offers a searchable interface, allowing you to discover features and view the code used fo...
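The same metadata can also be pulled programmatically; a minimal sketch, assuming the Feature Store client is installed on the cluster and using a hypothetical table name:

from databricks.feature_store import FeatureStoreClient

fs = FeatureStoreClient()
table = fs.get_table("feature_db.customer_features")
print(table.description)         # free-text description of the table
print(table.features)            # feature column names
print(table.notebook_producers)  # notebooks that compute these features (lineage)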

by Adam_Pavlacka (Databricks Employee)
  • 1556 Views
  • 0 comments
  • 0 kudos

How can I resolve CUDA OOM issues while performing batch inference in notebooks using ML models?

The potential root cause could be high GPU utilization while running a live experiment. This can be validated both by using the Spark UI and by using the nvidia-smi command. If a single GPU is explicitly used, this might cause an overload and hence...
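A minimal sketch for checking free GPU memory from the notebook before batch inference, assuming PyTorch is installed on the cluster:

import torch

if torch.cuda.is_available():
    free_bytes, total_bytes = torch.cuda.mem_get_info()
    print(f"GPU memory free: {free_bytes / 1e9:.1f} GB of {total_bytes / 1e9:.1f} GB")
    # If little memory is free, reduce the inference batch size or wait for the
    # live experiment sharing this GPU to finish before scoring.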

by Adam_Pavlacka (Databricks Employee)
  • 1431 Views
  • 0 comments
  • 0 kudos

What is the definition of "scoring" in relation to the default scoring requests limit of 200 QPS? When batching 10 data points to be scored in a single request, is it counted as a single scoring request or as 10?

It is treated as a single request when batching 10 data points. This is because the batching process involves making a single endpoint request to score multiple data points simultaneously.
