Support FAQs

Knowledge Base Articles

by Adam_Pavlacka (Community Manager)
  • 203 Views
  • 0 comments
  • 0 kudos

What is the best way to train a deep learning model to ensure we do not encounter out-of-memory (OOM) errors?

You should use distributed training. By distributing the training workload among GPUs or worker nodes, you can optimize resource utilization and reduce the likelihood of ConnectionException errors and out of memory (OOM) issues. A good option for di...
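The answer above is truncated, but the core idea of distributing a training workload can be illustrated with a minimal, framework-free sketch. This is a hypothetical illustration of data sharding only; it is not the Databricks API, and a real setup would use a distributed-training framework on the cluster.

```python
# Hypothetical sketch: shard the training data across workers so that no
# single worker has to hold (or compute over) the full dataset at once.
def shard(dataset, num_workers):
    """Split `dataset` into `num_workers` near-equal shards (round-robin)."""
    shards = [[] for _ in range(num_workers)]
    for i, example in enumerate(dataset):
        shards[i % num_workers].append(example)
    return shards

data = list(range(10))          # stand-in for a large training set
workers = shard(data, 4)
peak = max(len(s) for s in workers)
print(peak)                     # each worker holds at most 3 examples, not 10
```

Round-robin assignment keeps shard sizes within one example of each other, which is why per-worker memory drops roughly linearly with the number of workers.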

by Adam_Pavlacka (Community Manager)
  • 187 Views
  • 0 comments
  • 0 kudos

What algorithms does AutoML support?

AutoML supports binary/multiple classification, regression, and forecasting models. For more details, please review the How Databricks AutoML works (AWS | Azure | GCP) documentation.

by Adam_Pavlacka (Community Manager)
  • 143 Views
  • 0 comments
  • 0 kudos

When deploying and running several jobs at the same time we get the error: REQUEST_LIMIT_EXCEEDED: Your request was rejected since cluster creation, start and upsize requests within your organization have exceeded the rate limit of 100 nodes per minute. Please retry your request later, or choose a larger node type instead.

There is a rate limit of 100 nodes per minute. To ensure you do not exceed this limit, adjust the deployment and execution of your ML jobs. Distribute recurring workflows evenly over the planned time period. To ensure complian...
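One way to "distribute workflows evenly" is a client-side scheduling step that groups cluster requests into per-minute batches under the stated limit. This is a hedged sketch, not a Databricks feature; the 100-nodes-per-minute figure comes from the error message above, and all function names are hypothetical.

```python
# Hypothetical client-side throttle: group requested node counts into
# per-minute batches so the organization stays under the rate limit.
LIMIT_NODES_PER_MIN = 100

def schedule_batches(node_requests, limit=LIMIT_NODES_PER_MIN):
    """Group node-count requests into consecutive batches, each <= limit."""
    batches, current, used = [], [], 0
    for nodes in node_requests:
        if used + nodes > limit and current:
            batches.append(current)   # start a new "minute"
            current, used = [], 0
        current.append(nodes)
        used += nodes
    if current:
        batches.append(current)
    return batches

# Six jobs asking for 40 nodes each get spread over three minutes.
plan = schedule_batches([40, 40, 40, 40, 40, 40])
print(len(plan))  # 3
```

In practice the same effect is usually achieved by staggering job start times rather than batching requests in code.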

by Adam_Pavlacka (Community Manager)
  • 277 Views
  • 0 comments
  • 0 kudos

When using mlflow.spark.log_model() I get a "failed to save spark model via mlflowdbfs" error.

The trace indicates a statusCode=401 error caused by com.databricks.mlflowdbfs.MlflowHttpException. You need to disable mlflowdbfs via an environment variable before executing log_model(). Example code: import os; os.environ["DISABLE_MLFLOWDBFS"] = ...
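The example code in the answer is truncated; the assigned value is not shown. A minimal sketch of the workaround follows, with `"True"` as an assumed value (the article cuts off before stating it), set before the model-logging call:

```python
import os

# Disable mlflowdbfs before calling mlflow.spark.log_model().
# NOTE: the value "True" is an assumption; the original article truncates here.
os.environ["DISABLE_MLFLOWDBFS"] = "True"

# mlflow.spark.log_model(spark_model, "model")  # would now bypass mlflowdbfs
```

The environment variable must be set in the same process before the logging call runs, which is why it appears above the (commented) `log_model()` line.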

by Adam_Pavlacka (Community Manager)
  • 241 Views
  • 0 comments
  • 0 kudos

While executing the regression recipe with Jinja2 we get TemplateNotFound errors for recipe_dag_template.html or base.html

The errors occur for resources/recipe_dag_template.html with the inspect() method and for base.html with the remaining methods. To resolve TemplateNotFound errors and ensure successful display of results while executing the regression recipe with Ji...

by Adam_Pavlacka (Community Manager)
  • 225 Views
  • 0 comments
  • 0 kudos

Unable to load the xgboost model after logging via mlflow. You get a "ModuleNotFoundError: No module named 'ml'" error message.

When using MLflow to log a model, be aware of warnings like the one below:

WARNING mlflow.utils.requirements_utils: The following packages were not found in the public PyPI package index as of 2022-12-21; if these packages are not present in the pub...

by Adam_Pavlacka (Community Manager)
  • 234 Views
  • 0 comments
  • 0 kudos

While using the model serving API we get a "BAD_REQUEST: Encountered an unexpected error while evaluating the model. Verify that the serialized input Dataframe is compatible with the model for inference" error message.

You should review the following items:
  • Check the request origin: ensure you're making the request from the intended source and verify that it originates from the expected location.
  • Payload mismatch: confirm that the payload sent in the POST ...
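A common cause of this error is a payload whose columns do not match the model's signature. As a hedged sketch, the snippet below builds a request body in MLflow's `dataframe_split` serving format and validates row shape before sending; the column names are invented for illustration, and a real request would also need the endpoint URL and auth headers.

```python
import json

# Columns the model's signature expects (assumed for illustration).
expected_columns = ["age", "income"]

def build_payload(rows, columns):
    """Serialize rows into MLflow's `dataframe_split` serving format,
    rejecting rows whose width does not match the expected columns."""
    for row in rows:
        if len(row) != len(columns):
            raise ValueError(f"row {row!r} does not match columns {columns}")
    return json.dumps({"dataframe_split": {"columns": columns, "data": rows}})

payload = build_payload([[42, 55000.0], [31, 48000.0]], expected_columns)
print(payload)
```

Validating the payload client-side surfaces a clear ValueError instead of an opaque BAD_REQUEST from the serving endpoint.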

by Adam_Pavlacka (Community Manager)
  • 225 Views
  • 0 comments
  • 0 kudos

When deploying a serverless model, we get "Signal 9" when using small/medium clusters or "Workspace quota exhausted" when using large clusters.

Identifying the root cause of worker termination involves analyzing signals that can provide insights into the issue. Typically, these problems are associated with memory pressure, but understanding the specific events, workload type, and workload s...

by Adam_Pavlacka (Community Manager)
  • 282 Views
  • 0 comments
  • 0 kudos

I am running MLflow with a service principal and getting a "RestException: RESOURCE_DOES_NOT_EXIST: Node ID xxxxxxxxxxxx does not exist" error message.

If you view a stack trace and it looks similar to the following:

RestException Traceback (most recent call last)
File <command-XXXXXXXXXXXX>:72
mlflow.sklearn.autolog()
...
File /databricks/python/lib/python3.9/site-packages/mlflow/tracking/fluent.py:...

by Adam_Pavlacka (Community Manager)
  • 210 Views
  • 0 comments
  • 0 kudos

How do I delete rows in the feature store?

Feature store tables are materialized tables backed by Delta tables. To delete data from a feature store table, run a DELETE command on the rows in the underlying Delta table, based on the partition column.

by Adam_Pavlacka (Community Manager)
  • 181 Views
  • 0 comments
  • 0 kudos

How can I access information offline about existing features in the feature store, like feature engineering logic?

The Databricks feature store provides a catalog that enables data scientists to search for existing features in the offline feature store. The feature store UI offers a searchable interface, allowing you to discover features and view the code used fo...

by Adam_Pavlacka (Community Manager)
  • 209 Views
  • 0 comments
  • 0 kudos

How can I resolve CUDA OOM issues while performing batch inference in notebooks using ML models?

The potential root cause could be high GPU utilization while running a live experiment. This can be validated both in the Spark UI and with the nvidia-smi command. If a single GPU is explicitly used, this might cause an overload and hence...
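A common mitigation for CUDA OOM during batch inference is to score the data in fixed-size chunks so the GPU never materializes the whole dataset at once. The sketch below is a framework-free illustration of that pattern; `infer` is a stand-in for a real GPU model call, and the batch size would be tuned to the available GPU memory.

```python
# Hypothetical mitigation: run batch inference in fixed-size chunks so the
# accelerator never has to hold the entire dataset's activations at once.
def chunked(inputs, batch_size):
    """Yield successive `batch_size`-sized slices of `inputs`."""
    for start in range(0, len(inputs), batch_size):
        yield inputs[start:start + batch_size]

def infer(batch):
    # Stand-in for model(batch) executed on the GPU.
    return [x * 2 for x in batch]

inputs = list(range(7))
predictions = [y for batch in chunked(inputs, 3) for y in infer(batch)]
print(predictions)  # [0, 2, 4, 6, 8, 10, 12]
```

Because results are accumulated per chunk, peak memory is bounded by the batch size rather than by the dataset size.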

by Adam_Pavlacka (Community Manager)
  • 200 Views
  • 0 comments
  • 0 kudos

What is the definition of "scoring" in relation to the default scoring requests limit of 200 QPS? When batching 10 data points to be scored in a single request, is it counted as a single scoring request or as 10?

It is treated as a single request when batching 10 data points. This is because the batching process involves making a single endpoint request to score multiple data points simultaneously.
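The accounting described above can be sketched with a toy counter: one endpoint call scores many data points but consumes only one request against the QPS limit. The `CountingEndpoint` class below is purely illustrative, not a real serving client.

```python
# Illustration of the QPS accounting described above: batching N data points
# into one endpoint call consumes one request toward the limit, not N.
class CountingEndpoint:
    def __init__(self):
        self.requests = 0

    def score(self, batch):
        self.requests += 1          # one HTTP call, regardless of batch size
        return [x + 1 for x in batch]

endpoint = CountingEndpoint()
endpoint.score(list(range(10)))     # 10 data points, batched into one call
print(endpoint.requests)            # 1
```

Under a 200 QPS limit, batching therefore raises effective throughput to 200 multiplied by the batch size in data points per second.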
