Getting this error in the Experiments tab of a Databricks notebook: "There was an error loading the runs. The experiment resource may no longer exist or you no longer have permission to access it." Here is the code I am using: mlflow.tensorflow.autolog()
with m...
Hi @AmanJain1008, Thank you for posting your question in the Databricks Community. Could you kindly check whether you are able to reproduce the issue with the code example below: # Import Libraries
import pandas as pd
import numpy as np
import mlflow
...
New to Databricks, an R user trying to figure out how to load a Hive table via sparklyr. The path to the file is https://databricks.xxx.xx.gov/#table/xxx_mydata/mydata_etl (from right-clicking on the file). I tried data_tbl <- tb...
Hi @JefferyReichman, Not sure that I completely understood your last question about "where I can read up on this for getting started". However, you can start by running this code in a Databricks Community Edition notebook. For more details: Link
Error stack trace: TypeError: Descriptors cannot not be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some o...
Please find the resolution below: install a compatible protobuf version on the cluster by pinning protobuf==3.20.1 in the cluster libraries. Reference: https://github.com/tensorflow/tensorflow/issues/60320
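If pinning the library is not immediately possible, the error message itself suggests a second workaround: forcing protobuf to use its pure-Python implementation. A minimal sketch (assuming the environment variable is set before tensorflow or any protobuf-generated module is imported):

```python
import os

# Workaround from the protobuf error message: fall back to the pure-Python
# protobuf implementation. This is slower than the C++ descriptors but avoids
# the "Descriptors cannot not be created directly" TypeError. It must be set
# before importing tensorflow or any *_pb2 module.
os.environ["PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"] = "python"

# The preferred fix remains pinning the library in the cluster libraries,
# e.g. in a notebook cell: %pip install protobuf==3.20.1
print(os.environ["PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"])
```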
After exploring the Feature Store and how it works, I have some concerns: 1. With each data refresh, there are possibilities for a change in feature values. Does the Databricks Feature Store allow altering the feature table in case the feature values have c...
I am still lost on Spark and deep learning models. If I have a (2D) time series that I want to use for e.g. an LSTM model, I first convert it to a 3D array and then pass it to the model. This is normally done in memory with numpy. But what hap...
Hi! I guess you've already solved this issue (your question was posted more than a year ago), but you might be interested in reading https://learn.microsoft.com/en-gb/azure/databricks/machine-learning/train-model/dl-best-practices There are s...
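The 2D-to-3D conversion mentioned in the question can be sketched with plain numpy. This is a generic sliding-window reshape (the function name and window length are illustrative, not from the original post):

```python
import numpy as np

def make_windows(series_2d, window):
    """Slice a (timesteps, features) array into overlapping windows of
    shape (n_windows, window, features) - the 3D layout an LSTM expects."""
    n_steps, n_features = series_2d.shape
    n_windows = n_steps - window + 1
    return np.stack([series_2d[i:i + window] for i in range(n_windows)])

# 10 timesteps with 2 features -> 8 windows of length 3
X = make_windows(np.arange(20).reshape(10, 2), window=3)
print(X.shape)  # (8, 3, 2)
```

For data that does not fit in memory, the linked best-practices article discusses distributed alternatives; the in-memory reshape above is only the single-node baseline.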
AI is trending more than any other technology today, and it is expanding rapidly in ways that benefit people, for example in EVs, smart homes, highly optimized PCs, and robotics, which is growing quickly because of the boom in AI.
I am interested in the Databricks Machine Learning Associate certification examination. Are any ongoing event vouchers, discounts, or free voucher opportunities available for it? I would greatly appreciate...
Hi @manupmanoos, Please check the code below on how to load the saved model back from the S3 bucket: import boto3
import os
from keras.models import load_model
# Set credentials and create S3 client
aws_access_key_id = dbutils.secrets.get(scope="<scope...
My company is using Delta Lake to extract customer insights and run batch scoring with ML models. I need to expose this data to some microservices through gRPC and REST APIs. How can I do this? I'm thinking of building Spark pipelines to extract the data, stor...
Hey everyone, It's awesome that your company is utilizing Delta Lake for extracting customer insights and running batch scoring with ML models. I can totally relate to the excitement and challenges of dealing with data integration for microservices and...
Hi @aishashok, Thank you for posting your question in the Databricks community. Yes, Databricks' new Lakehouse products like Databricks SQL Analytics, SQL Runtime, and Delta Lake can be used for a variety of data engineering and analytics use cases, in...
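One common pattern for the "expose batch scores via REST" part of the question is to export the scored table to a low-latency store and put a thin HTTP service in front of it. A minimal stdlib-only sketch (the scores dict, handler name, and URL path are hypothetical; in practice the scores would be refreshed from the Delta Lake batch-scoring pipeline):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical in-memory copy of batch scores exported from Delta Lake.
# In a real deployment this would be a key-value store refreshed by the
# Spark batch-scoring pipeline.
SCORES = {"cust-1": 0.87, "cust-2": 0.42}

def lookup_score(customer_id):
    """Return the batch score for a customer, or None if unknown."""
    return SCORES.get(customer_id)

class ScoreHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Expected path format: /scores/<customer_id>
        customer_id = self.path.rsplit("/", 1)[-1]
        score = lookup_score(customer_id)
        if score is None:
            self.send_response(404)
            self.end_headers()
            return
        body = json.dumps({"customer_id": customer_id, "score": score})
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode())

# To serve: HTTPServer(("0.0.0.0", 8080), ScoreHandler).serve_forever()
```

A gRPC front end would follow the same shape, with `lookup_score` behind a generated service stub instead of an HTTP handler.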
Hi, I have gone through the Databricks Assistant article https://docs.databricks.com/notebooks/notebook-assistant-faq.html It clearly states that: Q: How do I enable Databricks Assistant? An account administrator must enable Databricks Assis...
Hi, I'm using Databricks Feature Store to register a custom model using a model wrapper as follows: # Log custom model to MLflow
fs.log_model(
    artifact_path="model",
    model=production_model,
    flavor=mlflow.pyfunc,
    training_set=training_s...
Hi @SOlivero, Make sure that the model was in fact saved at the provided URI. The latest keyword will retrieve the most recently registered version of the model when mlflow.pyfunc.load_model('models:/model_name/latest') is executed, not the highest version....
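The distinction can be made concrete with the two MLflow registry URI forms (the model name and version number below are hypothetical):

```python
# MLflow model-registry URIs: "latest" resolves to the most recently
# registered version, while a numeric suffix pins an exact version.
model_name = "my_model"  # hypothetical registered model name

latest_uri = f"models:/{model_name}/latest"  # most recently registered version
pinned_uri = f"models:/{model_name}/3"       # explicit version pin

# With mlflow installed and a registry available, either URI can be loaded:
# model = mlflow.pyfunc.load_model(pinned_uri)
print(latest_uri, pinned_uri)
```

Pinning an explicit version is the safer choice in production, since "latest" can silently change when a new version is registered.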
Hello, Is there a way to integrate Hyperopt with Ray parallelisation? I have a simulation framework which I want to optimise, and each simulation run is set up to be a Ray process; however, I am calling one simulation run in the objective function. Thi...
Hi @EmirHodzic, Thank you for posting your question in the Databricks community. You can use Ray Tune, a tuning library that integrates with Ray, to parallelize your Hyperopt trials across multiple nodes. Here's a link to the documentation for HyperOpt...
Hi, We provisioned the endpoint with 4 DBUs and also disabled the scale_to_zero option. For some reason, it randomly drops to 0 provisioned concurrency. Logs available in the serving endpoint service are not insightful. Currently, we are provisioning...
Hi, I apologize if my question wasn't clear; let me clarify. We are not using the scale_to_zero option and we are not sending any warmup requests, so it should never scale to zero regardless of traffic, right?
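For reference, the scale-to-zero behaviour is controlled per served model in the serving endpoint configuration. A sketch of the relevant fragment (model name, version, and workload size are placeholders, not from the original post):

```json
{
  "served_models": [
    {
      "model_name": "my_model",
      "model_version": "1",
      "workload_size": "Small",
      "scale_to_zero_enabled": false
    }
  ]
}
```

With `scale_to_zero_enabled` set to false, the endpoint should keep its provisioned concurrency even with zero traffic; if it still drops to 0, that would be worth raising with Databricks support.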