Hello everyone! This is my first post on the forums. I've been stuck on this for a while now and can't figure out why it's happening. Basically, I've been using what seems to be a premade Databricks notebook from Databricks themselves for a DNS Analytics example that uses MLflow and more (here's the notebook). I'm even running the Community Edition variant of the notebook, since I'm on the Community Edition of Databricks. No matter what, I keep getting the same error on command 32:
RestException: RESOURCE_DOES_NOT_EXIST: No file or directory exists on path /FileStore/tables/model.
---------------------------------------------------------------------------
RestException Traceback (most recent call last)
<command-168210906724122> in <module>
4
5 model_path = 'dbfs:/FileStore/tables/model'
----> 6 loaded_model = mlflow.pyfunc.load_model(model_path)
7 spark.udf.register("ioc_detect", loaded_model.predict)
/local_disk0/.ephemeral_nfs/envs/pythonEnv-528b26da-5f61-435c-b5a3-3adc5ab7c638/lib/python3.8/site-packages/mlflow/pyfunc/__init__.py in load_model(model_uri, suppress_warnings)
637 messages will be emitted.
638 """
--> 639 local_path = _download_artifact_from_uri(artifact_uri=model_uri)
640 model_meta = Model.load(os.path.join(local_path, MLMODEL_FILE_NAME))
641
/local_disk0/.ephemeral_nfs/envs/pythonEnv-528b26da-5f61-435c-b5a3-3adc5ab7c638/lib/python3.8/site-packages/mlflow/tracking/artifact_utils.py in _download_artifact_from_uri(artifact_uri, output_path)
77 root_uri = prefix + urllib.parse.urlunparse(parsed_uri)
78
---> 79 return get_artifact_repository(artifact_uri=root_uri).download_artifacts(
80 artifact_path=artifact_path, dst_path=output_path
81 )
Any recommendations, fixes, or just some direction would be appreciated. Thank you!
The code that throws this error:
# Load the DGA model. This is a pre-trained model that we will use to enrich
# our incoming DNS events. You will see how to train this model in a later step.
import mlflow
import mlflow.pyfunc
model_path = 'dbfs:/FileStore/tables/model'
loaded_model = mlflow.pyfunc.load_model(model_path)
spark.udf.register("ioc_detect", loaded_model.predict)