AttributeError: 'NoneType' object has no attribute 'enum_types_by_name'

eshaanpathak
New Contributor III

I run into this error while using MLflow:

AttributeError: 'NoneType' object has no attribute 'enum_types_by_name'

Here is the relevant stack trace:

/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/mlflow/tracking/fluent.py in log_artifacts(local_dir, artifact_path)
    724     """
    725     run_id = _get_or_start_run().info.run_id
--> 726     MlflowClient().log_artifacts(run_id, local_dir, artifact_path)
    727 
    728 
 
/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/mlflow/tracking/client.py in log_artifacts(self, run_id, local_dir, artifact_path)
    999             is_dir: True
   1000         """
-> 1001         self._tracking_client.log_artifacts(run_id, local_dir, artifact_path)
   1002 
   1003     @contextlib.contextmanager
 
/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/mlflow/tracking/_tracking_service/client.py in log_artifacts(self, run_id, local_dir, artifact_path)
    344         :param artifact_path: If provided, the directory in ``artifact_uri`` to write to.
    345         """
--> 346         self._get_artifact_repo(run_id).log_artifacts(local_dir, artifact_path)
    347 
    348     def list_artifacts(self, run_id, path=None):
 
/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/mlflow/tracking/_tracking_service/client.py in _get_artifact_repo(self, run_id)
    312                 run.info.artifact_uri, self.tracking_uri
    313             )
--> 314             artifact_repo = get_artifact_repository(artifact_uri)
    315             # Cache the artifact repo to avoid a future network call, removing the oldest
    316             # entry in the cache if there are too many elements
 
/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/mlflow/store/artifact/artifact_repository_registry.py in get_artifact_repository(artifact_uri)
    105              requirements.
    106     """
--> 107     return _artifact_repository_registry.get_artifact_repository(artifact_uri)
 
/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/mlflow/store/artifact/artifact_repository_registry.py in get_artifact_repository(self, artifact_uri)
     71                 )
     72             )
---> 73         return repository(artifact_uri)
     74 
     75 
 
/databricks/python/lib/python3.9/site-packages/mlflow_databricks_artifacts/store/entrypoint.py in dbfs_artifact_repo_factory(artifact_uri)
     52     # entrypoint function rather than the top-level module. Otherwise, entrypoint
     53     # registration fails with import errors
---> 54     from mlflow_databricks_artifacts.store.artifact_repo import (
     55         DatabricksArtifactRepository,
     56     )
 
/databricks/python_shell/dbruntime/PythonPackageImportsInstrumentation/__init__.py in import_patch(name, globals, locals, fromlist, level)
    169             # Import the desired module. If you're seeing this while debugging a failed import,
    170             # look at preceding stack frames for relevant error information.
--> 171             original_result = python_builtin_import(name, globals, locals, fromlist, level)
    172 
    173             is_root_import = thread_local._nest_level == 1
 
/databricks/python/lib/python3.9/site-packages/mlflow_databricks_artifacts/store/artifact_repo.py in <module>
     23 )
     24 
---> 25 from mlflow_databricks_artifacts.protos.patched_databricks_artifacts_pb2 import (
     26     DatabricksMlflowArtifactsService,
     27     GetCredentialsForWrite,
 
/databricks/python_shell/dbruntime/PythonPackageImportsInstrumentation/__init__.py in import_patch(name, globals, locals, fromlist, level)
    169             # Import the desired module. If you're seeing this while debugging a failed import,
    170             # look at preceding stack frames for relevant error information.
--> 171             original_result = python_builtin_import(name, globals, locals, fromlist, level)
    172 
    173             is_root_import = thread_local._nest_level == 1
 
/databricks/python/lib/python3.9/site-packages/mlflow_databricks_artifacts/protos/patched_databricks_artifacts_pb2.py in <module>
     22 DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\"patched_databricks_artifacts.proto\x12\x1bmlflow_databricks_artifacts\x1a\x15scalapb/scalapb.proto\x1a\x10\x64\x61tabricks.proto\"\x89\x02\n\x16\x41rtifactCredentialInfo\x12\x0e\n\x06run_id\x18\x01 \x01(\t\x12\x0c\n\x04path\x18\x02 \x01(\t\x12\x12\n\nsigned_uri\x18\x03 \x01(\t\x12O\n\x07headers\x18\x04 \x03(\x0b\x32>.mlflow_databricks_artifacts.ArtifactCredentialInfo.HttpHeader\x12\x41\n\x04type\x18\x05 \x01(\x0e\x32\x33.mlflow_databricks_artifacts.ArtifactCredentialType\x1a)\n\nHttpHeader\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\r\n\x05value\x18\x02 \x01(\t\"\xaa\x02\n\x15GetCredentialsForRead\x12\x14\n\x06run_id\x18\x01 \x01(\tB\x04\xf8\x86\x19\x01\x12\x0c\n\x04path\x18\x02 \x03(\t\x12\x12\n\npage_token\x18\x03 \x01(\t\x1ax\n\x08Response\x12M\n\x10\x63redential_infos\x18\x02 \x03(\x0b\x32\x33.mlflow_databricks_artifacts.ArtifactCredentialInfo\x12\x17\n\x0fnext_page_token\x18\x03 \x01(\tJ\x04\x08\x01\x10\x02:_\xe2?(\n&com.databricks.rpc.RPC[$this.Response]\xe2?1\n/com.databricks.mlflow.api.MlflowTrackingMessage\"\xab\x02\n\x16GetCredentialsForWrite\x12\x14\n\x06run_id\x18\x01 \x01(\tB\x04\xf8\x86\x19\x01\x12\x0c\n\x04path\x18\x02 \x03(\t\x12\x12\n\npage_token\x18\x03 \x01(\t\x1ax\n\x08Response\x12M\n\x10\x63redential_infos\x18\x02 \x03(\x0b\x32\x33.mlflow_databricks_artifacts.ArtifactCredentialInfo\x12\x17\n\x0fnext_page_token\x18\x03 \x01(\tJ\x04\x08\x01\x10\x02:_\xe2?(\n&com.databricks.rpc.RPC[$this.Response]\xe2?1\n/com.databricks.mlflow.api.MlflowTrackingMessage*s\n\x16\x41rtifactCredentialType\x12\x11\n\rAZURE_SAS_URI\x10\x01\x12\x15\n\x11\x41WS_PRESIGNED_URL\x10\x02\x12\x12\n\x0eGCP_SIGNED_URL\x10\x03\x12\x1b\n\x17\x41ZURE_ADLS_GEN2_SAS_URI\x10\x04\x32\xb8\x03\n DatabricksMlflowArtifactsService\x12\xc6\x01\n\x15getCredentialsForRead\x12\x32.mlflow_databricks_artifacts.GetCredentialsForRead\x1a;.mlflow_databricks_artifacts.GetCredentialsForRead.Response\"<\xf2\x86\x19\x38\n4\n\x04POST\x12&/mlflow/artifacts/credentials-for-read\x1a\x04\x08\x02\x10\x00\x10\x03\x12\xca\x01\n\x16getCredentialsForWrite\x12\x33.mlflow_databricks_artifacts.GetCredentialsForWrite\x1a<.mlflow_databricks_artifacts.GetCredentialsForWrite.Response\"=\xf2\x86\x19\x39\n5\n\x04POST\x12\'/mlflow/artifacts/credentials-for-write\x1a\x04\x08\x02\x10\x00\x10\x03\x42,\n\x1f\x63om.databricks.api.proto.mlflow\x90\x01\x01\xa0\x01\x01\xe2?\x02\x10\x01')
     23 
---> 24 _ARTIFACTCREDENTIALTYPE = DESCRIPTOR.enum_types_by_name['ArtifactCredentialType']
     25 ArtifactCredentialType = enum_type_wrapper.EnumTypeWrapper(_ARTIFACTCREDENTIALTYPE)
     26 AZURE_SAS_URI = 1

I originally thought the issue was in a dependency of one of the ML libraries I'm using, so I pip-installed that library at a pinned version, but I'm still running into this error. I have been using this code (not shown) to train ML models for the last few months, and the error only started appearing at the end of last week. My code hasn't changed in that time, so I'm confident it's something on the Databricks side. I tried to find this bug online but was unsuccessful. Any thoughts, or fixes planned for the future?
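For reference, the traceback shows modules loading from two different site-packages trees: /local_disk0/.ephemeral_nfs/cluster_libraries (notebook/cluster pip installs) and /databricks/python (the runtime's own packages). A minimal diagnostic sketch, assuming a standard Databricks notebook session, to confirm which protobuf copy actually wins the import and whether more than one copy sits on sys.path:

# Diagnostic sketch: confirm which protobuf copy this Python session imports.
# The traceback shows code loading from both /databricks/python (runtime
# packages) and /local_disk0/.ephemeral_nfs/cluster_libraries (pip installs),
# so a mismatch between two protobuf copies is one plausible cause.
import os
import sys

import google.protobuf

print("protobuf version:", google.protobuf.__version__)
print("loaded from:", google.protobuf.__file__)

# List every sys.path entry that carries its own google/protobuf package.
for entry in sys.path:
    if os.path.isdir(os.path.join(entry, "google", "protobuf")):
        print("protobuf copy found under:", entry)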

3 REPLIES

Debayan
Esteemed Contributor III

Hi, could you please check this issue to see whether it matches what you are hitting: https://github.com/protocolbuffers/protobuf/issues/10151? Also, could you let us know which DBR version you are using?
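For context, that issue tracks failures where a generated _pb2 module ends up with DESCRIPTOR set to None because the installed protobuf runtime doesn't match the version the stubs were generated against. A commonly suggested mitigation, shown here as a sketch only (the exact pin of 3.20.1 is an assumption, not a confirmed fix for this thread), is to pin protobuf in the notebook scope and restart Python:

# Hedged workaround sketch: pin protobuf to a 3.20.x release, then restart
# the Python process so the pin takes effect before any _pb2 import.
%pip install protobuf==3.20.1
dbutils.library.restartPython()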

eshaanpathak
New Contributor III

Hi Debayan. If I'm understanding the link correctly, the solution is to make sure no packages or modules are imported twice on my end, right? I verified that was the case, but I am still running into this error. The Databricks Runtime version I am using is "11.0 ML (includes Apache Spark 3.3.0, GPU, Scala 2.12)". The error also occurs on "11.3 LTS ML (includes Apache Spark 3.3.0, GPU, Scala 2.12)".
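If pinning the version doesn't help, another commonly suggested mitigation for this family of protobuf errors, offered here as an assumption rather than a verified fix for this thread, is to force the pure-Python protobuf implementation before anything imports generated protobuf code:

# Must run at the very top of the notebook, before mlflow (or anything else
# that pulls in generated protobuf modules) is imported in this session.
import os
os.environ["PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"] = "python"

import mlflow  # imported only after the environment variable is set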

Kaniz
Community Manager

Hi @Eshaan Pathak, we haven't heard back from you since @Debayan Mukherjee's last response, and I was checking in to see whether those suggestions helped.

If you have found a solution, please share it with the community, as it can be helpful to others.

Also, please don't forget to click the "Select As Best" button whenever a reply resolves your question.
