Machine Learning
Dive into the world of machine learning on the Databricks platform. Explore discussions on algorithms, model training, deployment, and more. Connect with ML enthusiasts and experts.

One-hot encoding of low-cardinality features failing, causing downstream issues

rtreves
New Contributor III

Hi Databricks support,

I'm training an ML model with MLflow on DBR 13.3 LTS ML (Spark 3.4.1), using databricks.automl_runtime 0.2.17 and databricks.automl 1.20.3, with shap 0.45.1. My training data has two float-type columns with three or fewer unique values, which AutoML flags for one-hot encoding. My training experiment finishes without error. When I examined the notebook of the best-performing model, I toggled `shap_enabled` to `True` to see the SHAP values. However, the cell that produces the SHAP values fails with the following error: "TypeError: no supported conversion for types: (dtype('O'),)" (full traceback attached).

From my debugging, I believe the error occurs because the one-hot encoding of the two aforementioned columns fails, leading to object columns being passed to `scipy.sparse.csr_matrix` within the shap package. Indeed, when I go into the training notebook and try to fit the one-hot encoder to the two columns, I get the message "Warning: No categorical columns found. Calling 'transform' will only return input data."
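
For illustration, here's a minimal sketch of the failure mode I believe is happening (the column names are made up; in the real pipeline the sparse matrix is built inside the shap package):

```python
import pandas as pd
from scipy.sparse import csr_matrix

# Two float columns with very few unique values, like the ones AutoML
# flags for one-hot encoding (column names are hypothetical).
X = pd.DataFrame({
    "flag_a": [0.0, 1.0, 0.0, 1.0],
    "flag_b": [1.0, 2.0, 3.0, 1.0],
})

# If the one-hot encoding step is effectively a no-op and the transformed
# features end up with object dtype, the conversion to a sparse matrix
# fails with the same error:
X_obj = X.to_numpy(dtype=object)
csr_matrix(X_obj)  # TypeError: no supported conversion for types: (dtype('O'),)
```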

Let me know if a full reprex is needed, and the best way to supply it.

Thanks in advance!

8 REPLIES

NandiniN
Databricks Employee

Hi @rtreves ,

The error

`TypeError: no supported conversion for types: (dtype('O'),)`

means that data with an unsupported dtype was passed in, most likely categorical values (probably strings, i.e. object columns).

The function expects numeric values, and passing non-numeric data leads to this error.

 

rtreves
New Contributor III

@NandiniN I've confirmed the features in question are passed in as pandas Series with dtype float. AutoML flags these features as potentially categorical precisely because they are numeric but have few unique values.
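
For reference, this is roughly the check I ran (column names are placeholders for the two features):

```python
# train_df is the pandas DataFrame passed to AutoML; "flag_a" and "flag_b"
# stand in for the two float columns in question.
for col in ["flag_a", "flag_b"]:
    print(col, train_df[col].dtype, train_df[col].nunique())
# Both report float64 with three or fewer unique values, which is why
# AutoML marks them as candidates for one-hot encoding.
```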

NandiniN
Databricks Employee

You earlier mentioned you could share a repro. Can you please do that so that I can check further?

rtreves
New Contributor III

Hi @NandiniN, thanks for taking a look! I'm linking below a reprex notebook in two formats: a .py file for running as a Databricks notebook on DBR LTS, and a .ipynb notebook for running natively in Jupyter (though I haven't tested that format). Let me know if you have issues getting it running.

https://drive.google.com/drive/folders/1V5hMzGlP3-nxXQUc-g8Y2qhZ3hs40ENs?usp=sharing 

lilir5
New Contributor II

Hi there,

I have the same issue after running AutoML without error. Is there any update on this thread? Cheers

rtreves
New Contributor III

No, unfortunately I haven't found any resolution to this issue yet.

rtreves
New Contributor III

@NandiniN Were you able to use my reprex above to investigate this issue at all? Thank you.

NandiniN
Databricks Employee

Hi @rtreves, sorry, I was not able to investigate the above. You may want to create a support ticket with Databricks, as reviewing the code may be an involved effort.

I do have a suggestion: instead of relying on AutoML's automatic one-hot encoding, you can manually perform one-hot encoding on these columns. This way you can ensure that the encoding is applied correctly and that the resulting columns have the appropriate type.
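
As a rough sketch (untested; the column names are placeholders for your two features), something like this with plain pandas before handing the data to AutoML:

```python
import pandas as pd

# Placeholder names for the two low-cardinality float columns.
cat_cols = ["flag_a", "flag_b"]

# Cast to string so the values are treated as categories, then let pandas
# create the one-hot columns explicitly.
train_df[cat_cols] = train_df[cat_cols].astype(str)
train_df = pd.get_dummies(train_df, columns=cat_cols)
```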
