Why do Spark MLlib models only accept a vector column as input?

User16826992666
Databricks Employee

In other libraries I can just use the feature columns themselves as inputs, why do I need to make a vector out of my features when I use MLlib?

User16826992666
Databricks Employee

The modeling algorithms in Spark MLlib accept only a single vectorized column as input rather than individual feature columns. This is done for reasons of efficiency and scaling.

The vector assembler packs the features into one vector per row and can represent them with techniques like sparse vectors, which allow a larger amount of data to be handled with less memory. This helps the modeling algorithms run efficiently even on wide feature sets.

sean_owen
Databricks Employee

Yeah, it's more of a design choice. Rather than have every implementation take column(s) params, this is handled once in VectorAssembler for all of them. One way or the other, most implementations need a vector of inputs anyway. VectorAssembler can also do some optimizations to use sparse vectors where applicable.