Data Engineering

Fuzzy text matching in Spark

manugarri
New Contributor II

I have a client-provided list of company names.

I have to match those names against an internal database of company names. The client list fits in memory (it's about 10k elements), but the internal dataset is on HDFS and we use Spark for accessing it.

How could I go about matching the client list? I was thinking of building a matrix (RowMatrix) of N x D elements (N being the number of client names and D being the length of the internal list) and computing the similarities pairwise.

How could I do this in Spark? Any help would be more than welcome.

10 REPLIES 10

vida
Contributor II

You can use Python libraries in Spark. I suggest using fuzzywuzzy for computing the similarities.

Then you just need to join the client list with the internal dataset. If you want to make sure you try every single client name against the internal dataset, you can do a Cartesian join. But there may be a better way to cut down the possibilities so you can use a more efficient join, such as assuming the internal dataset name starts with the same letter as the client name. You can even try multiple passes on the internal dataset, applying more complicated logic each time.
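The blocking idea above (compare a client name only against internal names that share its first letter) can be sketched in plain Python. This is a minimal sketch with hypothetical names (`similarity`, `block_key`, `match`); it uses the standard library's difflib as a stand-in for fuzzywuzzy, and the same function could be wrapped in a Spark UDF.

```python
from difflib import SequenceMatcher
from collections import defaultdict

def similarity(a, b):
    # 0-100 score; fuzzywuzzy's ratio() would be a drop-in replacement here
    return int(round(100 * SequenceMatcher(None, a.lower(), b.lower()).ratio()))

def block_key(name):
    # crude blocking key: first letter, as suggested above
    return name.strip().lower()[:1]

def match(client_names, internal_names, threshold=85):
    # Group the large internal list by blocking key so each client name
    # is compared against a small candidate set, not the full list.
    blocks = defaultdict(list)
    for name in internal_names:
        blocks[block_key(name)].append(name)
    matches = []
    for client in client_names:
        for candidate in blocks.get(block_key(client), []):
            score = similarity(client, candidate)
            if score >= threshold:
                matches.append((client, candidate, score))
    return matches
```

The trade-off is recall: a typo in the first letter means the true match is never compared, which is why multiple passes with different blocking keys can help.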

Bill_Chambers
Contributor II

I'm not aware of any out-of-the-box solution for something like this, but there are several talks on the subject, which you can find below.

https://spark-summit.org/2015/events/real-time-fuzzy-matching-with-spark-and-elastic-search/

https://spark-summit.org/2014/talk/fuzzy-matching-with-spark

Yeah, those two examples (which are the top ones that appear on Google) reference a talk that basically doesn't explain how to implement anything.

PaulExter
New Contributor II

Curious if you ever found a workable solution to this. Your question is still one of the top hits when I Google it. We are facing a similar challenge, where we want to be able to fuzzy match high volume lists of individuals in HDFS / Hive. Thinking of creating something in PySpark, or implementing Elastic, but don't want to reinvent the wheel if there's something already out there. We need to standardize our data before matching as well, but that's another story.

MatiasRotenberg
New Contributor II

Like vida said, you can use Python libraries to get text-matching algorithms.

You can even register the function and use it as a UDF in SQL.
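Registering a Python function for use in Spark SQL might look like the following sketch. The function name `name_similarity` is hypothetical, and difflib stands in for fuzzywuzzy; the Spark-specific lines are shown as comments because they assume a running SparkSession named `spark`.

```python
from difflib import SequenceMatcher

def name_similarity(a, b):
    """Return a 0-100 similarity score, similar in spirit to fuzzywuzzy's ratio()."""
    if a is None or b is None:
        return 0
    return int(round(100 * SequenceMatcher(None, a.lower(), b.lower()).ratio()))

# In a Spark session (assumed to exist as `spark`), register it for SQL use:
# from pyspark.sql.types import IntegerType
# spark.udf.register("name_similarity", name_similarity, IntegerType())
# spark.sql("""
#     SELECT c.name, i.name
#     FROM clients c CROSS JOIN internal i
#     WHERE name_similarity(c.name, i.name) > 85
# """)
```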

manugarri
New Contributor II

Matias, in my experience using Python UDFs is tremendously slow.

hansonkx
New Contributor II

For those of you looking for a fairly simple solution, you can use the two native Spark API functions soundex and levenshtein as your fuzzy matching algorithms.

import org.apache.spark.sql.functions.levenshtein

val joinedDF = accountDF.join(
  accountDF2,
  levenshtein(accountDF("name"), accountDF2("name")) < 3 && (accountDF("id") =!= accountDF2("id"))
)
joinedDF.show
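To make the `< 3` threshold concrete, here is a plain-Python version of the edit distance that Spark's built-in levenshtein column function computes (a sketch for illustration; Spark's own implementation is on the JVM):

```python
def levenshtein(s, t):
    # Classic dynamic-programming edit distance: the minimum number of
    # single-character insertions, deletions, and substitutions turning s into t.
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        curr = [i]
        for j, ct in enumerate(t, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (cs != ct)))   # substitution
        prev = curr
    return prev[-1]
```

So the join above keeps pairs of names that differ by at most two character edits; note the comparison is case-sensitive, so normalizing case first usually helps.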


Er__Ram_Saran_B
New Contributor II

Great question about fuzzy text matching in Spark; this is a unique topic, and part of fuzzy logic.

Thanks

Sonal
New Contributor II

You can use Zingg, a Spark-based open-source tool, for this: https://github.com/zinggAI/zingg
