03-15-2016 04:09 AM
I have a list of client-provided data: a list of company names.
I have to match those names against an internal database of company names. The client list fits in memory (it's about 10k elements), but the internal dataset is on HDFS and we use Spark to access it.
How could I go about matching the client list? I was thinking of building an N x D matrix (RowMatrix), N being the number of client names and D being the size of the internal list, and computing the similarities pairwise.
How could I do this in Spark? Any help would be more than welcome.
04-01-2016 10:04 AM
You can use Python libraries in Spark. I suggest using fuzzywuzzy for computing the similarities.
Then you just need to join the client list with the internal dataset. If you want to make sure you try every single client name against the internal dataset, you can do a cartesian join. But there may be a better way to cut down the candidate pairs so you can use a more efficient join - for example, requiring that the internal name start with the same letter as the client name. You can even make multiple passes over the internal dataset and try more complicated logic each time.
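Here's a rough PySpark sketch of that approach (the column name "name", the sample rows, and the threshold of 85 are placeholders, not anything from the original question):

from fuzzywuzzy import fuzz  # pip install fuzzywuzzy
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import IntegerType

spark = SparkSession.builder.appName("fuzzy-match").getOrCreate()

# Small client list (about 10k rows in practice; two sample rows here).
client_df = spark.createDataFrame([("Acme Corp",), ("Globex Inc",)], ["name"])
# In practice this side would be read from HDFS instead.
internal_df = spark.createDataFrame([("ACME Corporation",), ("Initech LLC",)], ["name"])

# Wrap fuzzywuzzy's 0-100 similarity score as a Spark UDF.
similarity = F.udf(lambda a, b: fuzz.token_sort_ratio(a or "", b or ""), IntegerType())

# Blocking join: only compare names that share a first letter, which avoids
# a full cartesian product. Broadcasting the small client list keeps it cheap.
candidates = internal_df.alias("i").join(
    F.broadcast(client_df).alias("c"),
    F.upper(F.substring(F.col("i.name"), 1, 1)) == F.upper(F.substring(F.col("c.name"), 1, 1)),
)

matches = candidates.where(similarity(F.col("c.name"), F.col("i.name")) >= 85)
matches.show()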
04-01-2016 10:05 AM
I'm not aware of any out-of-the-box solution for something like this, but there are several talks on the subject, which you can find below.
https://spark-summit.org/2015/events/real-time-fuzzy-matching-with-spark-and-elastic-search/
https://spark-summit.org/2014/talk/fuzzy-matching-with-spark
04-01-2016 10:37 AM
Yeah, those two examples (which are the top results on Google) reference talks that basically don't explain how to implement anything.
08-09-2017 02:14 PM
Curious if you ever found a workable solution to this. Your question is still one of the top hits when I Google it. We are facing a similar challenge, where we want to be able to fuzzy-match high-volume lists of individuals in HDFS / Hive. We're thinking of building something in PySpark, or implementing Elasticsearch, but we don't want to reinvent the wheel if there's something already out there. We need to standardize our data before matching as well, but that's another story.
08-10-2017 12:41 AM
Like vida said, you can use Python libraries to get text-matching algorithms.
You can even register the function and use it as a UDF in SQL.
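For example (the function name "fuzzy_ratio", the sample tables, and the threshold of 60 are placeholders):

from fuzzywuzzy import fuzz
from pyspark.sql import SparkSession
from pyspark.sql.types import IntegerType

spark = SparkSession.builder.appName("fuzzy-sql").getOrCreate()

# Register the Python function so it can be called from Spark SQL.
spark.udf.register(
    "fuzzy_ratio",
    lambda a, b: fuzz.token_sort_ratio(a or "", b or ""),
    IntegerType(),
)

spark.createDataFrame([("Acme Corp",)], ["name"]).createOrReplaceTempView("clients")
spark.createDataFrame([("ACME Corporation",)], ["name"]).createOrReplaceTempView("internal")

spark.sql("""
    SELECT c.name AS client_name, i.name AS internal_name
    FROM clients c
    JOIN internal i
      ON substring(c.name, 1, 1) = substring(i.name, 1, 1)
    WHERE fuzzy_ratio(c.name, i.name) >= 60
""").show()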
08-10-2017 01:26 AM
Matias, in my experience Python UDFs are tremendously slow.
11-29-2017 11:05 AM
For those of you looking for a not-too-complicated solution, you can use the two built-in Spark API functions, soundex and levenshtein, as your fuzzy matching algorithms:
import org.apache.spark.sql.functions.levenshtein

val newDF = accountDF.join(
  accountDF2,
  levenshtein(accountDF("name"), accountDF2("name")) < 3 && (accountDF("id") =!= accountDF2("id"))
)
newDF.show()
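And here's a rough PySpark equivalent that also uses soundex, as a cheap blocking key before the levenshtein check (column names are carried over from the Scala example; the sample rows are placeholders):

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("soundex-levenshtein").getOrCreate()

accountDF = spark.createDataFrame(
    [(1, "Acme Corp"), (2, "Acme Corp.")], ["id", "name"])
accountDF2 = accountDF  # self-join, just for illustration

# soundex groups similar-sounding names; levenshtein then enforces a small
# edit distance within each group.
joined = accountDF.alias("a").join(
    accountDF2.alias("b"),
    (F.soundex(F.col("a.name")) == F.soundex(F.col("b.name")))
    & (F.levenshtein(F.col("a.name"), F.col("b.name")) < 3)
    & (F.col("a.id") != F.col("b.id")),
)
joined.show()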
06-04-2019 08:47 PM
Great question about fuzzy text matching in Spark. It's a unique topic, and part of fuzzy logic.
Thanks
09-14-2021 12:13 AM
You can use Zingg, a Spark-based open-source tool for this: https://github.com/zinggAI/zingg