10-08-2021 10:04 AM
I have a dataframe with the columns Key1, Key2, and Y_N_Col.
For a given key tuple (Key1, Key2), there can be rows with Y_N_Col = "Y" and rows with Y_N_Col = "N".
I need a new dataframe with all rows with Y_N_Col = "Y" (regardless of the key tuple), plus all Y_N_Col = "N" rows whose key tuple has no Y_N_Col = "Y" row.
The dataframe is already calculated in a Scala notebook.
Thanks in advance,
Tiago R.
10-08-2021 01:43 PM
Hello, @Tiago Rente! My name is Piper and I'm a moderator for Databricks. It's great to meet you, and thanks for your question! Let's see if your peers in the community have an answer to your question first. Otherwise, I will follow up shortly with a response.
10-11-2021 03:21 AM
I'd use a left-anti join.
So create a df with all the "Y" rows, then create a df with all the "N" rows and do a left_anti join (on Key1 and Key2) against the df with the "Y" rows.
Then union those two.
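A minimal sketch of this approach in Scala, assuming the source dataframe is named dfDups with columns Key1, Key2, and Y_N_Col (the names used later in the thread):

import org.apache.spark.sql.functions.col

// All rows flagged "Y" are kept unconditionally.
val dfY = dfDups.filter(col("Y_N_Col") === "Y")

// "N" rows survive only when no "Y" row shares their (Key1, Key2) tuple:
// a left-anti join keeps the left-side rows that have no match on the right.
val dfN = dfDups
  .filter(col("Y_N_Col") === "N")
  .join(dfY, Seq("Key1", "Key2"), "left_anti")

// Combine both sets into the final dataframe.
val dfNoDups = dfY.union(dfN)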
10-12-2021 03:34 AM
Hi werners,
Thanks for your answer.
I implemented both your suggestion and the solution I was originally seeking, but I'm not sure which one is more performant.
The solution I was seeking is:
// My data is stored in dfDups
// Create a temp view so it can be queried with SQL
dfDups.createOrReplaceTempView("Dups")
// Create a new df without the "duplicates"
val dfNoDups = sqlContext.sql("""
  select *
  from Dups as Y
  where Y.Y_N_Col = 'Y'
  union all
  select *
  from Dups as N
  where N.Y_N_Col = 'N'
    and not exists (
      select 1
      from Dups as Y
      where Y.Y_N_Col = 'Y'
        and Y.Key1 = N.Key1
        and Y.Key2 = N.Key2
    )
""")
Thanks,
Tiago R.
10-12-2021 04:08 AM
I am not sure. In Spark 2, a WHERE NOT EXISTS was actually planned as a left_anti join; I don't know whether that has changed in Spark 3.
But you can display the query plan for both solutions (and try them both).
10-20-2021 02:47 PM
Yes, the Spark Catalyst optimizer is smart. It is possible that both solutions will end up with the same plan once the optimizer is done with them. You can get the plan using:
dfNoDups.explain()
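For example, assuming the join-based result from the earlier suggestion is stored in its own dataframe (dfNoDupsJoin is a hypothetical name, not from this thread), you can print both physical plans and look for a LeftAnti join node in each:

// If Catalyst rewrites the NOT EXISTS, both plans should show a LeftAnti join.
dfNoDups.explain()        // the SQL solution with NOT EXISTS
dfNoDupsJoin.explain()    // the join-based solution (hypothetical name)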
10-21-2021 01:06 AM
Thanks for your answer, I did not know about explain.
I did some tests and both execute in similar times.
I ended up using the solution suggested by werners, because it will be easier to understand and maintain in the future.