Data Engineering

How to implement the where not exists pattern in scala?

tarente
New Contributor III

I have a dataframe with the following columns:

  • Key1
  • Key2
  • Y_N_Col
  • Col1
  • Col2

For the key tuple (Key1, Key2), I have rows with Y_N_Col = "Y" and Y_N_Col = "N".

I need a new dataframe with all rows where Y_N_Col = "Y" (regardless of the key tuple), plus all rows where Y_N_Col = "N" for which there is no Y_N_Col = "Y" row with the same key tuple.

The dataframe is already calculated in a Scala notebook.
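For illustration, a toy version of the data (the values are hypothetical; in a Databricks Scala notebook spark.implicits._ is already imported, so toDF on a Seq of tuples works directly):

// Hypothetical sample data to illustrate the requirement.
val dfDups = Seq(
  ("A", 1, "Y", "a1", "a2"), // kept: it is a "Y" row
  ("A", 1, "N", "b1", "b2"), // dropped: a "Y" row exists for key (A, 1)
  ("B", 2, "N", "c1", "c2")  // kept: no "Y" row for key (B, 2)
).toDF("Key1", "Key2", "Y_N_Col", "Col1", "Col2")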

Thanks in advance,

Tiago R.


6 REPLIES

Anonymous
Not applicable

Hello, @Tiago Rente! My name is Piper and I'm a moderator for Databricks. It's great to meet you, and thanks for your question! Let's see if your peers in the community have an answer first; otherwise, I will follow up shortly with a response.

-werners-
Esteemed Contributor III

I'd use a left-anti join.

So create a df with all the Y rows, then create a df with all the N rows and do a left_anti join (on Key1 and Key2) against the df with the Y rows.

Then union those two.
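A minimal Scala sketch of this approach, assuming the source DataFrame is called dfDups (as in the reply below) and has the columns listed in the question; dfNoDupsJoin is just an illustrative name:

import org.apache.spark.sql.functions.col

// Split the source DataFrame into the "Y" rows and the "N" rows.
val dfY = dfDups.filter(col("Y_N_Col") === "Y")
val dfN = dfDups.filter(col("Y_N_Col") === "N")

// Keep only the "N" rows whose (Key1, Key2) has no matching "Y" row,
// then add back all the "Y" rows.
val dfNoDupsJoin = dfN
  .join(dfY, Seq("Key1", "Key2"), "left_anti")
  .unionByName(dfY)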

tarente
New Contributor III

Hi werners,

Thanks for your answer.

I implemented both your suggestion and the solution I was originally seeking, but I'm not sure which one is more performant.

The solution I was seeking is:

// My data is stored in the dfDups
// Create a Temp View
dfDups
  .createOrReplaceTempView("Dups")
 
// Create a new df without the "duplicates"
val dfNoDups = sqlContext.sql("""
  select *
    from Dups as Y
   where Y.Y_N_Col = 'Y'
   union all
  select *
    from Dups as N
   where N.Y_N_Col = 'N'
     and not exists (
                     select 1
                       from Dups as Y
                      where Y.Y_N_Col = 'Y'
                        and Y.Key1 = N.Key1
                        and Y.Key2 = N.Key2
                    )
  """)

Thanks,

Tiago R.

-werners-
Esteemed Contributor III

I am not sure. In Spark 2, a WHERE NOT EXISTS was actually planned as a left_anti join; I don't know if this has changed in Spark 3.

But you can display the query plan for both solutions (and try them both).

Dan_Z
Honored Contributor

Yes, the Spark Catalyst optimizer is smart. It is possible that both queries will end up with the same plan after the optimizer is done with them. You can get the plan using:

dfNoDups.explain()
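For example, to compare the two approaches side by side (dfNoDupsJoin is a hypothetical name for the join-based DataFrame from the earlier reply):

// Print the extended (logical + physical) plans for both versions; if Catalyst
// rewrites the NOT EXISTS as a left-anti join, the physical plans should look alike.
dfNoDups.explain(true)
dfNoDupsJoin.explain(true)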

tarente
New Contributor III

Thanks for your answer; I did not know about explain.

I did some tests and they execute in similar times.

I ended up using the solution suggested by werners, because it would be easier to understand and maintain in the future.
