
Is there a better method to join two dataframes and not have a duplicated column?

kruhly
New Contributor II

I would like to keep only one of the columns used to join the dataframes. Using select() after the join does not seem straightforward because the real data may have many columns, or the column names may not be known. A simple example below:

llist = [('bob', '2015-01-13', 4), ('alice', '2015-04-23', 10)]
ddf = sqlContext.createDataFrame(llist, ['name', 'date', 'duration'])
print ddf.collect()

up_ddf = sqlContext.createDataFrame([('alice', 100), ('bob', 23)], ['name', 'upload'])

This keeps both 'name' columns when we only want one!

df = ddf.join(up_ddf, ddf.name == up_ddf.name)
print df.collect()

display(df.select(df.name, (df.duration/df.upload).alias('duration_per_upload')))

Executing display above causes an ambiguous name error:

org.apache.spark.sql.AnalysisException: Reference 'name' is ambiguous, could be: name#8484, name#8487.

The error can be avoided by referencing up_ddf.name from the right-hand dataframe instead:

df.select(up_ddf.name, ...

but this seems awkward. Is there a better method to join two dataframes and get only one 'name' column?


bplaster
New Contributor II

As of Spark 1.4, you should be able to just:

val new_ddf = ddf.join(up_ddf, "name")

Similar email thread here
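
For reference, the same shorthand works in PySpark. A minimal sketch with the DataFrames from the original question (the expected column list in the comment is my assumption, not verified output):

# Joining on the column name keeps a single copy of 'name' in the result.
df = ddf.join(up_ddf, "name")
print df.columns  # expect: ['name', 'date', 'duration', 'upload']
display(df.select(df.name, (df.duration/df.upload).alias('duration_per_upload')))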

krdeepak
New Contributor II

Looks like in Spark 1.5 we don't have a df.join function; there is a top-level join function.

How do I remove the join column (which appears twice in the joined table, so any aggregate on that column fails)?

JingtaoYun
New Contributor II

This is not efficient, especially when joining on a bunch of columns. The duplicate should be removed automatically after the join; I can't understand why it is done this way.

jsharrett
New Contributor II

ddf = ddf.join(up_ddf, ddf.name == up_ddf.name).drop(up_ddf.name)
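
With the duplicate gone, the select from the original question should no longer hit the ambiguity error. A quick PySpark sketch of the full pattern (my example, using the question's DataFrames):

# 'name' now appears once, so unqualified references resolve cleanly.
joined = ddf.join(up_ddf, ddf.name == up_ddf.name).drop(up_ddf.name)
display(joined.select(joined.name, (joined.duration/joined.upload).alias('duration_per_upload')))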

Bill_Chambers
Contributor II (Accepted Solution)

This is covered in the Databricks Spark FAQ:

http://docs.databricks.com/spark/latest/faq/join-two-dataframes-duplicated-column.html

We'll keep that up to date!

I followed the same approach as in the article above, but it did not work for me.

Both df1 and df2 have the same set of 1006 columns. The result was created with 2012 columns.

scala> df1.join(df2, Seq("file_name","post_evar30") )

res24: org.apache.spark.sql.DataFrame = [file_name: string, post_evar30: string ... 2012 more fields]

404 -> didn't keep this article up to date. 😞
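
My reading of why this happens (not stated in the thread): Seq("file_name", "post_evar30") only deduplicates those two keys; the other shared columns still come through from both sides. A small PySpark sketch of the same effect, with made-up column names:

# Both frames share 'shared' and 'v', but we join on 'k' only.
df1 = sqlContext.createDataFrame([(1, 'a', 'x')], ['k', 'shared', 'v'])
df2 = sqlContext.createDataFrame([(1, 'a', 'y')], ['k', 'shared', 'v'])
# 'k' appears once, but 'shared' and 'v' each appear twice.
print df1.join(df2, ['k']).columns  # expect: ['k', 'shared', 'v', 'shared', 'v']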

jsharrett
New Contributor II

drop() only removes the specific DataFrame instance of the column. So if you have:

val new_ddf = ddf.join(up_ddf, ddf("name") === up_ddf("name"))

then in new_ddf you have two columns, ddf.name and up_ddf.name.

val new_ddf = ddf.join(up_ddf, ddf("name") === up_ddf("name")).drop(up_ddf.col("name")) will remove that column and leave only ddf.name in new_ddf.

bdas77
New Contributor II

How do I drop the duplicate column after a left_outer/left join? I noticed that drop works for an inner join, but the same is not working for a left join; in this case I want to drop the duplicate join column from the right:

val column = right(joinColumn)

val test = left.join(broadcast(right), left(joinColumn) === right(joinColumn), "left_outer")

val newDF = test.drop(column)

Harshil
New Contributor II

I noticed similar behavior. Even when you specify right_dataframe.col("columnname") in a filter condition or drop function, it uses left_dataframe.col("columnname") during execution.
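
One workaround when column lineage gets lost like this is to rename the right-hand key before the join, so there is never an ambiguous duplicate to drop. A PySpark sketch (left, right, and the column name "name" standing in for joinColumn are my assumptions):

from pyspark.sql.functions import broadcast

# Rename the right-hand join key up front; after the join the helper
# column has a unique name and can be dropped by string, not lineage.
right_renamed = right.withColumnRenamed("name", "name_right")
test = left.join(broadcast(right_renamed), left["name"] == right_renamed["name_right"], "left_outer")
newDF = test.drop("name_right")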

Carrod
New Contributor II

In Python, we can solve it like this: [code screenshot from the original post; not recoverable]
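
The screenshot is lost, but presumably it showed the list form of join, which deduplicates the keys in PySpark; a sketch with the question's DataFrames:

# Passing a list of column names keeps one copy of each join key.
result = ddf.join(up_ddf, ["name"], "inner")
print result.columns  # expect: ['name', 'date', 'duration', 'upload']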

And in java, we can use:

public Dataset<Row> join(Dataset<?> right,
                scala.collection.Seq<String> usingColumns,
                String joinType)

More in the Java API docs: http://spark.apache.org/docs/2.1.0/api/java/index.html

According to this Stack Overflow question:

http://stackoverflow.com/questions/35988315/convert-java-list-to-scala-seq

the usingColumns parameter can be built from a Java ArrayList converted to a Scala Seq.

TejuNC
New Contributor II

This is expected behavior. DataFrame.join is equivalent to a SQL join like this:

SELECT * FROM a JOIN b ON joinExprs

If you want to ignore duplicate columns, just drop them or select the columns of interest afterwards. If you want to disambiguate, you can access them through the parent DataFrames:

val a: DataFrame = ???
val b: DataFrame = ???
val joinExprs: Column = ???

a.join(b, joinExprs).select(a("id"), b("foo"))
// drop equivalent
a.alias("a").join(b.alias("b"), joinExprs).drop(b("id")).drop(a("foo"))

or use aliases:

// As for now aliases don't work with drop
a.alias("a").join(b.alias("b"), joinExprs).select($"a.id", $"b.foo")
