Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Spark is not able to resolve columns correctly when joining DataFrames

Anonymous
Not applicable

Hello all,

I'm using PySpark (Python 3.8) on Spark 3.0 on Databricks. When running this DataFrame join:

next_df = days_currencies_matrix.alias('a').join(
    data_to_merge.alias('b'),
    [
        days_currencies_matrix.dt == data_to_merge.RATE_DATE,
        days_currencies_matrix.CURRENCY_CODE == data_to_merge.CURRENCY_CODE,
    ],
    'LEFT',
).select(
    days_currencies_matrix.CURRENCY_CODE,
    days_currencies_matrix.dt.alias('RATE_DATE'),
    data_to_merge.AVGYTD,
    data_to_merge.ENDMTH,
    data_to_merge.AVGMTH,
    data_to_merge.AVGWEEK,
    data_to_merge.AVGMTD,
)

And I'm getting this error:

Column AVGYTD#67187, AVGWEEK#67190, ENDMTH#67188, AVGMTH#67189, AVGMTD#67191 are ambiguous. It's probably because you joined several Datasets together, and some of these Datasets are the same. This column points to one of the Datasets but Spark is unable to figure out which one. Please alias the Datasets with different names via `Dataset.as` before joining them, and specify the column using qualified name, e.g. `df.as("a").join(df.as("b"), $"a.id" > $"b.id")`. You can also set spark.sql.analyzer.failAmbiguousSelfJoin to false to disable this check.

This tells me that the above columns belong to more than one dataset.

Why is that happening? The code tells Spark exactly which source DataFrame each column comes from; also, days_currencies_matrix has only two columns: dt and CURRENCY_CODE.

Is it because the days_currencies_matrix DataFrame is actually built from data_to_merge? Is this related to lazy evaluation, or is it a bug?
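
To make that hypothesis concrete, here is a minimal sketch with made-up data (hypothetical table contents, not the real ones) of the pattern that can trigger this check: when one side of a join is derived from the other, both sides share the same column lineage, so columns referenced through the original DataFrame objects can no longer be pinned to one side.

from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical rates table, standing in for data_to_merge
rates = spark.createDataFrame(
    [("EUR", "2021-01-01", 1.1)],
    ["CURRENCY_CODE", "RATE_DATE", "AVGYTD"],
)

# A DataFrame derived from rates, standing in for days_currencies_matrix:
# it keeps only two columns, but it still shares rates' lineage
matrix = rates.select("CURRENCY_CODE", F.col("RATE_DATE").alias("dt")).distinct()

joined = matrix.alias("a").join(
    rates.alias("b"),
    [matrix.dt == rates.RATE_DATE, matrix.CURRENCY_CODE == rates.CURRENCY_CODE],
    "left",
)

# Selecting through the original DataFrame object can be flagged as ambiguous,
# because rates appears twice in the joined plan (directly and under matrix):
# joined.select(rates.AVGYTD)
# Qualifying the column through the join alias keeps it unambiguous:
joined.select(F.col("b.AVGYTD")).show()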

BTW, this version works with no issues:


4 REPLIES

Hubert-Dudek
Esteemed Contributor III

In my opinion the problem is in the select, not the join. Please split your code into two steps (join and select).

After the join, verify the schema using next_df.schema or next_df.printSchema() and check the column names.

If you don't find the issue, please share the schemas of your days_currencies_matrix, data_to_merge, and next_df here and I will try to help.
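
A minimal sketch of that two-step split, assuming the two DataFrames from the question are already in scope:

# Step 1: the join on its own, keeping the condition exactly as in the question
joined = days_currencies_matrix.alias('a').join(
    data_to_merge.alias('b'),
    [
        days_currencies_matrix.dt == data_to_merge.RATE_DATE,
        days_currencies_matrix.CURRENCY_CODE == data_to_merge.CURRENCY_CODE,
    ],
    'LEFT',
)

# Inspect what the join actually produced before selecting anything
joined.printSchema()

# Step 2: build the select on top of the verified join result
# (see the sketch under the accepted answer below for one way to qualify the columns)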

Anonymous
Not applicable

OK, I found the problem: the select() works on the columns of next_df (the join result), and I was addressing them the wrong way, using the wrong dataset name.
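
The working version isn't shown in the thread, but one way to address the columns that is consistent with this answer is to qualify them through the join aliases instead of the original DataFrame objects. A hedged sketch, reusing the aliases and column names from the question:

import pyspark.sql.functions as F

# Alias both sides and qualify every column through the aliases, so Spark
# knows which side of the join each column comes from
joined = days_currencies_matrix.alias('a').join(
    data_to_merge.alias('b'),
    [
        F.col('a.dt') == F.col('b.RATE_DATE'),
        F.col('a.CURRENCY_CODE') == F.col('b.CURRENCY_CODE'),
    ],
    'left',
)

next_df = joined.select(
    F.col('a.CURRENCY_CODE'),
    F.col('a.dt').alias('RATE_DATE'),
    F.col('b.AVGYTD'),
    F.col('b.ENDMTH'),
    F.col('b.AVGMTH'),
    F.col('b.AVGWEEK'),
    F.col('b.AVGMTD'),
)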

Anonymous
Not applicable

@Alessio Palma - Howdy! My name is Piper, and I'm a moderator for the community. Would you be happy to mark whichever answer solved your issue so other members may find the solution more quickly?

Anonymous
Not applicable

If it is only about marking it as "Selected as Best", I did that today.
