Hello @vijaykumarbotla, I hope you're doing well.
This is most likely because both DataFrames contain a column with the same name, so Spark cannot determine which one the select statement refers to and raises an ambiguous-reference AnalysisException.
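For illustration, here is a minimal sketch of how the ambiguity arises (the spark session is the one Databricks provides by default, and the DataFrame and column names are hypothetical):

df_a = spark.createDataFrame([(1, "x")], ["id", "val"])
df_b = spark.createDataFrame([(1, "y")], ["id", "val"])
joined = df_a.join(df_b, df_a["id"] == df_b["id"], how="left")
# Both inputs contribute a column named "val", so this line raises
# AnalysisException: Reference 'val' is ambiguous
joined.select("val")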
To resolve this, you can use the alias method to give each DataFrame a unique alias and then refer to columns by their qualified names (aliasName.columnName) in the select statement. Here's how you can modify your code:
from pyspark.sql import functions as F

reguhjoin = reguhjoin.alias("reguhjoin")
bseg_4j_c2 = bseg_4j_c2.alias("bseg_4j_c2")
# Qualify each column by its alias; backticks are needed for names containing spaces
reguhjoin_joined = reguhjoin.join(bseg_4j_c2, F.col("reguhjoin.conc2") == F.col("bseg_4j_c2.`Concatenate 2`"), how="left")
# "reguhjoin.*" keeps every column from the left side of the join
reguhjoin_joined_selected = reguhjoin_joined.select("reguhjoin.*", F.col("bseg_4j_c2.`Is There a PO`"))
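Alternatively, if you only need a couple of columns from the second DataFrame, you can avoid the ambiguity altogether by renaming the join key before joining. This is just a sketch under the same assumptions as above; the name conc2_bseg is hypothetical:

# Keep only the columns we need and rename the key so it no longer clashes
bseg_small = bseg_4j_c2.select("Concatenate 2", "Is There a PO").withColumnRenamed("Concatenate 2", "conc2_bseg")
reguhjoin_joined = reguhjoin.join(bseg_small, reguhjoin["conc2"] == bseg_small["conc2_bseg"], how="left").drop("conc2_bseg")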
If possible, please test the code above. If it still doesn't work, please let me know how the environments you are running this code in differ.
Best regards,
Lucas Rocha