03-28-2022 12:47 PM
Within a loop I create a few dataframes. I can union them without issue if they have the same schema, using df_unioned = reduce(DataFrame.unionAll, df_list). My problem is how to union them when one of the dataframes in df_list has a different number of columns. I thought df_unioned = reduce(DataFrame.unionByName, df_list, allowMissingColumns=True) would solve the issue, but it gives me the error: reduce() takes no keyword arguments. Thanks in advance. Let me know if you need any more details.
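For context on the error: functools.reduce is implemented to accept positional arguments only, so any keyword argument (including one meant for the function being folded) raises exactly this TypeError. The usual fix is to wrap the call in a lambda that carries the keyword argument. Below is a minimal, runnable illustration using plain integers instead of Spark dataframes; the commented-out Spark line is a sketch assuming PySpark 3.1+ (where unionByName gained allowMissingColumns), not code run here.

```python
from functools import reduce
from operator import add

nums = [1, 2, 3]

# functools.reduce takes only positional arguments, so passing any
# keyword raises "TypeError: reduce() takes no keyword arguments".
err = None
try:
    reduce(add, nums, initializer=0)
except TypeError as exc:
    err = exc

# Fix: bake the keyword argument into the function you pass, e.g. a lambda.
# The Spark equivalent (sketch, assumes PySpark 3.1+) would be:
#   df_unioned = reduce(
#       lambda a, b: a.unionByName(b, allowMissingColumns=True), df_list)
total = reduce(lambda a, b: add(a, b), nums)  # -> 6
```

The lambda closes over allowMissingColumns=True on every pairwise call, which is what reduce cannot forward on its own.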
03-31-2022 08:19 AM
@Joseph Kambourakis I found a way to achieve this, using the function:

def union_all(dfs):
    if len(dfs) > 1:
        return dfs[0].unionByName(union_all(dfs[1:]), allowMissingColumns=True)
    else:
        return dfs[0]
03-29-2022 05:00 AM
Union doesn't work if the dataframes have different schemas or column sets. If you do need to union dataframes with different schemas, add null columns for anything missing so they all end up with the same schema.
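The null-filling idea can be sketched without Spark. Below, each "dataframe" is a list of row dicts and align_schemas adds None for any missing column before a plain concatenation; in Spark itself you would add the missing columns with something like df.withColumn(c, lit(None)) per missing column name (the column names here are made up for illustration).

```python
# Pure-Python stand-in for "pad every frame with null columns, then union".
# Assumes each frame is non-empty and all rows in a frame share one schema.
def align_schemas(frames):
    # Collect the union of all column names across all frames.
    all_cols = set().union(*(set(f[0]) for f in frames))
    # Rebuild every row with every column, defaulting missing ones to None.
    return [
        [{c: row.get(c) for c in all_cols} for row in f]
        for f in frames
    ]

df1 = [{"id": 1, "name": "a"}]
df2 = [{"id": 2}]  # missing the "name" column
aligned = align_schemas([df1, df2])
unioned = [row for f in aligned for row in f]
```

Once the schemas match, a plain union (here just concatenation) is safe, which is the same reason Spark's unionAll works after the padding step.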
04-01-2022 05:37 AM
Awesome!!
@Kris Koirala, thanks for sharing your solution here!
07-23-2023 08:47 PM - edited 07-23-2023 08:58 PM
Hi,
I have come across the same scenario; using reduce() and unionByName we can implement the solution as below:

val lstDF: List[DataFrame] = List(df1, df2, df3, df4, df5)
val combinedDF = lstDF.reduce((df1, df2) => df1.unionByName(df2, allowMissingColumns = true))
#Scala #Spark #multiple schema