@Dineshkumar Gopalakrishnan Python's exec() function can be used to execute a Python statement, which in your case could be a PySpark union statement. Refer to the sample code snippet below.
df1 = spark.sparkContext.parallelize([(1, 2, ["1", "2", "3"]), (1, 3, ["4", "1", "5", "6"]) , (2, 4, ["2"]),(2, 5, ["3"])]).toDF(["store", "count", "values"])
df2 = spark.sparkContext.parallelize([(3, 2, ["1", "2", "3"]), (3, 3, ["4", "1", "5", "6"]) , (4, 4, ["2"]),(4, 5, ["3"])]).toDF(["store", "count", "values"])
union_statement = "df = df1.union(df2)"
exec(union_statement)
The above code executes the PySpark union API on df1 and df2 and assigns the result to the DataFrame 'df'.
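One caveat worth knowing: a plain exec("df = ...") only binds 'df' reliably at module (top-level) scope. Inside a function, variables assigned by exec() do not become local variables, so it is safer to pass an explicit namespace dict and read the result back from it. A minimal sketch of that pattern, using plain lists as stand-ins for DataFrames (the pattern is identical for PySpark):

```python
# Lists stand in for DataFrames here; for PySpark the statement
# would be "df = df1.union(df2)" with real DataFrames in the namespace.
df1 = [1, 2]
df2 = [3, 4]

namespace = {"df1": df1, "df2": df2}
exec("df = df1 + df2", namespace)  # runs the statement inside `namespace`
df = namespace["df"]               # retrieve the result explicitly
print(df)  # [1, 2, 3, 4]
```

This works the same whether the code runs at top level or inside a function, which makes it a safer default.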
You can have a more complex union statement as part of your dynamic string.
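For example, if the number of DataFrames is only known at runtime, you can build a chained union statement dynamically and exec() it. A sketch of that idea, again using Python sets as stand-ins (note that set.union deduplicates, whereas PySpark's DataFrame.union keeps duplicates; the string-building logic is what carries over):

```python
# Hypothetical runtime collection of DataFrames keyed by variable name;
# with PySpark these values would be real DataFrames.
dfs = {"df1": {1, 2}, "df2": {2, 3}, "df3": {3, 4}}

names = list(dfs)
# Build "df = df1.union(df2).union(df3)" dynamically.
statement = "df = " + names[0] + "".join(f".union({name})" for name in names[1:])

namespace = dict(dfs)
exec(statement, namespace)
print(namespace["df"])  # {1, 2, 3, 4}
```

With PySpark you would usually prefer functools.reduce(DataFrame.union, dfs.values()) over exec() when you already hold the DataFrame objects, but the string-based approach shown above is useful when the statement itself arrives as dynamic text.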