I want to pass multiple columns as pivot arguments when pivoting a DataFrame in PySpark, something like:

```python
mydf.groupBy("id").pivot("day", "city").agg(
    F.sum("price").alias("price"),
    F.sum("units").alias("units")
).show()
```
One workaround I found is to create multiple DataFrames, each pivoted on a different column, and then join them, but that results in multiple scans of the source data. Is there any other way to do this?