Got it - how about using unionAll? I believe this snippet does what you want:
from pyspark.sql import Row
array = [Row(value=1), Row(value=2), Row(value=3)]
df = sqlContext.createDataFrame(sc.parallelize(array))

array2 = [Row(value=4), Row(value=5), Row(value=6)]
df2 = sqlContext.createDataFrame(sc.parallelize(array2))

two_tables = df.unionAll(df2)
two_tables.collect()
>> Out[17]: [Row(value=1), Row(value=2), Row(value=3), Row(value=4), Row(value=5), Row(value=6)]