It seems you can convert between PySpark DataFrames and Arrow objects by using Pandas as an intermediary, but that route has real limitations: it collects all records in the DataFrame to the driver, so it should only be done on a small subset of the data, and beyond that you hit type-conversion warnings and run out of memory.
What's a more efficient way to convert a PySpark DataFrame to Arrow directly, without going through Pandas?