If you have a large dataset, you might want to export it to a bucket in Parquet format from your notebook:

%python
# Load the table into a DataFrame
df = spark.sql("SELECT * FROM your_table_name")

# Write it out to your bucket as Parquet files
df.write.parquet(your_s3_path)
For now, 'CREATE TEMPORARY VIEW' is the way to go. After you read from it the first time, subsequent reads are served from cache, so the view isn't recomputed every time.
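If you want to guarantee the results are reused rather than recomputed, you can cache the view explicitly. Here's a minimal sketch, assuming the same your_table_name as above (my_view is just a hypothetical name for the view):

%python
# Register the query as a temporary view (lazy: nothing is computed yet)
spark.sql("CREATE OR REPLACE TEMPORARY VIEW my_view AS SELECT * FROM your_table_name")

# Cache the view; CACHE TABLE is eager by default in Spark SQL,
# so the results are materialized into memory right away
spark.sql("CACHE TABLE my_view")

# Later queries against the view hit the cache instead of recomputing it
spark.sql("SELECT count(*) FROM my_view").show()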