Is there anything I can do to increase the available memory, or is there a way to keep this from running out of memory? Here is the code block:
from datetime import datetime, timezone

# Compute a UTC day window in epoch milliseconds, padded by buffer_sec on each side.
dt = datetime.strptime(input_date, "%Y/%m/%d")
buffer_sec = 6
timestamp_start_ms = int((dt.replace(tzinfo=timezone.utc).timestamp() - buffer_sec) * 1000)
timestamp_end_ms = timestamp_start_ms + 24 * 3600 * 1000 + buffer_sec * 2 * 1000

# Select every row in the window and collect it onto the driver as a pandas DataFrame.
interpolated_filtered = (
    f"SELECT * FROM `catalog`.default.events "
    f"WHERE timestamp >= {timestamp_start_ms} AND timestamp <= {timestamp_end_ms} "
    f"ORDER BY timestamp ASC"
)
interpolated_df = spark.sql(interpolated_filtered).toPandas()
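
To make the question concrete, here is a rough sketch of the kind of change I'm wondering about. It is not meant as a working fix: the column names ("timestamp", "value") are placeholders for whatever the events table actually contains, and I'm assuming the same `spark` session as above.

# Sketch only; column names are placeholders for the real schema.

# Arrow-based conversion is supposed to make toPandas() lighter on the driver.
spark.conf.set("spark.sql.execution.arrow.pyspark.enabled", "true")

# Project only the columns actually needed instead of SELECT *, so less data
# has to be collected onto the driver for the same time window.
query = (
    f"SELECT timestamp, value FROM `catalog`.default.events "
    f"WHERE timestamp >= {timestamp_start_ms} AND timestamp <= {timestamp_end_ms} "
    f"ORDER BY timestamp ASC"
)
interpolated_df = spark.sql(query).toPandas()

My understanding is that spark.driver.memory itself can only be raised when the cluster or session is created, not from a running notebook, so if neither of the above is enough I'd have to change the cluster configuration instead.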