I have a feature table in BigQuery (BQ) that I want to ingest into Delta Lake. The table holds roughly 100 TB of data and can be partitioned by DATE.
What best practices and approaches can I take to ingest this 100 TB? In particular, what can I do to distribute the write to Delta Lake across the workers and minimize memory pressure on the driver?
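To make the question concrete, here is a rough sketch of the kind of job I have in mind, using the spark-bigquery connector and processing one DATE partition per batch. The table, bucket, and path names are placeholders, and I'm assuming the connector reads via the BigQuery Storage Read API so the data goes straight to the executors rather than through the driver:

```python
from pyspark.sql import SparkSession

# Assumes a cluster with the spark-bigquery connector installed
# (e.g. com.google.cloud.spark:spark-bigquery-with-dependencies).
spark = SparkSession.builder.appName("bq-to-delta").getOrCreate()

# Read a single DATE partition so each batch stays bounded.
# The where() filter should be pushed down to BigQuery for partition pruning,
# and the connector splits the read into streams handled by the executors.
df = (
    spark.read.format("bigquery")
    .load("my_project.my_dataset.feature_table")   # placeholder table name
    .where("date = DATE '2024-01-01'")             # one partition per run/loop iteration
)

# Append into a Delta table partitioned by the same DATE column.
# The file writes happen on the executors; the driver only commits
# the Delta transaction log metadata.
(
    df.write.format("delta")
    .mode("append")
    .partitionBy("date")
    .save("gs://my-bucket/delta/feature_table")    # placeholder output path
)
```

The idea would be to loop this over the date range (or run several dates in parallel as separate jobs), but I'm not sure whether per-partition batching like this is the right pattern at 100 TB, or whether there's a better way to keep the driver out of the data path.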