I've got a lot of large CSV files (> 1 GB) that are updated regularly (stored in Data Lake Gen 2). The task is to concatenate these files into a single dataframe and write it out in Parquet format. However, because the files are updated so often, I get a read error. I've tested both batch and streaming (Auto Loader). I think perhaps the only way to deal with this is to create a copy (snapshot) of the files and then process the snapshot in batch, but that takes a very long time. Ideally I would like to avoid that extra step if possible.
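For reference, this is roughly the pattern I've been running (a simplified sketch; the paths, schema, and checkpoint location below are placeholders, not my real values):

```python
# Runs in a Databricks notebook, where `spark` is already defined.
from pyspark.sql.types import StructType, StructField, StringType

# Placeholder schema -- the real files have many more columns.
schema = StructType([
    StructField("id", StringType(), True),
    StructField("value", StringType(), True),
])

# Placeholder paths.
source_path = "abfss://container@account.dfs.core.windows.net/input/*.csv"
output_path = "abfss://container@account.dfs.core.windows.net/output/combined"

# Batch attempt: read all CSVs at once and write a single Parquet dataset.
# This fails intermittently because the source files change while being read.
(spark.read
    .option("header", "true")
    .schema(schema)
    .csv(source_path)
    .write.mode("overwrite")
    .parquet(output_path))

# Streaming attempt with Auto Loader: same kind of read error when files
# are updated mid-read.
(spark.readStream
    .format("cloudFiles")
    .option("cloudFiles.format", "csv")
    .option("header", "true")
    .schema(schema)
    .load(source_path)
    .writeStream
    .format("parquet")
    .option("checkpointLocation",
            "abfss://container@account.dfs.core.windows.net/checkpoints/combined")
    .trigger(availableNow=True)
    .start(output_path))
```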
I've been stuck on this issue for two days now, so any help is much appreciated.