06-25-2024 01:09 AM
I've got a lot of large CSV files (> 1 GB) that are updated regularly (stored in Data Lake Gen2). The task is to concatenate these files into a single dataframe and write it out in Parquet format. However, since the files are updated so often, I get a read error. I've tested both batch and streaming (Auto Loader). I think perhaps the only way to deal with this is to create a copy (snapshot) of the files and then process them in batch, but that takes a very long time, and ideally I would like to avoid the extra step if possible.
I've been stuck on this issue for two days now, so any help here is much appreciated.
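For reference, the streaming attempt looks roughly like this (paths and options are placeholders, not my exact setup):

```python
# Rough sketch of the Auto Loader attempt - paths are placeholders
# `spark` is the SparkSession provided by the Databricks notebook
df = (spark.readStream
        .format("cloudFiles")
        .option("cloudFiles.format", "csv")
        .option("cloudFiles.schemaLocation", "abfss://landing@<account>.dfs.core.windows.net/_schemas/csv")
        .option("header", "true")
        .load("abfss://landing@<account>.dfs.core.windows.net/csv/"))

(df.writeStream
   .format("parquet")
   .option("checkpointLocation", "abfss://curated@<account>.dfs.core.windows.net/_checkpoints/csv_to_parquet")
   .option("path", "abfss://curated@<account>.dfs.core.windows.net/parquet/")
   .trigger(availableNow=True)
   .start())
```

The read fails whenever a source CSV is rewritten while the job is running.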
- Labels: Spark
Accepted Solutions
06-25-2024 04:14 AM
@Kjetil Since the files are updated often, IMO making a copy makes sense.
What you could try is using a Microsoft.Storage.BlobCreated event to replicate the CSVs into a secondary container.
However, best practice would be some kind of incremental approach on the source side: creating a new file instead of appending to the existing one.
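A minimal sketch of that copy step, assuming an Azure Function with an Event Grid trigger (the connection string, container name and function wiring are placeholders, not a tested implementation):

```python
# Hypothetical Azure Function (Event Grid trigger) that server-side copies every
# blob reported by Microsoft.Storage.BlobCreated into a snapshot container,
# so the Spark job always reads a stable copy.
import azure.functions as func
from azure.storage.blob import BlobServiceClient

CONNECTION_STRING = "<storage-connection-string>"  # assumption: read from app settings in practice
SNAPSHOT_CONTAINER = "csv-snapshots"               # assumption: pre-created target container

def main(event: func.EventGridEvent):
    # The BlobCreated payload includes the URL of the blob that was just written
    blob_url = event.get_json()["url"]
    blob_name = blob_url.rsplit("/", 1)[-1]

    service = BlobServiceClient.from_connection_string(CONNECTION_STRING)
    target = service.get_blob_client(container=SNAPSHOT_CONTAINER, blob=blob_name)

    # Server-side copy; append a SAS token to blob_url if the source is private
    target.start_copy_from_url(blob_url)
```

Your Databricks job (batch or Auto Loader) would then read from the snapshot container instead of the container that is being rewritten.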