A file size of 100-200 MB per file is a good target for Spark.
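As a minimal sketch of one way to land in that range, you can control the number of output files at write time; the path and the partition count below are hypothetical, and on Databricks the `delta.targetFileSize` table property is another way to steer OPTIMIZE toward a target size.

```python
# Minimal sketch, assuming a Delta table at a hypothetical path; the goal is
# simply to end up with files in the ~100-200 MB range.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("file-sizing").getOrCreate()

df = spark.read.format("delta").load("/data/ledger")   # hypothetical source table

# Pick the partition count so that (total table size / num_files) lands
# near 128 MB; 16 here is just an illustrative value.
num_files = 16

(df.repartition(num_files)
   .write
   .format("delta")
   .mode("overwrite")
   .save("/data/ledger_sized"))
```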
Regarding efficiency, it depends on many factors. If you filter heavily on particular fields, you can add a bloom filter index. If your queries filter by timestamp, ZORDER is usually enough. If the data is naturally divided by a low-cardinality category and only a subset of it ever needs to be imported (for example, a finance data ledger kept for three separate companies), then partitioning by that category is fine: after optimization you would end up with, say, three files of about 60 MB each, which makes sense when we know that only some of the partitions have to be imported. A hedged sketch of these three options follows below.
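The table, column, and path names in this sketch are hypothetical, and the first two commands assume Databricks Delta (bloom filter indexes and `OPTIMIZE ... ZORDER BY` are Databricks-specific); partitioning by a category column is plain PySpark.

```python
# Hedged sketch of the three layout options; names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("layout-options").getOrCreate()

# 1) Bloom filter index for a field you filter on frequently (point lookups).
#    Databricks Delta feature; applies to data written after index creation.
spark.sql("""
    CREATE BLOOMFILTER INDEX ON TABLE ledger
    FOR COLUMNS (account_id OPTIONS (fpp = 0.1, numItems = 50000000))
""")

# 2) Z-ordering when queries mostly filter on a timestamp column.
spark.sql("OPTIMIZE ledger ZORDER BY (event_ts)")

# 3) Partitioning by a low-cardinality category (e.g. one ledger per company),
#    so importing one company's data reads only its own partition.
df = spark.read.table("ledger")
(df.write
   .format("delta")
   .partitionBy("company")
   .mode("overwrite")
   .saveAsTable("ledger_by_company"))
```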