The API -> Cloud Storage -> Delta approach is more suitable.
Auto Loader helps ensure you don't lose any data (it keeps track of discovered files in the checkpoint location using RocksDB to provide exactly-once ingestion guarantees), enables schema inference and evolution, exposes file metadata, and lets you easily switch to batch-style processing using .trigger(once=True) or .trigger(availableNow=True). A minimal sketch is shown below.
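Here is a minimal Auto Loader sketch for this pattern; the paths and table name are placeholders you would adapt to your environment:

```python
# Minimal Auto Loader sketch (Databricks) - paths and table name are placeholders.
raw_path = "s3://my-bucket/api-dumps/"                 # JSON files landed from the API
schema_path = "s3://my-bucket/_schemas/api_raw"        # where Auto Loader stores the inferred schema
checkpoint_path = "s3://my-bucket/_checkpoints/api_raw"

df = (spark.readStream
      .format("cloudFiles")                            # Auto Loader source
      .option("cloudFiles.format", "json")
      .option("cloudFiles.schemaLocation", schema_path)  # enables schema inference / evolution
      .load(raw_path))

(df.writeStream
   .format("delta")
   .option("checkpointLocation", checkpoint_path)      # file-tracking state (RocksDB) lives here
   .trigger(availableNow=True)                         # batch-style: process all available files, then stop
   .toTable("bronze.api_raw"))
```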
In addition, the rescued data column ensures that you never lose or miss data during ETL. It captures any data that wasn't parsed, either because it was missing from the given schema, because of a type mismatch, or because the column casing in the record or file didn't match the schema. So, if data is added or changed in the source API, you will be able to identify the modification and decide what to do: either adapt the flow to integrate the new columns or simply ignore them.
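For example, you can inspect the rescued data column (by default named `_rescued_data`) to spot new or mismatched fields coming from the API; the table name here is the placeholder used above:

```python
# Rows where _rescued_data is non-null carry fields that did not fit the expected schema,
# e.g. new attributes added to the API payload or values with a different type.
rescued = spark.read.table("bronze.api_raw").filter("_rescued_data IS NOT NULL")
rescued.select("_rescued_data").show(truncate=False)
```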
Finally, you always keep your source files in JSON format, so you can re-process them whenever you need, or export and share them in the future.
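Re-processing the raw files later is then just a batch read, independent of the streaming pipeline (path is the same placeholder as above):

```python
# Ad-hoc re-processing sketch: re-read the landed JSON files as a batch DataFrame.
replay_df = spark.read.json("s3://my-bucket/api-dumps/")
replay_df.printSchema()
```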