Hi @Anonym40 ,
There’s no silver bullet here. It’s rather a matter of opinion. I would prefer the first of the approaches mentioned - that is, a separate process responsible for extracting data from the API and saving it to the data lake. Then, use Auto Loader to process the data into the bronze layer.
With this approach, if there is ever a need to reload the data, you’ll have it readily available in the lake.
Another argument is that for some APIs it is not possible to retrieve data older than a certain period (for example, anything older than 3 months may no longer be available).
If you were to write the data directly to a table and had a minor bug in the response-parsing code that went unnoticed for a long time, you would no longer be able to correct the data. If instead you always land the data in its unchanged format from the source, you have an easy way to rebuild the entire table if any issue comes up.
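To make the idea concrete, here is a minimal sketch of the extraction step. The function name `save_raw_response`, the base path, and the `orders_api` source name are all hypothetical; the point is simply that the raw payload is written byte-for-byte, partitioned by ingestion date, with no parsing applied:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def save_raw_response(payload: bytes, base_dir: str, source: str) -> Path:
    """Persist the raw API response unchanged, partitioned by ingestion date,
    so a parsing bug downstream can be fixed by reprocessing these files."""
    now = datetime.now(timezone.utc)
    target_dir = Path(base_dir) / source / f"ingest_date={now.date().isoformat()}"
    target_dir.mkdir(parents=True, exist_ok=True)
    target = target_dir / f"response_{now.strftime('%H%M%S%f')}.json"
    target.write_bytes(payload)  # no parsing, no transformation
    return target

# Example: pretend this payload came back from the API
raw = json.dumps({"orders": [{"id": 1, "amount": 9.99}]}).encode()
path = save_raw_response(raw, "/tmp/lake/raw", "orders_api")
```

Auto Loader can then watch the landing directory (via the `cloudFiles` source) and incrementally ingest new files into the bronze table, while the originals stay in the lake for any future reload.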