Edthehead
Contributor III

We need more info on what kind of data, the volume, and what the called API can handle. Calling an API for single records in parallel can be achieved using a UDF (see THIS). You need to be careful to batch the records so that the target API can handle the parallel load. If you want to send an entire file via the API (assuming the file size is within the API's limits), you can use a Synapse pipeline activity (assuming you are on Azure). As far as I know, Databricks does not have a built-in feature for this.
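Just to illustrate the batching idea outside of Spark: a minimal plain-Python sketch, where `call_api` is a hypothetical stand-in for your real HTTP call and `batch_size`/`max_workers` are assumptions you'd tune to whatever load the target API can absorb.

```python
from concurrent.futures import ThreadPoolExecutor

def call_api(record):
    # Hypothetical stand-in for the real HTTP call
    # (e.g. requests.post(url, json=record)).
    return {"id": record["id"], "status": "ok"}

def batched(records, size):
    # Yield fixed-size chunks so only `size` calls are in flight at once.
    for i in range(0, len(records), size):
        yield records[i:i + size]

def send_in_batches(records, batch_size=5, max_workers=5):
    results = []
    for batch in batched(records, batch_size):
        # Parallelize within a batch, but wait for it to finish
        # before starting the next one, capping load on the API.
        with ThreadPoolExecutor(max_workers=max_workers) as pool:
            results.extend(pool.map(call_api, batch))
    return results

records = [{"id": i} for i in range(12)]
print(len(send_in_batches(records)))  # 12 results, sent 5 at a time
```

In a Spark UDF the per-record call looks similar, but the batching is effectively controlled by partitioning and executor count, so repartition the DataFrame to keep the number of concurrent calls within the API's limits.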