Hi everyone,
I'm currently facing an issue with handling a large amount of data using the Databricks API. Specifically, I have a query that returns a significant volume of data, sometimes resulting in over 200 chunks.
My initial approach was to retrieve the external_link for each chunk inside a loop and then download the corresponding .csv file. However, I've hit a bottleneck: fetching the links one at a time is slow enough that many of the presigned URLs expire before I get around to downloading them.
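For reference, here's a simplified sketch of what my current loop looks like. I'm assuming the statement was executed with disposition=EXTERNAL_LINKS and format=CSV via the SQL Statement Execution API; HOST, TOKEN, and STATEMENT_ID are placeholders for the workspace URL, access token, and statement id:

```python
import os

import requests

# Placeholders: set these to your workspace URL, token, and statement id.
HOST = os.environ["DATABRICKS_HOST"]
TOKEN = os.environ["DATABRICKS_TOKEN"]
STATEMENT_ID = "your-statement-id"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# The statement status response includes a manifest with the chunk count.
manifest = requests.get(
    f"{HOST}/api/2.0/sql/statements/{STATEMENT_ID}", headers=HEADERS
).json()["manifest"]

for chunk_index in range(manifest["total_chunk_count"]):
    # One API round-trip per chunk just to obtain the external link.
    resp = requests.get(
        f"{HOST}/api/2.0/sql/statements/{STATEMENT_ID}/result/chunks/{chunk_index}",
        headers=HEADERS,
    )
    resp.raise_for_status()
    link = resp.json()["external_links"][0]["external_link"]

    # The external link is a short-lived presigned URL; the Databricks auth
    # header must NOT be sent when downloading from it.
    data = requests.get(link)
    data.raise_for_status()
    with open(f"chunk_{chunk_index}.csv", "wb") as f:
        f.write(data.content)
```

With 200+ chunks, the sequential link fetches at the top of the loop add up, and by the time I reach the later chunks their URLs have often already expired.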
I'm wondering if anyone has found an optimal strategy or method for dealing with this problem. For instance, is it feasible to generate and retrieve all the links at once and then download the files in parallel?
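One idea I've been toying with is something like the sketch below (untested, same placeholders as above): each worker fetches its chunk's link and downloads it immediately, so every presigned URL is consumed right after it is issued instead of sitting in a list while the others expire.

```python
import os
from concurrent.futures import ThreadPoolExecutor, as_completed

import requests

HOST = os.environ["DATABRICKS_HOST"]
TOKEN = os.environ["DATABRICKS_TOKEN"]
STATEMENT_ID = "your-statement-id"  # placeholder
HEADERS = {"Authorization": f"Bearer {TOKEN}"}


def download_chunk(chunk_index: int) -> str:
    # Fetch this chunk's presigned URL...
    resp = requests.get(
        f"{HOST}/api/2.0/sql/statements/{STATEMENT_ID}/result/chunks/{chunk_index}",
        headers=HEADERS,
    )
    resp.raise_for_status()
    link = resp.json()["external_links"][0]["external_link"]

    # ...and download it straight away (no auth header on the presigned URL).
    data = requests.get(link)
    data.raise_for_status()
    path = f"chunk_{chunk_index}.csv"
    with open(path, "wb") as f:
        f.write(data.content)
    return path


manifest = requests.get(
    f"{HOST}/api/2.0/sql/statements/{STATEMENT_ID}", headers=HEADERS
).json()["manifest"]

with ThreadPoolExecutor(max_workers=8) as pool:
    futures = [
        pool.submit(download_chunk, i)
        for i in range(manifest["total_chunk_count"])
    ]
    for fut in as_completed(futures):
        print("downloaded", fut.result())
```

I'm not sure whether this is the right pattern, or whether there's a better-supported way to materialize all the links up front without racing their expiration.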
Any insights or suggestions would be greatly appreciated.