
Optimal Strategies for downloading large query results with Databricks API

rafal_walisko
New Contributor II

Hi everyone,

I'm currently facing an issue handling large result sets with the Databricks API. Specifically, I have a query that returns a significant volume of data, sometimes spanning more than 200 chunks.

My initial approach was to retrieve the external_link for each chunk within a loop and then download the .csv file containing the data. However, I've encountered a bottleneck: obtaining the external links alone takes a considerable amount of time, so many links expire before the files can be downloaded.

I'm wondering if anyone has found an optimal strategy or method for dealing with this problem. For instance, is it feasible to generate and retrieve all the links at once and then download the files in parallel?
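Something like the sketch below is what I have in mind: instead of resolving every link up front, each worker fetches its chunk's presigned link immediately before downloading it, so the link cannot expire in between. This is only a minimal sketch assuming the SQL Statement Execution API 2.0 and a statement already executed with disposition=EXTERNAL_LINKS and format=CSV; the host, token, statement ID, and chunk count are placeholders.

```python
import os
from concurrent.futures import ThreadPoolExecutor, as_completed

import requests

# Placeholder configuration -- adjust to your workspace and statement.
HOST = os.environ["DATABRICKS_HOST"]    # e.g. https://adb-....azuredatabricks.net
TOKEN = os.environ["DATABRICKS_TOKEN"]
STATEMENT_ID = "your-statement-id"      # a finished EXTERNAL_LINKS statement
HEADERS = {"Authorization": f"Bearer {TOKEN}"}


def download_chunk(chunk_index: int, out_dir: str = "results") -> str:
    """Fetch the presigned link for one chunk and download it immediately,
    so the short-lived link cannot expire between retrieval and download."""
    url = f"{HOST}/api/2.0/sql/statements/{STATEMENT_ID}/result/chunks/{chunk_index}"
    resp = requests.get(url, headers=HEADERS, timeout=60)
    resp.raise_for_status()
    external_link = resp.json()["external_links"][0]["external_link"]

    # Presigned URLs must be fetched WITHOUT the Databricks auth header.
    data = requests.get(external_link, timeout=300)
    data.raise_for_status()
    path = os.path.join(out_dir, f"chunk_{chunk_index}.csv")
    with open(path, "wb") as f:
        f.write(data.content)
    return path


os.makedirs("results", exist_ok=True)
chunk_count = 200  # take this from the statement's result manifest (total_chunk_count)

# Each worker resolves its own link just-in-time, then downloads in parallel.
with ThreadPoolExecutor(max_workers=8) as pool:
    futures = [pool.submit(download_chunk, i) for i in range(chunk_count)]
    for fut in as_completed(futures):
        print("downloaded", fut.result())
```

Fetching the link inside each worker, rather than collecting all links first, keeps every presigned URL well within its expiration window while still parallelizing the downloads.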

Any insights or suggestions would be greatly appreciated.

1 REPLY

Datagyan
New Contributor II

I am also facing the same issue. One approach I will try tomorrow: create a job that runs on a serverless job cluster. Whenever a user clicks the download button in the UI, it triggers this job. The job reads the table as a DataFrame and writes it to ADLS Gen2, and we can then give the user a download link to that file. Since the DataFrame write would otherwise produce multiple partition files, we have to use coalesce.
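A minimal sketch of what that job body could look like, assuming the cluster already has credentials for the ADLS Gen2 account; the table, container, and storage account names below are placeholders:

```python
# `spark` is provided by the Databricks runtime inside a job or notebook.
df = spark.read.table("catalog.schema.large_table")

# coalesce(1) collapses the result to a single CSV part file so the UI can
# hand out one download link; for very large tables, keeping several
# partitions (and zipping them afterwards) avoids funneling the entire
# write through a single task.
(df.coalesce(1)
   .write
   .mode("overwrite")
   .option("header", "true")
   .csv("abfss://downloads@mystorageacct.dfs.core.windows.net/exports/large_table"))
```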