Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

How do you use Cloud Fetch?

aldrich_ang
New Contributor II

We're trying to pull a large amount of data using Databricks SQL and seem to be hitting a bottleneck on network throughput when fetching the data.

I see there's a new feature called Cloud Fetch, and this seems to be the perfect solution for our issue. But I don't see any documentation on how to use this feature.

1 ACCEPTED SOLUTION


Hubert-Dudek
Esteemed Contributor III

Cloud Fetch is an architecture inside the ODBC driver. To use it, you just need the latest ODBC driver: https://databricks.com/blog/2021/08/11/how-we-achieved-high-bandwidth-connectivity-with-bi-tools.htm...

A big amount in SQL: what exactly is big? Some partitioning, a multi-cluster setup, and loading in chunks could also help, as in the sketch below.

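If it helps, here is a minimal pyodbc sketch of a DSN-less connection plus chunked fetching. The driver name, host, HTTP path, token, and table name are placeholders, and process() is a stand-in for whatever you do with each chunk; adjust the connection properties to match your own workspace and driver install.

    import pyodbc

    # Placeholder workspace details; swap in your own host, HTTP path and token.
    connection_string = (
        "Driver=Simba Spark ODBC Driver;"
        "Host=adb-1234567890123456.7.azuredatabricks.net;"
        "Port=443;"
        "HTTPPath=/sql/1.0/warehouses/abcdef1234567890;"
        "SSL=1;"
        "ThriftTransport=2;"
        "AuthMech=3;"
        "UID=token;"
        "PWD=<personal-access-token>;"
    )
    conn = pyodbc.connect(connection_string, autocommit=True)

    cursor = conn.cursor()
    cursor.execute("SELECT * FROM my_schema.my_big_table")  # placeholder table

    # Pull rows in chunks instead of one huge fetchall() to keep memory flat.
    while True:
        rows = cursor.fetchmany(100_000)
        if not rows:
            break
        process(rows)  # stand-in for your own handling of each chunk

With the 2.6.17+ driver and a supported runtime, Cloud Fetch should engage automatically for large result sets; the chunked fetch simply keeps client memory under control.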

4 REPLIES

aldrich_ang
New Contributor II

Is there any way we can confirm it's using Cloud Fetch?

I'm not sure of the exact size, but it's up to hundreds of GB of data across multiple queries.

Looking at the metrics from the VM that's executing the query, the max throughput is 60 MB/s.

That doesn't seem to match the throughput shown in the document; it's closer to the single-threaded baseline. I'm using the 2.6.19 ODBC driver.

Here are sample execution details for one query:

[screenshot: query execution details]
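For reference, a rough way to estimate effective client-side fetch throughput is sketched below. This is only an approximation: the byte count is a Python-side estimate, and cursor is assumed to come from a pyodbc connection like the one shown earlier in the thread.

    import sys
    import time

    def timed_fetch(cursor, chunk_size=100_000):
        # Fetch everything in chunks and print an approximate MB/s figure.
        # The byte count is a rough Python-side estimate, so treat the result
        # as a relative indicator rather than true network throughput.
        start = time.monotonic()
        total_bytes = 0
        while True:
            rows = cursor.fetchmany(chunk_size)
            if not rows:
                break
            total_bytes += sum(sys.getsizeof(str(row)) for row in rows)
        elapsed = time.monotonic() - start
        print(f"~{total_bytes / elapsed / 1e6:.1f} MB/s over {elapsed:.1f} s")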

Hi @Aldrich Ang, the ODBC driver version 2.6.17 and above supports Cloud Fetch, a capability that fetches query results through the cloud storage set up in your Azure Databricks deployment.

To extract query results using this format, you need Databricks Runtime 8.3 or above.

Query results are uploaded to an internal DBFS storage location as arrow-serialized files of up to 20 MB. Azure Databricks generates and returns shared access signatures to the uploaded files when the driver sends fetch requests after query completion. The ODBC driver then uses the URLs to download the results directly from DBFS.

Cloud Fetch is only used for query results larger than 1 MB. Smaller results are retrieved directly from Azure Databricks.

Azure Databricks automatically garbage collects the accumulated files, which are marked for deletion after 24 hours. These marked files are completely deleted after an additional 24 hours.

To learn more about the Cloud Fetch architecture, see How We Achieved High-bandwidth Connectivity With BI Tools.
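To make that flow concrete, here is a simplified sketch, not the driver's actual implementation, of what downloading Arrow-serialized result files from presigned URLs looks like. It assumes the server has already returned the list of URLs and that each file is an Arrow IPC stream; the function name and inputs are illustrative only.

    import io

    import pyarrow as pa
    import pyarrow.ipc as ipc
    import requests

    def download_result_batches(presigned_urls):
        # presigned_urls is assumed to be the list of shared access signature
        # URLs returned by the server after the query completes.
        for url in presigned_urls:
            resp = requests.get(url, timeout=60)
            resp.raise_for_status()
            # Each result file is assumed to be an Arrow IPC stream (<= ~20 MB).
            reader = ipc.open_stream(io.BytesIO(resp.content))
            for batch in reader:
                yield batch

    # Example: stitch all record batches into a single in-memory table.
    # table = pa.Table.from_batches(list(download_result_batches(urls)))

The point is that the heavy data transfer happens directly against cloud storage over many small files, rather than through a single Thrift connection to the cluster.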

-werners-
Esteemed Contributor III

Trying to get an idea of what you are doing:

Are you querying directly against a database of 100+ GB, or is it a Parquet/Delta source?

Also, where is the result fetched to? File download, BI tool, ...?
