01-06-2022 10:15 PM
About Cloud Fetch mentioned in this article:
Are there any public APIs that can be called directly without ODBC or JDBC drivers?
Thanks.
01-07-2022 09:55 AM
Hi @edwardh@kasoftware.cn! My name is Kaniz, and I'm the technical moderator here. Great to meet you, and thanks for your question! Let's see if your peers in the community have an answer to your question first; otherwise, I will get back to you soon. Thanks.
01-11-2022 08:21 PM
Actually, we need a connector for any BI tools or for similar connectivity. @edwardh@kasoftware.cn
05-24-2022 06:23 PM
Thank you for the explanation.
05-24-2022 02:09 AM
Hi @Kaniz Fatma, can you please give some help on this question? Thanks!
05-24-2022 04:07 AM
Hi @edwardh@kasoftware.cn,
The ODBC driver version 2.6.17 and above supports Cloud Fetch, a capability that fetches query results through the cloud storage set up in your Azure Databricks deployment.
To extract query results using this format, you need Databricks Runtime 8.3 or above.
Query results are uploaded to an internal DBFS storage location as Arrow-serialized files of up to 20 MB. Azure Databricks generates and returns shared access signatures to the uploaded files when the driver sends fetch requests after query completion. The ODBC driver then uses the URLs to download the results directly from DBFS.
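As a rough illustration of that last step, here is a minimal Python sketch of what happens with one of those URLs: download the file and deserialize it with pyarrow. The URL is a placeholder, and the assumption that each chunk is an Arrow IPC stream is mine; the actual serialization is an internal detail handled by the driver.

    import requests
    import pyarrow as pa
    import pyarrow.ipc as ipc

    # Placeholder: a short-lived shared access signature URL returned to the
    # driver for one uploaded result chunk (each chunk is at most 20 MB).
    sas_url = "https://<internal-dbfs-storage>/<result-chunk>?<sas-token>"

    # Download the Arrow-serialized chunk directly from cloud storage,
    # bypassing the Databricks control plane.
    resp = requests.get(sas_url, timeout=60)
    resp.raise_for_status()

    # Assumption: the chunk is an Arrow IPC stream; read it into a table.
    reader = ipc.open_stream(pa.BufferReader(resp.content))
    table = reader.read_all()
    print(table.num_rows, "rows in this chunk")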
Cloud Fetch is only used for query results larger than 1 MB. Smaller results are retrieved directly from Azure Databricks.
Azure Databricks automatically garbage-collects the accumulated files, which are marked for deletion after 24 hours. These marked files are completely deleted after an additional 24 hours.
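From the application side nothing extra is needed: you connect through the ODBC driver (2.6.17 or above) as usual, and Cloud Fetch kicks in automatically for result sets over 1 MB. Below is a minimal pyodbc sketch, assuming the Simba Spark ODBC driver is installed; the hostname, HTTP path, token, and table are placeholders, and the exact connection-string keys can vary by driver version.

    import pyodbc

    # Placeholder values: take Host and HTTPPath from your cluster's or SQL
    # endpoint's JDBC/ODBC connection details; PWD is a personal access token.
    conn_str = (
        "Driver=Simba Spark ODBC Driver;"
        "Host=adb-1234567890123456.7.azuredatabricks.net;"
        "Port=443;"
        "HTTPPath=/sql/1.0/endpoints/abcdef1234567890;"
        "SSL=1;"
        "ThriftTransport=2;"
        "SparkServerType=3;"
        "AuthMech=3;"   # token auth: UID is the literal string "token"
        "UID=token;"
        "PWD=dapiXXXXXXXXXXXXXXXXXXXXXXXX;"
    )

    conn = pyodbc.connect(conn_str, autocommit=True)
    cursor = conn.cursor()

    # For results larger than 1 MB the driver fetches the Arrow files from
    # cloud storage (Cloud Fetch); smaller results come back inline. The
    # application code is identical either way.
    cursor.execute("SELECT * FROM my_catalog.my_schema.my_table LIMIT 100000")
    rows = cursor.fetchall()
    print(len(rows), "rows fetched")

    cursor.close()
    conn.close()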
To learn more about the Cloud Fetch architecture, see How We Achieved High-bandwidth Connectivity With BI Tools.
Here are some similar threads with great conversations on Cloud Fetch:
05-24-2022 06:24 PM
It's very detailed, thank you for the explanation!
05-25-2022 12:21 AM
Hi @edwardh@kasoftware.cn, thank you for the update. Would you mind marking the answer as the best in that case, please?
05-25-2022 12:27 AM
Sure.
05-25-2022 01:00 AM
Thank you @edwardh@kasoftware.cn!