11-23-2024 08:04 AM
Hi,
I'm fairly new to Databricks, and in some examples, blogs, etc. I see the cloud_files() function being used. But I've never been able to find any documentation on it. Is there a reason for this?
And what is the exact use case for the function? Most examples seem to involve DLT.
Thanks
Accepted Solutions
11-23-2024 02:21 PM
Hi @Jefke ,
You can't find any documentation because cloud_files() has been deprecated. You should use read_files instead.
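For example, a minimal sketch of the replacement in DLT SQL (the table name and storage path are placeholders, not from this thread):

-- Streaming ingestion with read_files, the documented successor to cloud_files
CREATE OR REFRESH STREAMING TABLE my_table
AS SELECT * FROM STREAM read_files(
  'abfss://container@account.dfs.core.windows.net/path',  -- cloud storage location (placeholder)
  format => 'json'                                        -- file format
);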
11-24-2024 09:12 PM
Hi @Jefke ,
The cloud_files() function in Databricks is part of the Databricks Auto Loader, a tool used for incremental data ingestion from cloud storage like Azure Blob Storage, Amazon S3, or Google Cloud Storage. This function is specifically optimized for streaming or continuous loading of files, making it popular in Delta Live Tables (DLT) pipelines and other data engineering workflows.
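For illustration, a minimal sketch of how cloud_files() typically appears in a DLT pipeline (legacy SQL syntax; the table name, path, and option shown are placeholder assumptions):

-- Legacy Auto Loader syntax in a DLT pipeline; extra options go in a map
CREATE OR REFRESH STREAMING LIVE TABLE raw_events
AS SELECT * FROM cloud_files(
  's3://my-bucket/landing/events/',             -- cloud storage location (placeholder)
  'json',                                       -- file format
  map('cloudFiles.inferColumnTypes', 'true')    -- optional Auto Loader setting
);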
11-23-2024 08:32 AM
Hi, the cloud_files function is related to Auto Loader. You can use it to read, from the checkpoint folder, the files that Auto Loader extracted.
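If the goal is to inspect the files an Auto Loader stream has already discovered, that is actually done with the separate cloud_files_state table-valued function; a quick sketch (the checkpoint path is a placeholder):

-- Query the per-file state of an Auto Loader stream from its checkpoint location
SELECT * FROM cloud_files_state('/path/to/checkpoint');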
11-23-2024 10:56 AM
Hi, you are talking about the cloud_files_state function, whereas I was referring to the cloud_files function. You sometimes see it used in examples in the docs, like the one below. If you search for it in the docs, you mostly end up with the cloud_files_state function you mentioned, but that's something completely different.
Is this a deprecated function? If so, when was that announced? I was just wondering why you often see it used in examples while there is no trace of it in the docs...
You've gotten familiar with Delta Live Tables (DLT) via the quickstart and getting started guide. Now it's time to tackle creating a DLT data pipeline for your cloud storage, with one line of code. Here's how it'll look when you're starting:
CREATE OR REFRESH STREAMING LIVE TABLE <table_name>
AS SELECT * FROM cloud_files('<cloud storage location>', '<format>')

