Data Engineering
Filter only Delta tables from an S3 folders list

Ovi
New Contributor III

Hello everyone,

From a list of folders on s3, how can I filter which ones are Delta tables, without trying to read each one at a time?

Thanks,

Ovi

1 ACCEPTED SOLUTION

Accepted Solutions

NandiniN
Valued Contributor III

This one is elegant: it uses isDeltaTable() from DeltaTableUtils together with dbutils.

import org.apache.spark.sql.delta.DeltaTableUtils
import org.apache.hadoop.fs.Path

val s3Path = "s3://my-bucket/my-folder"

// List all the folders directly under the S3 path
val folders = dbutils.fs.ls(s3Path).map(_.path)

// Keep only the folders that are Delta tables
val deltaFolders = folders.filter(path => DeltaTableUtils.isDeltaTable(spark, new Path(path)))

// Print the resulting list of Delta table folders
deltaFolders.foreach(println)

Note: the same check is available in Python via DeltaTable.isDeltaTable(spark, path) from the delta.tables package.

Thanks & Regards,

Nandini


5 REPLIES

NandiniN
Valued Contributor III

Hello @Ovidiu Eremia,

To find which folders on S3 contain Delta tables, you can look for the specific files that are associated with Delta tables. Delta Lake stores its transaction log in a folder named _delta_log at the root of each Delta table, so you can check for this folder.
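For intuition, here is a minimal, self-contained sketch of that check against a local filesystem (the folder names are hypothetical; on S3 you would list objects instead, as the boto3 code below does):

```python
import pathlib
import tempfile

# Build a tiny directory tree to illustrate the idea (hypothetical names)
root = pathlib.Path(tempfile.mkdtemp())
(root / "sales" / "_delta_log").mkdir(parents=True)  # looks like a Delta table
(root / "raw_csv").mkdir()                           # a plain data folder

# A folder is treated as a Delta table if it contains a _delta_log subfolder
delta_folders = [p.name for p in root.iterdir() if (p / "_delta_log").is_dir()]
print(delta_folders)  # ['sales']
```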

In the code below, we first get the S3 bucket and the objects under the specified prefix. We then keep only the objects whose keys contain _delta_log/, since those belong to Delta tables. Finally, we extract the table-root folder names from those keys, deduplicate them, and print the list of folders that contain Delta tables.

import boto3

s3 = boto3.resource('s3')
bucket_name = 'your-bucket-name'
prefix = 'path/to/folders'

# Get the S3 bucket and the objects under the specified prefix
bucket = s3.Bucket(bucket_name)
objects = bucket.objects.filter(Prefix=prefix)

# Keep only the object keys that sit under a _delta_log folder
delta_keys = [obj.key for obj in objects if '/_delta_log/' in obj.key]

# Extract the unique table-root folders from those keys
delta_folders = sorted({key.split('/_delta_log/')[0] for key in delta_keys})

print(delta_folders)
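To see how the key matching behaves, here is the same logic run on a few hypothetical object keys, with no S3 access needed. Note that an S3 listing returns individual file keys, so matching the /_delta_log/ path segment is the robust check:

```python
# Hypothetical object keys, as an S3 listing would return them (files only)
keys = [
    "path/to/folders/sales/_delta_log/00000000000000000000.json",
    "path/to/folders/sales/part-00000.snappy.parquet",
    "path/to/folders/raw_csv/data.csv",
]

# Keys belonging to a Delta table contain the /_delta_log/ path segment
delta_keys = [k for k in keys if "/_delta_log/" in k]

# The table root is everything before /_delta_log/; a set removes duplicates
delta_folders = sorted({k.split("/_delta_log/")[0] for k in delta_keys})
print(delta_folders)  # ['path/to/folders/sales']
```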

References: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/collections.html

Hope this helps.

Thanks & Regards,

Nandini

Ovi
New Contributor III

Hi @Nandini N,

Thank you for your answer, which also includes an example.

But for my use case, I need to do it using Scala.

Could you please point me in the right direction on how to implement this?

Thank you again,

Ovi

NandiniN
Valued Contributor III

This one is elegant: it uses isDeltaTable() from DeltaTableUtils together with dbutils.

import org.apache.spark.sql.delta.DeltaTableUtils
import org.apache.hadoop.fs.Path

val s3Path = "s3://my-bucket/my-folder"

// List all the folders directly under the S3 path
val folders = dbutils.fs.ls(s3Path).map(_.path)

// Keep only the folders that are Delta tables
val deltaFolders = folders.filter(path => DeltaTableUtils.isDeltaTable(spark, new Path(path)))

// Print the resulting list of Delta table folders
deltaFolders.foreach(println)

Note: the same check is available in Python via DeltaTable.isDeltaTable(spark, path) from the delta.tables package.

Thanks & Regards,

Nandini

Ovi
New Contributor III

Nice solution @Nandini N!

Thanks a lot for pointing it out to me!

Have a nice day,

Ovi

NandiniN
Valued Contributor III

Kudos @Ovidiu Eremia, it would really help us if you could mark this as the best answer 🙂