01-24-2024 09:59 AM
Hello, I've searched around for a while and didn't find a similar question here or elsewhere, so I thought I'd ask...
I'm assessing the storage/access efficiency of Struct type columns in Delta tables. I want to know more about how Databricks stores Struct type fields. Can an SME add some details?
Example question I'm looking at: suppose I add an int field with low cardinality to a Struct column... in a columnar database this would be stored/accessed efficiently, I believe... so would it also be stored/accessed efficiently as a field in a Struct column?
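For concreteness, here's roughly the kind of thing I mean (just a sketch; the table name and field names are placeholders, and `spark` is the notebook's SparkSession):

from pyspark.sql import functions as F

# A struct column "attrs" containing a low-cardinality int field "status_code" (placeholder names)
df = spark.range(1_000_000).select(
    F.col("id"),
    F.struct(
        (F.col("id") % 5).cast("int").alias("status_code"),  # only 5 distinct values
        F.concat(F.lit("name_"), F.col("id").cast("string")).alias("name"),
    ).alias("attrs"),
)
df.write.format("delta").mode("overwrite").saveAsTable("main.default.struct_test")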
Note: I did find a Databricks page describing (maybe) how Apache Arrow is used in Databricks Runtime 14+ (link below), but it referenced use in UDFs... I'm using Structs in vanilla Delta tables and figured that was significantly different.
Accepted Solutions
11-26-2024 10:39 AM - edited 11-26-2024 10:43 AM
Delta Lake uses Apache Parquet as the underlying format for its data files.
Spark structs are encoded in Parquet as group schema elements that wrap standard leaf types; each leaf field inside the struct is stored as its own column chunk, just like a top-level column. What this means is that storage and access characteristics should be identical when interacting with, to take your example, an integer column at the top level of the schema versus an integer field inside a struct: things like encoding and compression work the same for these two fields.
You can use tools like PyArrow to do a deeper dive into how data is encoded in Parquet. Here is some sample code that reads the Parquet footer and prints the schema in a human-readable format:
import pyarrow.parquet as pq

# Point this at one of the Parquet data files inside your Delta table's directory
file_path = "/your/path/here/file.zstd.parquet"
parquet_file = pq.ParquetFile(file_path)

# The footer schema shows struct fields as nested groups of ordinary leaf columns
schema = parquet_file.schema
print(schema)
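If you want to go one level deeper, you can also compare the column-chunk metadata (compression, encodings, sizes) for a top-level integer column against an integer field nested in a struct. This is a minimal sketch; the column names ("id" and "attrs.status_code") are placeholders for whatever your table actually contains:

import pyarrow.parquet as pq

file_path = "/your/path/here/file.zstd.parquet"
meta = pq.ParquetFile(file_path).metadata

# Nested struct fields show up as dotted paths, e.g. "attrs.status_code"
rg = meta.row_group(0)
for i in range(rg.num_columns):
    col = rg.column(i)
    if col.path_in_schema in ("id", "attrs.status_code"):  # placeholder column names
        print(
            col.path_in_schema,
            col.compression,
            col.encodings,
            col.total_compressed_size,
            col.total_uncompressed_size,
        )

In both cases you should see the same kinds of encodings and compression applied, which is the point of the answer above.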
a month ago
Thank you very much for the thoughtful response. Please excuse my belated feedback and thanks!

