Delta Lake uses Apache Parquet as the underlying format for its data files.
Spark structs are encoded in the Parquet schema as group SchemaElements, which simply wrap the standard leaf types; each leaf field inside the group is stored as its own column chunk, exactly like a top-level column. This means that storage and access characteristics should be identical when interacting with, to take your example, an integer column at the top level of a schema versus an integer field inside a struct: things like encoding and compression are the same for these two fields.
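A quick way to check this yourself is to write a small file containing the same integer values both as a top-level column and inside a struct, then compare the column-chunk metadata for the two leaf columns. This is just a sketch: the file name example.parquet and the field names top_level, nested, and inner are made up for illustration.

import pyarrow as pa
import pyarrow.parquet as pq

# Same integer values stored flat and nested inside a struct.
table = pa.table({
    "top_level": pa.array([1, 2, 3], type=pa.int64()),
    "nested": pa.array([{"inner": 1}, {"inner": 2}, {"inner": 3}],
                       type=pa.struct([("inner", pa.int64())])),
})
pq.write_table(table, "example.parquet", compression="zstd")

# Compare the column-chunk metadata: the struct leaf appears as its own
# column ("nested.inner") with the same encodings and compression as
# the flat column ("top_level").
meta = pq.read_metadata("example.parquet")
rg = meta.row_group(0)
for i in range(rg.num_columns):
    col = rg.column(i)
    print(col.path_in_schema, col.encodings, col.compression)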
You can use tools like PyArrow to dig deeper into how data is encoded in Parquet. Here is some sample code that reads the schema from the Parquet footer and prints it in a human-readable format:
import pyarrow.parquet as pq

# Opening the file only reads the footer metadata, not the data pages.
file_path = "/your/path/here/file.zstd.parquet"
parquet_file = pq.ParquetFile(file_path)

# The schema stored in the footer, printed in a human-readable layout.
schema = parquet_file.schema
print(schema)
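Beyond the schema, the same footer exposes per-row-group and per-column details. As a rough sketch building on the parquet_file object above (note that the statistics attribute can be None if the writer did not record stats):

# File-level summary: number of row groups, total rows, created-by string.
meta = parquet_file.metadata
print(meta)

# Drill into the first column chunk of the first row group to see how
# that column was actually encoded and compressed on disk.
col = meta.row_group(0).column(0)
print(col.encodings, col.compression)
print(col.statistics)  # min/max and null count, if recorded by the writer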