I have 50k+ Parquet files in Azure Data Lake, and I have a mount point for them. I need to read all of the files and load them into a single DataFrame, around 2 billion records in total. Not every file has every column, the column order may differ, and the column data types may differ. I have tried mergeSchema, inferSchema, and a custom schema with all columns as string; nothing is working. Finally I decided to list all the files and iterate over them, reading one by one. Is this fine, or is there a better solution?
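For context, the mergeSchema attempt looked roughly like this (a minimal sketch; "datalake_path_here" is just a placeholder for my actual mount path):

merged_df = (
    spark.read
        .option("mergeSchema", "true")   # ask Spark to reconcile differing Parquet schemas by column name
        .parquet("datalake_path_here")   # placeholder for the real mount point path
)

None of those variants worked, so here is the one-file-at-a-time version I ended up with: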
from functools import reduce

from pyspark.sql.types import StructType, StructField, StringType

# Explicit all-string schema so every file is read with the same set of columns
schema = StructType([
    StructField("COL1", StringType(), nullable=True),
    StructField("COL2", StringType(), nullable=True),
    StructField("COL3", StringType(), nullable=True),
    StructField("COL4", StringType(), nullable=True),
])

# List every file under the mount point (path is a placeholder)
files = [file.path for file in dbutils.fs.ls("datalake_path_here")]

dfs = []

def load_data(file_path):
    # Read a single Parquet file with the explicit schema
    return spark.read.format("parquet").schema(schema).load(file_path)

for file_path in files:
    df = load_data(file_path)
    dfs.append(df)

# Union all per-file DataFrames; schemas match because the same schema is applied to every read
final_df = reduce(lambda df1, df2: df1.union(df2), dfs)