parquet file to include partitioned column in file

guruv
New Contributor III

Hi,

I have a daily scheduled job that processes data and writes it as parquet files in a specific folder structure like root_folder/{CountryCode}/parquetfiles, where each day the job writes new data for a country code under that country code's folder.

I am trying to achieve this by using

dataframe.write.partitionBy("countryCode").parquet(root_Folder)

This creates a folder structure like:

root_folder/countryCode=x/part1-snappy.parquet

root_folder/countryCode=x/part2-snappy.parquet

root_folder/countryCode=y/part1-snappy.parquet

but the countryCode column is removed from the parquet files.

In my case the parquet files are read by external consumers, and they expect the countryCode column to be present in the files.

Is there an option to have the column both in the file and in the folder path?


4 REPLIES

Hubert-Dudek
Esteemed Contributor III

Most external consumers will read the partition as a column when they are properly configured (for example, Azure Data Factory or Power BI).

The only way around it is to duplicate the column under another name (you cannot use the same name, as it will generate conflicts in appends and reads from multiple clients):

.withColumn("foo_", col("foo"))
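
For example, a minimal sketch of that workaround applied to the original write (the duplicated column name countryCodeValue is only an assumption; any name that does not clash with the partition column works):

from pyspark.sql.functions import col

# Keep a copy of the value inside the data files by duplicating the
# partition column under a different name before writing.
(dataframe
    .withColumn("countryCodeValue", col("countryCode"))
    .write
    .partitionBy("countryCode")
    .parquet(root_Folder))

External consumers then see countryCodeValue inside every file, while countryCode still drives the folder layout.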

guruv
New Contributor III

Thanks for the reply. Can you suggest how consumers reading the files with custom code can get the partition column?

Presently the consumer gets a list of all files in the folder, filters out the files already processed, and then reads each new file with:

spark.read.format('parquet').load(filePath)

Hubert-Dudek
Esteemed Contributor III (accepted solution)
  • Please try adding .option("mergeSchema", "true").
  • In filePath, specify just the main top-level folder containing the partitions (the root folder of the parquet dataset).

Here is the official documentation on partition discovery: https://spark.apache.org/docs/2.3.1/sql-programming-guide.html#partition-discovery
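
As a rough sketch of that suggestion (assuming root_folder is the top-level directory containing the countryCode=... partition folders):

# Load the whole partitioned dataset from the root folder; Spark's
# partition discovery adds countryCode back as a column.
df = (spark.read
        .format('parquet')
        .option('mergeSchema', 'true')
        .load(root_folder))

df.select('countryCode').distinct().show()

The consumer can then filter the resulting DataFrame on countryCode instead of listing and reading individual partition folders.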

guruv
New Contributor III

Thanks, I will try it and check back in case of any other issues.
