12-27-2021 10:22 PM
Hi,
I have a daily scheduled job that processes data and writes it as parquet files in a specific folder structure like root_folder/{CountryCode}/parquetfiles, where each day the job writes new data under the folder for that country code.
I am trying to achieve this with
dataframe.write.partitionBy("countryCode").parquet(root_Folder)
This creates a folder structure like
root_folder/countryCode=x/part1-snappy.parquet
root_folder/countryCode=x/part2-snappy.parquet
root_folder/countryCode=y/part1-snappy.parquet
but the countryCode column is removed from the parquet files.
In my case the parquet files are read by external consumers, and they expect the countryCode column in the file.
Is there an option to keep the column in the file as well as in the folder path?
- Labels: Column, Databricks SQL, File, Hi, Parquet
Accepted Solutions
12-29-2021 01:58 AM
- Please try adding .option("mergeSchema", "true") when reading.
- For filePath, just specify the main top-level folder with the partitions (the root folder of the parquet dataset).
Here is the official documentation on partition discovery: https://spark.apache.org/docs/2.3.1/sql-programming-guide.html#partition-discovery
12-28-2021 02:45 AM
Most external consumers will read the partition as a column when they are properly configured (for example, Azure Data Factory or Power BI).
The only way around it is to duplicate the column under another name (you cannot use the same name, as it would generate conflicts on appends and reads from many clients):
.withColumn("foo_", col("foo"))
12-28-2021 05:05 AM
Thanks for the reply. Can you suggest how consumers reading the files with custom code can get the partition column?
Presently the consumer gets a list of all files in the folder, filters out the files already processed, and then reads each new file with
spark.read.format('parquet').load(filePath)
12-29-2021 11:28 AM
Thanks, will try and check back in case of any other issues.