Issues reading JSON files with Databricks vs. OSS PySpark
07-24-2024 06:38 AM
Hi Everyone,
I'm currently developing an application in which I read JSON files with a nested structure. I developed my code locally on my laptop using the open-source version of PySpark (3.5.1), with code similar to this:
Sample schema and reader:
from pyspark.sql.types import StructType, StructField, ArrayType, StringType

schema = StructType([
    StructField('DATA', StructType([
        StructField('-ext_end', StringType(), True),
        StructField('-ext_start', StringType(), True),
        StructField('-xml_creation_date', StringType(), True),
        StructField('FILE', ArrayType(StructType([
            StructField('F', StructType([
                StructField('ROW', StructType([StructField('F1', StringType(), True)]), True)
            ]), True),
            StructField('G', StructType([
                StructField('ROW', StructType([
                    StructField('G1', StringType(), True),
                    StructField('G2', StringType(), True),
                    StructField('G3', StringType(), True),
                    StructField('G4', StringType(), True)
                ]), True)
            ]), True)
        ]), True), True)
    ]), True)
])

# JSON streaming reader
df = spark.readStream.json(path="input", schema=schema, multiLine=True)
Test scenario:
1. The input files are sometimes incomplete, e.g. the F struct field may be empty; with schema inference this results in F being inferred as a string with value null.
-> When reading this incomplete data with the OSS version of Spark, the schema is applied correctly and, where the file data is incomplete as described above, the fields are populated with default values, i.e. F1 is populated with null.
However, executing the same code on Databricks results in the whole record being null.
Sample outputs:
OSS PySpark:
|DATA|
|{"-ext_end":"sample", "-ext_start":"sample", ...}|
Databricks:
|DATA|
|null|
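For reference, here is a minimal self-contained sketch of the scenario (this uses a batch spark.read.json instead of readStream so the output can be displayed directly; the file path and record contents are hypothetical examples of an "incomplete" input, and the schema and spark objects are assumed from the snippet above):

import json, os, tempfile

# Hypothetical incomplete record: FILE[0].F is empty/null
incomplete_record = {
    "DATA": {
        "-ext_end": "sample",
        "-ext_start": "sample",
        "-xml_creation_date": "2024-07-24",
        "FILE": [
            {
                "F": None,
                "G": {"ROW": {"G1": "g1", "G2": "g2", "G3": "g3", "G4": "g4"}}
            }
        ]
    }
}

path = os.path.join(tempfile.mkdtemp(), "incomplete.json")
with open(path, "w") as f:
    json.dump(incomplete_record, f)

# Batch read used here only so the result can be shown directly
df = spark.read.json(path, schema=schema, multiLine=True)
df.show(truncate=False)
# OSS PySpark 3.5.1: DATA is populated and DATA.FILE[0].F is null
# Databricks (observed): the whole DATA column comes back null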
Is there a way to replicate the OSS PySpark behavior on Databricks? What am I missing here?
I hope someone can point me in the right direction. Thanks!
Labels: Spark
07-24-2024 02:42 PM
Hi @aalanis, I'd like to try replicating your scenario. Do you mind sharing a sample file so I can test it locally?
Raphael Balogo
Sr. Technical Solutions Engineer
Databricks
07-24-2024 07:03 PM
Hi, I'd like to try to reproduce the scenario and find a solution. Would you mind sharing a sample file?
07-24-2024 07:14 PM
Please try extracting the specific fields you need from the JSON (exploding the nested structure) and handling optional nested fields explicitly, e.g. when(col("file.F").isNull(), None).otherwise(col("file.F.ROW.F1")).alias("F1") - a sketch of this approach is shown below.
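A minimal sketch of that suggestion, assuming the schema and reader from the original post (the df variable, the alias names, and the choice of explode_outer are illustrative, not a definitive implementation):

from pyspark.sql.functions import col, explode_outer, when

# Assumes `df` was read with the schema from the original post.
# explode_outer keeps rows even when DATA.FILE is null or empty.
flattened = (
    df.select(explode_outer(col("DATA.FILE")).alias("file"))
      .select(
          # Guard the optional nested structs so a missing F/G yields null
          when(col("file.F").isNull(), None).otherwise(col("file.F.ROW.F1")).alias("F1"),
          when(col("file.G").isNull(), None).otherwise(col("file.G.ROW.G1")).alias("G1"),
      )
)

With the streaming reader you would attach a writeStream sink to inspect the result; with a batch spark.read.json the same select can be displayed directly for testing.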

