Hi Everyone,
I'm currently developing an application that reads JSON files with a nested structure. I developed the code locally on my laptop with the open-source version of PySpark (3.5.1), using code similar to this:
Sample schema:

from pyspark.sql.types import ArrayType, StringType, StructField, StructType

schema = StructType([
    StructField('DATA', StructType([
        StructField('-ext_end', StringType(), True),
        StructField('-ext_start', StringType(), True),
        StructField('-xml_creation_date', StringType(), True),
        StructField('FILE', ArrayType(StructType([
            StructField('F', StructType([
                StructField('ROW', StructType([
                    StructField('F1', StringType(), True)
                ]), True)
            ]), True),
            StructField('G', StructType([
                StructField('ROW', StructType([
                    StructField('G1', StringType(), True),
                    StructField('G2', StringType(), True),
                    StructField('G3', StringType(), True),
                    StructField('G4', StringType(), True)
                ]), True)
            ]), True)
        ]), True), True)
    ]), True)
])
# JSON reader
df = spark.readStream.json(path="input", schema=schema, multiLine=True)
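For testing I inspect the stream with a console sink, roughly like this (the trigger choice is just for illustration):

query = (
    df.writeStream
    .format("console")
    .option("truncate", "false")
    .trigger(availableNow=True)
    .start()
)
query.awaitTermination()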
Test scenario:
1. The input files are sometimes incomplete, e.g. the F StructField can be empty; if such a file is loaded with schema inference, F is inferred as a plain string with value null (an example file is sketched below).
-> When this incomplete data is read on the OSS version of Spark, the schema is applied correctly and the missing fields fall back to their defaults, i.e. F1 is populated with null.
However, executing the same code on Databricks results in the entire DATA struct being null.
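For reference, here is a minimal local repro of what I mean by "incomplete". The file content is my guess at the failing shape (field values are made up, modeled on the schema above), fed through from_json so the snippet is self-contained:

from pyspark.sql import functions as sf

# Hypothetical incomplete record: FILE is present but its F struct is empty.
incomplete_json = (
    '{"DATA": {"-ext_end": "sample", "-ext_start": "sample", '
    '"-xml_creation_date": "2024-01-01", '
    '"FILE": [{"F": null, '
    '"G": {"ROW": {"G1": "a", "G2": "b", "G3": "c", "G4": "d"}}}]}}'
)

df = spark.createDataFrame([(incomplete_json,)], "value string")
df.select(sf.from_json("value", schema).alias("row")).show(truncate=False)
# On OSS 3.5.1 this prints a populated DATA struct with F/F1 = null;
# on Databricks the equivalent file read yields DATA = null (see below).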
Sample outputs:

OSS PySpark:
|DATA|
|{"-ext_end": "sample", "-ext_start": "sample", ...}|

Databricks:
|DATA|
|null|
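To see why the rows come back as null on Databricks, one diagnostic I can run is to re-read the same files in batch mode with an explicit corrupt-record column, so records the parser rejects surface as raw text instead of silent nulls (a sketch; the _corrupt_record column name is my own choice):

from pyspark.sql.types import StringType, StructField, StructType

# Copy of the schema plus a top-level corrupt-record column.
debug_schema = StructType(schema.fields + [
    StructField("_corrupt_record", StringType(), True)
])

debug_df = (
    spark.read
    .schema(debug_schema)
    .option("multiLine", "true")
    .option("mode", "PERMISSIVE")
    .option("columnNameOfCorruptRecord", "_corrupt_record")
    .json("input")
    .cache()  # Spark disallows querying only the corrupt column on raw reads
)

debug_df.filter("DATA IS NULL").select("_corrupt_record").show(truncate=False)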
Is there a way to replicate the OSS PySpark behavior on Databricks? What am I missing here? The only workaround I've sketched so far is below, but I'd prefer to keep the full schema at read time.
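Workaround sketch (not yet verified on Databricks): declare the fragile FILE branch as a plain string at ingest time and parse it afterwards with from_json, so a malformed value should only null that one column instead of the whole record:

from pyspark.sql import functions as sf
from pyspark.sql.types import ArrayType, StringType, StructField, StructType

# Element schema of FILE, pulled out of the full schema above.
file_schema = ArrayType(StructType([
    StructField('F', StructType([
        StructField('ROW', StructType([
            StructField('F1', StringType(), True)
        ]), True)
    ]), True),
    StructField('G', StructType([
        StructField('ROW', StructType([
            StructField('G1', StringType(), True),
            StructField('G2', StringType(), True),
            StructField('G3', StringType(), True),
            StructField('G4', StringType(), True)
        ]), True)
    ]), True)
]), True)

# Ingest schema keeps FILE as a raw JSON string; Spark returns the original
# JSON text when a StringType field holds an object or array.
ingest_schema = StructType([
    StructField('DATA', StructType([
        StructField('-ext_end', StringType(), True),
        StructField('-ext_start', StringType(), True),
        StructField('-xml_creation_date', StringType(), True),
        StructField('FILE', StringType(), True),
    ]), True)
])

raw = spark.readStream.json(path="input", schema=ingest_schema, multiLine=True)
parsed = raw.withColumn("FILE_parsed",
                        sf.from_json(sf.col("DATA.FILE"), file_schema))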
I hope someone can point me in the right direction. Thanks!