Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Issues reading json files with databricks vs oss pyspark

aalanis
New Contributor II

Hi Everyone, 

I'm currently developing an application that reads JSON files with a nested structure. I developed the code locally on my laptop with the open-source version of PySpark (3.5.1), using code similar to this:

Sample schema:

schema = StructType([
    StructField('DATA', StructType([
        StructField('-ext_end', StringType(), True),
        StructField('-ext_start', StringType(), True),
        StructField('-xml_creation_date', StringType(), True),
        StructField('FILE', ArrayType(StructType([
            StructField('F', StructType([
                StructField('ROW', StructType([
                    StructField('F1', StringType(), True)
                ]), True)
            ]), True),
            StructField('G', StructType([
                StructField('ROW', StructType([
                    StructField('G1', StringType(), True),
                    StructField('G2', StringType(), True),
                    StructField('G3', StringType(), True),
                    StructField('G4', StringType(), True)
                ]), True)
            ]), True)
        ]), True), True)
    ]), True)
])

 

# JSON stream reader
spark.readStream.json(path="input", schema=schema, multiLine=True)

Test scenario:

1. The input files are sometimes incomplete, e.g. the F StructField may be empty; if we load such a file with schema inference, F is inferred as a string with value null.

-> When reading this incomplete data on the OSS version of Spark, the schema is applied correctly, and the missing fields are populated with default values; this means F1 is populated with null.

However, running the same code on Databricks results in the entire row being null.

Sample outputs:

OSS PySpark:
|DATA|
|{"-ext_end":"sample", "-ext_start":"sample", ...}|

Databricks:
|DATA|
|null|

 

Is there a way to replicate the OSS PySpark behavior on Databricks? What am I missing here?

I hope someone can point me in the right direction, Thanks!

 

3 REPLIES

raphaelblg
Databricks Employee

Hi @aalanis, I'd like to try replicating your scenario. Do you mind sharing a sample file so I can test it locally?

Best regards,

Raphael Balogo
Sr. Technical Solutions Engineer
Databricks

sushmithajk
New Contributor II

Hi, I'd like to try the scenario and find a solution. Would you mind sharing a sample file?

 

Please try an extraction approach: explode the JSON, extract the specific fields you need, and handle optional nested fields with when/otherwise for the special scenarios, e.g. when(col("file.F").isNull(), None).otherwise(col("file.F.ROW.F1")).alias("F1")
