Schema Parsing issue when datatype of source field is mapped incorrect

MattM
New Contributor III

I have a complex JSON file which has a massive struct column. We regularly have issues when we parse this JSON file by defining a case class to extract the fields from the schema. The issue with this approach is that if the data type of one field within the case class is incorrect, none of the following fields in that class populate in the target. Hope the problem makes sense.

Is there an alternate way? One option I can think of is to extract all the fields as strings from the JSON file and then do the data type conversion, but that adds an extra step. A better solution would be appreciated. Thanks.

1 ACCEPTED SOLUTION (Hubert-Dudek's reply below)

5 REPLIES

Kaniz
Community Manager

Hi @Matt M​ ! My name is Kaniz, and I'm the technical moderator here. Great to meet you, and thanks for your question! Let's see if your peers in the community have an answer to your question first. Or else I will get back to you soon. Thanks.

Hubert-Dudek
Esteemed Contributor III

I think the solution for your problem is to use an Auto Loader stream to read the data, since it supports schema hints. If you don't want to run it as a continuous stream, it's enough to specify a trigger-once setting (so once all the JSON files are loaded, the job will finish).

Here is the documentation on loading JSON:

https://docs.databricks.com/spark/latest/structured-streaming/auto-loader-json.html

Then you can specify schema hints:

https://docs.databricks.com/spark/latest/structured-streaming/auto-loader-schema.html#schema-hints

Additionally, you can experiment with different schema evolution options for the stream.
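To make the suggestion concrete, the Auto Loader options could look something like the sketch below. The paths and hinted columns are hypothetical, and `cloudFiles` itself only runs on Databricks, so the stream wiring is shown as a comment:

```python
# Hypothetical Auto Loader settings: infer the JSON schema, but pin
# the types that must not drift via schema hints.
autoloader_options = {
    "cloudFiles.format": "json",
    # Where Auto Loader persists the inferred/evolved schema between runs.
    "cloudFiles.schemaLocation": "/mnt/checkpoints/orders/_schema",
    # Hint only the fields whose types matter; the rest are inferred.
    "cloudFiles.schemaHints": "amount DOUBLE, nested.flag BOOLEAN",
    # Rescue (rather than fail on) records that do not match the schema.
    "cloudFiles.schemaEvolutionMode": "rescue",
}

# On Databricks this would be wired up roughly as:
# (spark.readStream.format("cloudFiles")
#        .options(**autoloader_options)
#        .load("/mnt/raw/orders")
#        .writeStream
#        .trigger(once=True)  # finish once all current files are loaded
#        .option("checkpointLocation", "/mnt/checkpoints/orders")
#        .start("/mnt/bronze/orders"))
```

With hints in place, a wrong guess by schema inference on one field no longer corrupts the surrounding columns, which is exactly the failure mode described in the question.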

MattM
New Contributor III

Thanks Hubert! I did have Auto Loader as one of the candidate solutions, and I think this is a viable option to make sure I do not have schema parsing issues.

Anonymous
Not applicable

Hey there, @Matt M​ - If @Hubert Dudek​'s response solved the issue, would you be happy to mark his answer as best? It helps other members find the solution more quickly.

MattM
New Contributor III

Yes, thanks.
