05-09-2023 10:50 AM
I have data on S3 and I'm using Auto Loader to load it. My JSON docs have fields that are arrays of structs.
When I don't specify a schema, everything is stored as strings; even the arrays of structs come through as a single string blob, which makes them difficult to process with a PySpark DataFrame.
When I do specify a schema for Auto Loader, the whole table comes back null.
Has anyone faced a similar issue?
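For context, a common cause of the all-null result is a schema whose field names or nesting don't exactly match the JSON. A minimal sketch of declaring an array-of-struct column explicitly and passing it to the stream (the column and field names here are hypothetical placeholders, not from my actual data):

from pyspark.sql.types import StructType, StructField, StringType, ArrayType

# Hypothetical layout: a top-level "id" column plus an "items" column that is
# an array of structs; replace names/types with the real JSON fields.
schema = StructType([
    StructField("id", StringType(), True),
    StructField("items", ArrayType(StructType([
        StructField("name", StringType(), True),
        StructField("value", StringType(), True),
    ])), True),
])

df = (
    spark.readStream
    .format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "<path_to_schema_checkpoints_location>")
    .schema(schema)  # field names must match the JSON keys exactly
    .load("<path_to_source_data>")
)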
05-10-2023 06:21 AM
Let's try the following:

# Auto Loader options: infer column types instead of defaulting everything to string
cloudfile_options = {
    "cloudFiles.format": "json",
    "cloudFiles.schemaLocation": "<path_to_schema_checkpoints_location>",
    "cloudFiles.inferColumnTypes": "true",
}

df = spark.readStream.format("cloudFiles").options(**cloudfile_options).load("<path_to_source_data>")
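If inference alone still leaves the nested columns as strings, a schema hint for the array-of-struct column can nudge Auto Loader toward the right type. A hedged sketch, where the "items" column and its fields are placeholders for your actual structure:

cloudfile_options = {
    "cloudFiles.format": "json",
    "cloudFiles.schemaLocation": "<path_to_schema_checkpoints_location>",
    "cloudFiles.inferColumnTypes": "true",
    # Hypothetical hint: force "items" to be an array of structs instead of a string blob
    "cloudFiles.schemaHints": "items ARRAY<STRUCT<name: STRING, value: STRING>>",
}

df = spark.readStream.format("cloudFiles").options(**cloudfile_options).load("<path_to_source_data>")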
05-20-2023 10:15 PM
Hi @Rishitha Reddy
Hope everything is going great.
Just wanted to check in to see whether you were able to resolve your issue. If so, would you mind marking an answer as best so that other members can find the solution more quickly? If not, please let us know so we can help.
Cheers!